At Apiture, our mission is to empower financial institutions to know and serve their clients with the care of a traditional community institution at the scale, speed, and efficiency required in today’s digital world. With more than 300 clients throughout the U.S., we deliver comprehensive online and mobile solutions that support banks and credit unions, ranging from small community financial institutions to new, innovative direct banks.


Position Summary:  

Reporting to the Data Engineering Manager, the Data Architect will be a hands-on team member who drives technology standards for data intelligence. You will work with a team of highly talented data engineers to develop practical, scalable data reporting and analytics solutions. 


Location (Wilmington, NC; Austin, TX; or Remote):

We have offices in Wilmington, NC, and Austin, TX. While some positions need to be office based, we will consider remote candidates depending on their time zone.


Responsibilities:  

  • Serve as a core technical resource for the data intelligence platform
  • Design, develop, and execute data solutions to address business issues  
  • Design and implement efficient data pipelines to integrate data from a variety of internal and external sources into the data warehouse 
  • Design APIs to expose data sets and data models to internal/external applications 
  • Evaluate and recommend tools, technologies, and processes to ensure the highest quality data platform  
  • Review design documents with the data engineering team and additional stakeholders
  • Work with the information security and compliance teams to design solutions to monitor and enforce data cataloging, asset tracking, and privacy rules/metrics
  • Define and enforce data warehouse standards that align with the larger data management guidelines in place
  • Collaborate with the product development and data science teams to produce cutting edge data solutions 
  • Collaborate with other Apiture architects to ensure the data platform design is cohesive with other solutions
  • Interpret business requirements to articulate the business needs to be addressed
  • Troubleshoot code level problems quickly and efficiently
  • Mentor data engineers in data pipeline development and execution
  • Maintain keen awareness of upcoming technical trends and internal roadmaps 


Requirements:  

  • Bachelor’s in Computer Science or equivalent work experience. 
  • Passionate self-starter who is visionary, detail-oriented, and an enthusiastic team player 
  • 6+ years of experience in data engineering, data analytics, and/or developing large database systems 
  • 4+ years of experience with programming languages such as Python, Java, Scala, etc. 
  • 2+ years of experience with system design and architecture 
  • Experience in designing and building modern services on AWS
  • Well-versed in advanced SQL scripting
  • Hands-on experience with data platforms like Snowflake or Redshift
  • Understanding of how to work with structured, unstructured, and semi-structured data
  • Excellent understanding of ETL/ELT fundamentals and building efficient data pipelines
  • Excellent verbal and written communication skills 


Nice To Have:  

  • Experience writing code to automate data management tasks  
  • Experience with AWS technologies like Lambda, DMS, etc. 
  • Experience with Data Management platforms like OneTrust, OpenRaven, etc.
  • Experience with Data Visualization tools like Domo, Tableau, Power BI, etc.
  • Experience with REST APIs, Streaming APIs, or other data ingress techniques
  • Experience with data engineering and monitoring for ML applications
  • Exposure to test-driven development and automated testing frameworks
  • Experience with the Atlassian suite of tools, including Confluence, Jira, and Bitbucket
  • Fintech experience is a huge plus