At Apiture, our mission is to empower financial institutions to know and serve their clients with the care of a traditional community institution at the scale, speed, and efficiency required in today’s digital world. With more than 300 clients throughout the U.S., we deliver comprehensive online and mobile solutions that support banks and credit unions, ranging from small community financial institutions to new, innovative direct banks.


Location (Wilmington, NC; Austin, TX; Remote):
We have offices in Wilmington, NC, and Austin, TX. While some positions are office-based, we will also consider remote candidates depending on their time zone.



Responsibilities:

  • Design and implement efficient data pipelines to integrate data from a variety of internal and external sources into Apiture’s Data Warehouse.
  • Work with the Engineering Lead to define and enforce data warehouse standards that align with the larger data management guidelines in place.
  • Work with Data Scientists, Data Analysts, and Machine Learning Engineers to transform and refine data sets until they meet analysis goals.
  • Write code against external/partner APIs to connect to and ingest data from those platforms.
  • Build in-house APIs to expose data sets and data models to internal and external applications.
  • Identify and pursue opportunities to automate workflows and execute testing/validation strategies to maintain high standards of efficiency and data quality.
  • Build and maintain documentation around data sets, data classes, data flows, transformations, etc.
  • Work with the information security and compliance teams at Apiture to build, monitor, and enforce data cataloging, asset tracking, and privacy rules/metrics.
  • Provide input and feedback to support continuous improvement in data governance processes.



Requirements:

  • Bachelor's degree in Computer Science or equivalent experience.
  • 2+ years of hands-on experience developing data warehouse solutions involving large data sets.
  • Experience with programming languages such as Python, Java, or Scala.
  • Well versed in advanced SQL scripting.
  • Hands-on experience with platforms like Snowflake and Redshift.
  • Excellent understanding of ETL/ELT fundamentals and building efficient data pipelines.
  • Excellent verbal and written communication skills.


Nice To Have:

  • Experience with AWS technologies like Lambda, DMS, etc.
  • Experience with Data Management platforms like Domo, OpenTrust, etc.
  • Experience working with REST APIs, streaming APIs, or other data ingress techniques.
  • Experience with data engineering and monitoring for ML applications.
  • Exposure to test-driven development and automated testing frameworks.