Data Engineer

We're looking for a skilled Data Engineer to join our team. As a Data Engineer, you'll play a crucial role in building, maintaining, and optimizing the data pipelines that support our genealogy data products. You'll work closely with our team to design, build, and maintain scalable, reusable pipelines for processing large genealogy data sets. You'll analyze source data, collaborate with stakeholders, and propose improvements to keep our data services reliable and efficient.

Key responsibilities include:

- Designing and building RESTful APIs to expose data and services to other teams and systems
- Writing production-ready code and following good engineering practices (testing, peer reviews, CI/CD)
- Troubleshooting issues and providing support to internal teams that depend on our data services

To succeed in this role, you'll need:

- 1-2 years of experience using Python in a production environment
- Hands-on experience with, or familiarity with, Airflow (DAGs, operators, scheduling)
- Experience working with Docker (or another OCI-compatible container framework)

Bonus skills include:

- Experience with AWS services such as S3 and IAM, and familiarity with core cloud infrastructure concepts
- Understanding of Kubernetes, or an interest in container orchestration
- Experience using GitHub Actions or similar CI/CD tools

We offer a dynamic work environment, opportunities for growth and development, and a competitive compensation package. If you're passionate about working with data and want to be part of a high-growth company, we'd love to hear from you.