Job Overview

We're on a mission to create a data-driven culture where everyone feels empowered to be their best. As a key member of our team, you will play a pivotal role in designing and developing data pipelines that drive business growth.

Key Responsibilities:
- Design and implement robust data extraction architectures and infrastructure for seamless data sourcing.
- Merge newly discovered data with existing data sources to unlock new insights.
- Identify opportunities to automate processes and improve efficiency.
- Develop optimal data pipeline architectures that meet the needs of our organization.
- Assemble complex data sets that meet functional requirements.
- Collaborate with cross-functional teams to resolve related technical issues.
- Ensure data integrity and security across geographic boundaries.

Requirements:

To succeed in this role, you will need:
- At least 3 years of experience in data engineering.
- Strong knowledge of ETL and data processing techniques.
- Proficiency with relational databases (e.g., PostgreSQL, MySQL) and relational storage concepts, including the ability to write and optimize SQL queries.
- Experience with Python and Apache Airflow.
- Experience working with services in the Amazon Web Services (AWS) data ecosystem.
- An organized and structured way of working.
- The ability to commit to and deliver on realistic deadlines.
- Excellent interpersonal and relationship-building skills.
- A strong agile mindset, with the ability to iterate quickly and give early feedback.

Benefits:

We offer:
- Competitive salary package, share plan, referral bonus.
- Career coaching, global career opportunities, non-linear career paths, and internal development programs for management and technical leadership.
- Complex projects, rotations, internal tech communities, training, certifications, coaching, online learning platform subscriptions, pass-it-on sessions, workshops, and conferences.
- Hybrid work, flexible working hours, and an employee assistance program.
- Global internal wellbeing program and access to wellbeing apps.