Who We Are

MAS Global is a purpose-driven digital engineering partner, delivering scalable, high-impact software solutions for global enterprises and startups alike. With a nearshore, agile delivery model and a focus on quality, transparency, and innovation, we help our clients drive digital transformation with speed and impact.

Who You Are

You are a seasoned Data Engineer or Data Architect with deep expertise in designing, building, and optimizing scalable data pipelines, architectures, and systems. You're passionate about data integrity, reliability, and enabling advanced analytics. You enjoy working cross-functionally and thrive in a fast-paced, agile environment.

Your Responsibilities

- Design and implement modern data architectures (cloud and hybrid), focusing on scalability, performance, and reliability.
- Develop and maintain robust data pipelines for ETL/ELT processes across structured and unstructured data sources.
- Define and enforce best practices in data modeling, data governance, and data quality.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions.
- Leverage cloud-native services (e.g., AWS, Azure, GCP) to build data platforms and tools.
- Contribute to the development of enterprise data strategies, including architecture roadmaps and migration plans.
- Ensure compliance with security, privacy, and regulatory requirements.

Required Skills and Experience

- 6+ years of experience in data engineering, with at least 2 years in a data architecture role.
- Strong expertise in designing data lakes, data warehouses, and real-time streaming architectures.
- Hands-on experience with cloud platforms (AWS preferred; Azure or GCP also valued) and tools such as S3, Redshift, Glue, Lambda, EMR, and Snowflake.
- Proficiency in SQL, Python, and Spark (or similar distributed processing frameworks).
- Deep understanding of data modeling, data warehousing methodologies (e.g., Kimball, Data Vault), and data governance frameworks.
- Experience with orchestration and transformation tools (e.g., Airflow, dbt) and CI/CD for data.
- Familiarity with containerization (Docker) and infrastructure as code (e.g., Terraform) is a plus.
- Strong communication and stakeholder management skills.

Nice to Have

- Knowledge of MLOps or experience supporting machine learning pipelines.
- Experience with streaming platforms such as Kafka or Kinesis.
- Previous consulting or client-facing experience.
- Certifications in cloud or data engineering (e.g., AWS Data Analytics Specialty, Google Professional Data Engineer).