Senior Data Engineer (Microsoft Azure Fabric experience REQUIRED)

Overview
We are seeking a proactive Senior Data Engineer with expertise in Azure Fabric and the Microsoft stack. The primary goal of this role is to design, implement, and maintain cloud data solutions that collect, store, process, and transform large volumes of data into business insights for internal and customer-facing applications. The Senior Data Engineer will build and test data pipeline architectures within Azure Fabric environments and will identify, design, and implement internal process improvements to automate manual processes, optimize data delivery, and redesign infrastructure for greater scalability. The role focuses on data ingestion: extracting data from diverse sources and making it available for analytics and reporting using Azure Fabric, Python, Data Factory, and notebooks.

Essential Duties and Responsibilities
- Build systems for the collection and transformation of complex data sets for use in production systems.
- Design, develop, and maintain data pipelines for reporting, optimization, and data collection.
- Collaborate with engineers on building and maintaining back-end services.
- Implement data schema and data management improvements for scalability and performance.
- Develop data management solutions involving internal and external data sources.
- Identify, design, and implement data flow and process improvements, including redesigning data infrastructure for scalability, optimizing data delivery, and automating manual processes.
- Build data infrastructure for optimal extraction, transformation, and loading of data from various sources using Azure Fabric, SQL tools, and related technologies.
- Work on cloud-based big data automation and orchestration solutions such as Azure DevOps.
- Collaborate with stakeholders to understand user requirements.
- Participate in code reviews with team members.
- Lead and mentor other technology team members.
Knowledge, Skills, and Abilities
- Ability to prioritize tasks effectively.
- Problem-solving skills and a logical approach to work.
- Good organizational, communication, and interpersonal skills.
- Meticulous attention to detail.
- Experience building and optimizing data sets, data pipelines, and architectures.
- Ability to perform root cause analysis on data and processes to identify improvements.
- Strong analytical skills working with structured and unstructured data.
- Ability to build processes supporting data transformation, workload management, and metadata management.
- Understanding of the data lifecycle, data retention policies, and storage management techniques.

Qualifications and Experience
- Bachelor’s degree or equivalent in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 5+ years of industry experience in supply chain, distribution, logistics, procurement, CRM, or finance.
- 5+ years of experience with data warehouse/lake infrastructure using the Microsoft stack, including Data Factory, Delta Lake, Notebooks, Azure Cloud, etc.
- 5+ years of programming experience in Python.
- Experience with Microsoft Fabric and notebooks.
- Experience with the Medallion Architecture (bronze, silver, and gold layers).
- Understanding of relational, non-relational, and dimensional data modeling.
- Experience with ETL/ELT tools and data integration services.
- Knowledge of Azure Cloud management and computing.
- Knowledge of AWS, Matillion, and Informatica ETL tools is a plus.
- Experience with CI/CD tools such as Git and Azure DevOps.
- 3+ years of experience with cloud databases such as PostgreSQL, DynamoDB, Athena, and Redshift.
- Advanced knowledge of database security, backup/recovery, data integrity, performance tuning, and monitoring.

Seniority Level: Mid-Senior level
Employment Type: Contract
Job Function: Information Technology
Industries: Software Development