Machine Learning Engineer Opportunity

We are seeking a skilled Lead Machine Learning Engineer to join our remote team. As a key member of the engineering team, you will contribute to the design, development, and operation of our machine learning pipeline. The ideal candidate has a strong background in programming languages such as Python, expertise in SQL and data manipulation, and experience with cloud platforms such as AWS, GCP, or Azure. A minimum of 5 years' experience in software development is required, along with a proven history of leading and mentoring teams and demonstrable MLOps experience on platforms such as SageMaker, Vertex AI, or Azure ML.

Key Responsibilities:
- Contribute to the design, development, and operation of the machine learning pipeline based on industry best practices.
- Design and implement scalable data preparation pipelines, collaborating with data scientists to translate predictive models into production.
- Establish and configure pipelines for various projects, ensuring seamless integration with existing infrastructure.
- Continuously identify technical risks and gaps, and formulate mitigation strategies to ensure optimal pipeline performance.

Requirements:
- A minimum of 5 years' experience with a programming language, preferably Python, along with strong knowledge of SQL.
- Proven history of leading and mentoring an engineering team.
- Demonstrable MLOps experience (SageMaker, Vertex AI, or Azure ML).
- Intermediate proficiency in data science, data engineering, and DevOps engineering.
- At least one project delivered to production in an MLE role.
- Expertise in engineering best practices.
- Practical experience implementing data products using the Apache Spark ecosystem or equivalent technologies.
- Familiarity with Big Data technologies (e.g., Hadoop, Spark, Kafka, Cassandra, GCP BigQuery, AWS Redshift, Apache Beam).
- Proficiency with automated data pipeline and workflow management tools such as Airflow or Argo Workflows.
- Experience with different data processing paradigms (batch, micro-batch, streaming).
- Practical experience with a major cloud provider such as AWS, GCP, or Azure.
- Experience integrating ML models into complex data-driven systems.
- Data science experience with TensorFlow, PyTorch, or XGBoost, plus libraries such as NumPy, SciPy, scikit-learn, pandas, Keras, spaCy, and Hugging Face Transformers.
- Experience with various types of databases (relational, NoSQL, graph, document, columnar, time series, etc.).
- Fluency in English (B2+ level).

Nice to Have:
- Practical experience with Databricks and MLOps tools such as MLflow, Kubeflow, or TensorFlow Extended (TFX).
- Experience with performance testing tools such as JMeter or LoadRunner.
- Knowledge of containerization technologies like Docker.