[U083] - SR. DATA ENGINEER

Flow RMS


**How to Apply**

To be considered for this position, you must record a short video explaining why you are excited about and qualified for this role. You can use a free tool like Loom to create your video. **Please send us a message on Indeed with a link to your video.**

**Flow RMS Overview**

Flow RMS is transforming the manufacturing sales rep and distributor industry with our AI-powered SaaS platform, optimizing sales operations and data-driven decision-making. We are developing a flexible, scalable, AI-enhanced reporting framework that enables dynamic filtering, aggregation, and visualization across multiple data entities.

We are looking for a Senior Data Engineer who is an expert in Python to lead our data architecture, streamline ETL processes, and integrate manufacturing systems into our platform. This is a critical role focused on building data pipelines, optimizing performance, and creating a self-service reporting suite that integrates seamlessly with our dashboards.

As a Senior Data Engineer, your primary responsibility will be designing, building, and optimizing data infrastructure using Python. You will work on high-performance data pipelines, enable complex reporting capabilities, and create dynamic data integrations between manufacturers' systems and Flow RMS. This role is ideal for someone who understands data at scale, can build efficient Python-based ETL pipelines, and enjoys tackling complex data modeling and optimization challenges.

**Key Responsibilities**

- Python-Based Data Engineering: Build, optimize, and maintain high-performance data pipelines and processing frameworks using Python.
- ETL Development & Automation: Design and execute robust, scalable ETL workflows for cleaning, transforming, and integrating data.
- Database Optimization: Manage and optimize PostgreSQL databases for performance, indexing, and query efficiency.
- API & Data Integrations: Develop GraphQL and FastAPI endpoints to connect manufacturer systems with our reporting framework.
- Reporting & Data Frameworks: Expand our reporting system to support advanced filtering, sorting, aggregation, and visualization for dashboards and infographics.
- Prototyping & Proof-of-Concepts: Rapidly build and test Python-based data prototypes to validate new product features.
- Tenant-Specific Solutions: Support custom data requests from customers, with the potential to offer custom reporting as a revenue service.
- Scalability & Future Roadmap: Help design the next generation of reporting tools, including a self-service report builder.

**Qualifications**

- Python Expertise (MUST-HAVE): 5+ years of Python development experience, with a focus on data engineering, analytics, or ETL processing.
- Data Pipeline Development: Proven experience building high-performance ETL processes in Python.
- Database Management: Strong knowledge of PostgreSQL, including query optimization, indexing, and performance tuning.
- API Development: Hands-on experience with FastAPI and GraphQL for data access and system integrations.
- Data Modeling & Transformation: Ability to design scalable, normalized data schemas and handle complex relational data.
- Agile Development: Experience working in an Agile/Scrum environment with rapid iterations.

**Preferred Skills**

- Docker & Containerization: Experience using Docker to manage development environments.
- Kubernetes (Optional): Hands-on experience with Kubernetes for scalable deployments.
- Cloud & Big Data: Familiarity with AWS, GCP, or Azure for cloud-based data solutions.
- Streaming & Real-Time Data: Experience with Kafka, WebSockets, or event-driven architectures.

If you are a Python-focused Data Engineer passionate about building high-performance data pipelines, optimizing complex data systems, and developing AI-enhanced reporting tools, we’d love to hear from you!
