Senior Data Engineer - Project Delivery Specialist II

St. Louis, Missouri, US
Hybrid · Analytics · USA

Responsibilities

You will support the client in person, working onsite four days per week. The role calls for specialized expertise to design, build, and maintain data pipelines and infrastructure, ensuring reliable access to high-quality data for business operations and analytics

  • You will accelerate the development and stabilization of critical data teams
  • This will improve data quality, ensure timely insights, optimize system performance, and directly enable business teams to make better, data-driven decisions
  • Ultimately, the role is key to achieving smoother operations, regulatory compliance, and unlocking greater value from data assets
  • Resolve pipeline failures, perform root cause analysis, and apply fixes promptly
  • Build and maintain scalable data pipelines for ingesting, transforming, and delivering data using Databricks (PySpark) and Azure Data Factory (see the illustrative sketch after this list)
  • Connect multiple data sources (e.g., databases, APIs, Snowflake, Salesforce, and SAP) and ensure reliable, automated data flows
  • Implement data validation, error handling, and monitoring to ensure data accuracy and completeness
  • Tune Spark jobs and data storage (e.g., Delta Lake) for cost-effective, efficient performance
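
To give candidates a concrete sense of the pipeline work described in the bullets above, here is a minimal, illustrative PySpark sketch of an ingest-validate-deliver job with a Delta Lake target. All paths, column names, and validation rules are hypothetical examples, not details of the client's environment.

    # Illustrative sketch only: paths, columns, and rules below are hypothetical
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

    # Ingest: read raw JSON landed by an upstream source (hypothetical path)
    raw = spark.read.json("/mnt/raw/orders/")

    # Validate: drop rows missing required keys and discard negative amounts
    clean = (
        raw.dropna(subset=["order_id", "customer_id"])
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") >= 0)
    )

    # Deliver: append to a Delta table consumed by downstream analytics
    clean.write.format("delta").mode("append").save("/mnt/curated/orders")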

Communicate regularly with Engagement Managers (Directors), project team members, and representatives from various functional and/or technical teams, including escalating any matters that require additional attention and consideration from engagement management. Independently and collaboratively lead client engagement workstreams focused on improvement, optimization, and transformation of processes, including implementing leading-practice workflows, addressing deficits in quality, and driving operational outcomes. We enable clients to stay ahead with the latest advancements by transforming engineering teams and modernizing technology and data platforms

Requirements

  • 7+ years working experience with Databricks: Spark (PySpark/Scala), Delta Lake, notebook orchestration, job clustering
  • 5+ years working experience with data pipeline orchestration using Azure Data Factory or Synapse Pipelines
  • 5+ years working experience using SQL and advanced data querying/analysis techniques
  • 5+ years building CI/CD pipelines for data workloads using Azure DevOps or GitHub Actions
  • 5+ years working experience with Power BI or other visualization tools for data validation
  • 2+ years working experience with containerization (Docker/Kubernetes) and microservices concepts as applied to data solutions
  • Limited immigration sponsorship may be available
  • Ability to travel 10%, on average, based on the work you do and the clients and industries/sectors you serve
  • Position requires 4 days onsite at the client in St. Louis

Must be able to start by March 16th, 2026

Skills

  • 7+ years working with Azure Platform components: Data Factory, Databricks, Synapse Analytics, Data Lake Storage, Event Hubs
  • Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or related IT discipline; or equivalent experience

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law

Published: 12.01.2026