About the team
We are experts in data, working to make it cost-effective, understandable, and trustworthy. We build pipelines that process billions of events a day, and we are stewards of the canonical data warehouses and datasets that deliver products for Company users, embedding with teams to build their data products. As experts in the Company Data Platform, we scale by leading data culture and education that enable product teams to own their data. We invest in AI Data Ops to scale incident handling, and we serve as an escalation path for data incidents to minimize their impact. The Data Engineering Solutions team works closely with product teams, delivering trustworthy data, backend code, and innovative AI tools, platforms, and services for data.
We're looking for someone to drive the Data Engineering Solutions team in solving high-impact, cutting-edge data problems. The ideal candidate has built data pipelines at large scale, is deeply knowledgeable about data engineering tools such as Airflow, Spark, Kafka, and Flink, is empathetic, excels at building strong relationships, and collaborates effectively with other Company teams to understand their use cases and unlock new capabilities. You will:
- Lead the technical outcomes for a team of ambitious, talented engineers, providing mentorship, guidance, and support to ensure their success.
- Partner with our recruiting team to attract and hire top talent.
- Deliver cutting-edge data pipelines that scale to users' needs, focusing on reliability and efficiency.
- Develop strong subject matter expertise and manage the SLAs of data pipelines and full-stack web applications that support critical stakeholders.
- Collaborate with product managers and peers across the company to create and improve canonical datasets and data warehouses, adopt golden paths, and ensure Company employees and customers are using trustworthy data.
- Leverage AI, LLMs, and agents at scale to produce and analyze high-quality data for ambiguous problems.
- Have the opportunity to work with Spark, Flink, Kafka, Trino, Pinot, Airflow, Scala, Java, SQL, and Python, as well as many other big data technologies.
- Have the opportunity to drive the execution of key data initiatives for Company, overseeing the entire development lifecycle from planning to delivery while maintaining high standards of quality and timely completion.
- Foster a collaborative and inclusive work environment, promoting innovation, knowledge sharing, and continuous improvement within the team.
Who you are
We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.
We're looking for someone who has:
- 10+ years of engineering experience, including 5+ years of hands-on experience building and operating data systems and pipelines, datasets and data warehouses, and infrastructure, and leading small teams to deliver exceptional solutions.
- A strong engineering background and passion for data.
- Prior experience writing and debugging data pipelines using a distributed data framework (Spark, Hadoop, Trino, etc.).
- An inquisitive nature, diving into data inconsistencies to pinpoint root causes and resolve deep-rooted data quality issues.
- Knowledge of a backend development language (such as Scala, Java, or Go) and strong SQL experience.
- Extreme customer focus, with a commitment to partnering with Product Managers, leaders, and other Company engineers to understand their use cases.
- Effective cross-functional collaboration, with the ability to think rigorously, communicate clearly, and make or coordinate difficult decisions and trade-offs.
- Ability to thrive with high autonomy and responsibility in an ambiguous environment.
- Ability to foster and work in a healthy, inclusive, challenging, and supportive work environment.
- Expertise in Iceberg, Kafka, Change Data Capture, Flink, Spark, Airflow, Hive Metastore, Pinot, Trino, and AWS Cloud, along with experience influencing open-source contributions, is a plus.
- Experience creating and maintaining data marts and data warehouses to power business reporting needs.
- Experience working with Product or Go-To-Market (GTM: Sales, Marketing) teams.
- Genuine enjoyment of innovation and a deep interest in understanding how things work, with the ability to question and direct architectural decisions.
- Strong written and verbal communication skills tailored to various audiences, including leadership, users, and the broader company.