- Bachelor's or master's degree in computer science or a related field, or 3+ years of relevant work experience
- Demonstrated ability to deliver high-quality, large-scale, enterprise-class data applications
- Expertise in big data engineering, with knowledge of Hadoop, Apache Spark, Python, and SQL
- Proficiency in creating and managing large-scale data pipelines and ETL processes
- Experience developing and maintaining Spark pipelines and productionizing AI/ML models
- Proficiency in technologies such as Kafka, Redis, and Flink
- Skilled in Unix or Python scripting and in scheduling tools such as Airflow and Control-M
- Experience with data storage technologies and databases (e.g., MySQL, DB2, MS SQL)