- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 3–6 years of experience in Data Engineering or related roles
- Hands-on experience with big data processing frameworks and data lakes
- Proficiency in Python, SQL, and PySpark for data manipulation
- Experience with Databricks and Apache Spark
- Knowledge of cloud platforms like Azure and AWS
- Strong understanding of distributed systems and big data technologies
- Basic understanding of DevOps principles and CI/CD pipelines
- Hands-on experience with Git, Jenkins, or Azure DevOps