Lead and manage a team of data engineers responsible for building, maintaining, and optimizing data pipelines and systems.
Drive the development and execution of the data engineering roadmap, ensuring that the data platform is scalable, secure, and reliable.
Collaborate with Product, Engineering, and Machine Learning teams to deliver high-quality data solutions that meet the evolving needs of the business.
Develop best practices for data governance, security, and infrastructure as code, ensuring robust and sustainable systems.
Oversee the design, development, and deployment of advanced ETL pipelines and data models that power business insights and machine learning.
Lead the migration of legacy data systems to modern cloud-based infrastructure, leveraging AWS services to optimize performance and scalability.
Foster a culture of continuous improvement, where team members are encouraged to learn, grow, and innovate.
Requirements & Skills:
6+ years of data engineering experience, including 2+ years in a management or leadership role.
Proven experience designing and optimizing large-scale data systems and pipelines in the cloud, especially with AWS.
Expertise in building and managing lakehouse architectures, data lakes, data warehouses, and ETL pipelines using tools like AWS Lambda, Glue, EMR, Postgres, and Redshift.
Strong experience with Python, SQL, and modern data engineering tools such as Airflow, dbt, and Spark.
Hands-on experience with containerization and orchestration technologies like Docker and Kubernetes.
A strong understanding of data governance, security, and best practices for building scalable cloud-based infrastructure.
Excellent leadership and team-building skills, with the ability to mentor and guide technical teams.
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent experience.