Build the infrastructure required to support data pipelines across a wide variety of data sources using Python, Snowflake, DBT, Airbyte, and other technologies
Standardize data modeling across our data platform
Own incidents and support tickets, proactively managing and resolving them within SLAs
Continuously deploy, operate, and monitor your data solutions and their infrastructure by applying DevOps, SDLC, and CI/CD principles
Build complex data models in line with stakeholder requirements
Support our Data Science teams with cutting-edge AI/MLOps pipelines
Requirements & Skills:
2+ years of experience with SQL and databases, including RDBMS and NoSQL databases
Practical experience with AWS services (S3, EC2, IAM, EKS, etc.)
Solid Python and scripting skills
Good understanding of data modeling and architecture principles (experience with DBT would be awesome!)
A team player with enthusiasm for learning new technologies and jumping into a diverse range of projects