Contribute as part of a cross-functional team to all stages of design and development of complex, secure, and performant data systems and solutions, including the design, analysis, coding, testing, and integration of structured and unstructured data.
Work with business stakeholders from diverse backgrounds and make an impact on our commercial organization's objectives.
Contribute to operational excellence objectives targeting reliability, quality, and release cadence, while improving operational economies of scale through extensive automation.
Ensure compliance of data architecture, systems, and products with data policies (including privacy) and with architecture, security, and quality guidelines and standards.
Participate in architecture evolution activities through innovation and adoption of new technologies.
Requirements & Skills:
Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or equivalent.
4+ years of experience in relevant roles.
Hands-on expertise with data engineering systems (Databricks, SQL engines), languages (Python, SQL), and frameworks (Apache Spark) to explore, mine, transform, and cleanse data.
Experience designing, building, and operating cloud-native products and solutions, including experience with at least one major cloud vendor (e.g., AWS).
Experience with modern software lifecycle management tools and practices, including version control (git), test pyramids and test automation frameworks, and continuous integration and deployment (CI/CD) processes and tools.
Experience delivering software using agile project management tools and practices.