Function as a company-wide thought leader and subject-matter expert on Databricks
Provide technical leadership to guide enablement initiatives across the Databricks landscape
Work with subject-matter experts and learning architects to scope needs for enablement material
Grow technically in areas such as lakehouse technology, big data streaming, ingestion, and workflows by working regularly in the Databricks Data Intelligence (DI) Platform
Requirements & Skills:
Passion for and experience with sharing knowledge and expertise to enable others
5+ years of experience in a technical role, with expertise in at least one of the following:
Experience maintaining and extending production data systems to evolve with complex needs
Expertise in scaling big data workloads (such as ETL) to be performant and cost-effective, and in building large-scale data ingestion pipelines and data migrations, including CDC and streaming ingestion pipelines
Expertise in cloud data lake technologies such as Delta Lake
Bachelor’s degree in Computer Science, Information Systems, Engineering, or another quantitative discipline, or equivalent work experience
Production programming experience in SQL and Python
Experience communicating or teaching technical concepts to technical and non-technical audiences alike
Passion for collaboration, lifelong learning, and driving business value through ML
[Preferred] Experience working with Apache Spark to process large-scale distributed datasets