Partner with production teams to develop performant, reliable inference solutions for their products.
Partner with research teams to bring their research work into production.
Define and execute technical roadmaps to support model deployment and inference at scale.
Provide strong technical leadership, focusing on applied AI solutions for product inference needs, while ensuring high-performing, collaborative team dynamics.
Requirements & Skills:
Are highly technical.
Have 10+ years of experience in ML model serving, ML infrastructure, or applied AI, with 5+ years in engineering management.
Possess a strong background in ML inference, production AI systems, and distributed systems.
Have experience in building high-scale, high-reliability production AI systems.
Are an effective collaborator, capable of aligning technical, research, and business stakeholders to drive impactful solutions.
Have experience leading high-performing teams with a focus on scaling and fostering a culture of performance.
Thrive in fast-paced environments with evolving priorities and ambitious goals, with the ability to work closely with technical and non-technical stakeholders.