Gaining direct experience in measuring the performance of LLMs and LLM-powered products, including creating metrics and designing and conducting tests, to help tech and product teams understand model and product performance.
Delivering performance and feedback analysis to identify issues in LLM training and product design.
Collaborating with product managers and engineers to build testing workflows, create feedback loops, and enhance the effectiveness of testing processes and standards.
Working closely with the annotation team to gauge the quality of training data sets.
Requirements & Skills:
Professional proficiency in English with excellent writing and critique skills.
Strong critical thinking and analytical skills.
Comfort meeting deadlines and generating creative solutions, including using technology and tools to improve the quality of both individual and team outputs.
Experience in consulting, market research, auditing, or operational process design and improvement is valued. Experience evaluating the performance of NLP models and search products is strongly preferred.
A deep interest in LLMs is essential.
The ideal candidate is an enthusiastic learner who finds working with diverse case studies stimulating.