During the session, we'll discuss the challenges that prompt engineering has presented, both when it first gained popularity and as it has continued to evolve, and share how these challenges informed the development of our prompt engineering tooling and workflows. We'll cover:
- Standardizing communication with LLMs
- Using templating to customize prompts
- Building prompt-centric production workflows
- Working with structured LLM output
- Ensuring the quality of LLM output
- Creating tooling that supports our prompt engineering workflows
Speaker
Ada Melentyeva · 2 talks
Ada Melentyeva is a computational linguist at Grammarly. She has worked on inclusive language, fluency, and injecting organizational knowledge into Grammarly's suggestion and correctness features. She currently works on a library for metric-based evaluation of prompt output.
Bio from: The Evolution of Prompt Engineering Tooling at Grammarly
Talks & appearances
2 activities
LLMs have opened up new avenues in NLP through their many possible applications, but evaluating their output introduces a new set of challenges. In this talk, we discuss these challenges and our approaches to measuring model output quality. We will survey existing evaluation methods along with their pros and cons, then take a closer look at their application in a practical case study.