talk-data.com

Event

Ensuring the Quality of LLM Output at Grammarly: An Overview and Case Study

2024-06-06 · Meetup

Activities tracked: 1

On June 6, join our linguists for an in-depth exploration of how we measure LLM output quality at Grammarly and our approaches to making the most of LLMs.

✅ Registration: To attend the meetup, please register ➡️ here ⬅️

🔈 Speakers:
- Lena Nahorna, Analytical Linguist
- Ada Melentyeva, Computational Linguist

🚀 LLMs have opened up new avenues in NLP with their many possible applications, but evaluating their output introduces a new set of challenges. In this talk, we discuss these challenges and our approaches to measuring model output quality. We will cover existing evaluation methods and their pros and cons, and then take a closer look at their application in a practical case study.

🔈 BIO: Lena Nahorna is an analytical linguist at Grammarly. She has worked on correctness, responsible AI, and strategic suggestions, and holds a PhD in linguistics.

Ada Melentyeva is a computational linguist at Grammarly. She has worked on inclusive language, fluency, and injecting organizational knowledge into Grammarly suggestions and correctness, and currently works on a library for metric-based evaluation of prompt output.

Agenda:
✨ 18:30–19:00: Welcome and networking
✨ 19:00–20:00: Grammarly talk
✨ 20:00–21:00: Mingle with the Grammarly team

✅ Where: In-person, Grammarly Berlin hub
✅ When: Thursday, June 6
✅ Language: English
✅ Use this link to register: https://gram.ly/3WPBllJ

The event is free, but registration is mandatory. Because seats are limited, invites will be sent to interested guests on a first-registered, first-invited basis. Please check your inbox for a confirmation email about your attendance.

Sessions & talks



Measuring LLM Output Quality at Grammarly

2024-06-06
talk
Lena Nahorna (Grammarly), Ada Melentyeva (Grammarly)

LLMs have opened up new avenues in NLP with their many possible applications, but evaluating their output introduces a new set of challenges. In this talk, we discuss these challenges and our approaches to measuring model output quality. We will cover existing evaluation methods and their pros and cons, and then take a closer look at their application in a practical case study.