In this talk, we will examine how LLM outputs are evaluated by potential end users versus professional linguist-annotators, as two ways of ensuring alignment with real-world user needs and expectations. We will compare the two approaches, highlight the advantages and recurring pitfalls of user-driven annotation, and share the mitigation techniques we have developed from our own experience.
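One way to make the user-vs.-linguist comparison concrete is to measure agreement between the two groups' labels on the same outputs. Below is a minimal sketch, assuming binary "good"/"bad" quality labels and hypothetical data; it implements Cohen's kappa from scratch rather than any tool the speakers actually used.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators
    who labeled the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both gave the same label.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if both labeled at random with their own frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from end users vs. a professional linguist-annotator
# on the same eight LLM outputs (illustrative data only).
users     = ["good", "good", "bad", "good", "bad", "good", "bad",  "good"]
linguists = ["good", "bad",  "bad", "good", "bad", "good", "good", "good"]
print(round(cohens_kappa(users, linguists), 3))  # ≈ 0.467: moderate agreement
```

A low kappa on real data would signal exactly the gap the talk discusses: users and professional annotators applying different quality standards to the same text.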
How We Build High-Quality, User-Oriented LLM Features at Grammarly
We will look at how to influence quality during the prompt-creation stage, and how to work with already-generated text: improving it, identifying errors, and filtering out undesirable results. We'll explore linguistic approaches that help achieve better, more controlled outcomes from LLMs.
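The post-generation step described above — identifying errors and filtering out undesirable results — is often implemented as a pipeline of rule-based checks. The sketch below is a hypothetical illustration (the check names, banned phrases, and thresholds are assumptions, not Grammarly's actual rules) of how such a filter can be structured.

```python
import re

# Hypothetical quality rules for already-generated LLM text. Each check
# appends a human-readable error; outputs with any error are filtered out.
BANNED_PHRASES = ("as an ai language model", "i cannot")

def check_output(text: str, max_len: int = 400) -> list[str]:
    """Return a list of rule violations for one generated output."""
    errors = []
    if not text.strip():
        errors.append("empty output")
    if len(text) > max_len:
        errors.append("output too long")
    lowered = text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            errors.append(f"banned phrase: {phrase!r}")
    # Linguistic sanity check: flag immediate word repetition ("the the").
    if re.search(r"\b(\w+) \1\b", lowered):
        errors.append("repeated word")
    return errors

def filter_outputs(candidates: list[str]) -> list[str]:
    """Keep only candidates that pass every check."""
    return [c for c in candidates if not check_output(c)]

kept = filter_outputs([
    "Here is a concise rewrite of your sentence.",
    "As an AI language model, I cannot help with that.",
    "This this sentence repeats a word.",
])
print(kept)  # only the first candidate survives
```

In practice such deterministic filters are typically the cheap first layer, with model-based or human review applied only to outputs that pass them.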