Large language models can 'hallucinate' factually incorrect outputs, which poses significant risks for their adoption in high-stakes applications. Jannik will present joint work, recently published in Nature, on detecting hallucinations in large language models with semantic entropy, a measure that quantifies the model's own uncertainty over the meaning of its generations rather than over their exact wording. He will also discuss a recent pre-print that proposes a method to drastically reduce the cost of uncertainty quantification in LLMs by predicting semantic entropy from latent space, and he may ramble about uncertainties in LLMs more generally.
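For intuition, here is a minimal sketch of the sampling-based idea the abstract alludes to: sample several answers to the same question, cluster them by meaning, and compute the entropy over meaning clusters rather than over raw strings. The callables `sample_answers` and `same_meaning` are hypothetical stand-ins (in the published work, an LLM sampler and a bidirectional-entailment check via an NLI model); this is an illustrative sketch, not the authors' actual code.

```python
import math

def semantic_entropy(question, sample_answers, same_meaning, n_samples=10):
    """Estimate uncertainty over *meanings*, not surface strings.

    sample_answers(question, n) -> list of generated answer strings (assumed)
    same_meaning(a, b) -> True if a and b entail each other (assumed)
    """
    answers = sample_answers(question, n=n_samples)

    # Greedily cluster answers into meaning classes: an answer joins a
    # cluster if it means the same as that cluster's representative.
    clusters = []  # list of lists of answers
    for a in answers:
        for cluster in clusters:
            if same_meaning(a, cluster[0]):
                cluster.append(a)
                break
        else:
            clusters.append([a])

    # Entropy over the empirical distribution of meaning clusters.
    # High entropy means the model keeps changing what it *means*,
    # a signal of likely hallucination.
    total = len(answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probs)
```

Clustering by meaning is the crux: ten paraphrases of the same correct answer yield one cluster and near-zero entropy, while ten mutually contradictory answers yield many clusters and high entropy.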
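The pre-print's cost reduction can be sketched the same way: instead of sampling many generations per query, train a small probe on the model's hidden states to predict (binarized) semantic entropy, so inference needs a single forward pass. Everything below is an assumption-laden toy, with random stand-in activations and labels in place of real hidden states and offline semantic-entropy labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins: real usage would extract per-prompt hidden-state vectors from
# the LLM and label them with semantic entropy computed offline by sampling.
hidden_states = rng.normal(size=(1000, 256))   # illustrative activations
high_entropy = rng.integers(0, 2, size=1000)   # illustrative binary labels

# A linear probe on latent space: cheap at inference time because it replaces
# many sampled generations with one forward pass plus a dot product.
probe = LogisticRegression(max_iter=1000).fit(hidden_states, high_entropy)
p_high_entropy = probe.predict_proba(hidden_states[:5])[:, 1]
print(p_high_entropy)  # per-prompt scores usable as a hallucination flag
```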