talk-data.com



Activities & events


In this series, we dive deep into our most popular, fully-featured, and open-source RAG solution: https://aka.ms/ragchat

How can you be sure that the RAG chat app answers are accurate, clear, and well formatted? Evaluation! In this session, we'll show you how to generate synthetic data and run bulk evaluations on your RAG app, using the azure-ai-evaluation SDK. Learn about GPT metrics like groundedness and fluency, and custom metrics like citation matching. Plus, discover how you can run evaluations on CI/CD, to easily verify that new changes don't introduce quality regressions.
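The session mentions custom metrics such as citation matching alongside the SDK's built-in GPT metrics. As a minimal sketch of what such a metric might look like: the class name, the keyword arguments, and the `[source.md]`-style citation convention below are all assumptions for illustration, not the session's actual implementation. The azure-ai-evaluation SDK accepts plain callables like this as custom evaluators.

```python
import re

# Hypothetical custom "citation matching" metric: checks whether every
# citation expected by the ground truth also appears in the model response.
class CitationMatchEvaluator:
    def __call__(self, *, response: str, ground_truth: str) -> dict:
        # Collect citations written as [filename] in each text.
        cited = set(re.findall(r"\[([^\]]+)\]", response))
        expected = set(re.findall(r"\[([^\]]+)\]", ground_truth))
        # Score 1.0 only if every expected citation is present in the answer.
        score = 1.0 if expected and expected <= cited else 0.0
        return {"citation_match": score}

evaluator = CitationMatchEvaluator()
print(evaluator(
    response="Deployment steps are in [infra.md] and [README.md].",
    ground_truth="See [infra.md].",
))  # → {'citation_match': 1.0}
```

Because the evaluator is an ordinary callable, it can sit in the same `evaluators` dictionary as the built-in groundedness and fluency evaluators when running a bulk evaluation.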

This session is part of a series.

RAGChat: Evaluating RAG answer quality
Evaluating your RAG Chat App 2024-09-12 · 20:00

RAG (Retrieval Augmented Generation) is the most popular approach used to get LLMs to answer user questions grounded in a domain. How can you be sure that the answers are accurate, clear, and well formatted? Evaluation! In this session, we'll show you how to use Azure AI Studio and the Promptflow SDK to generate synthetic data and run bulk evaluations on your RAG app. Learn about different GPT metrics like groundedness and fluency, and consider other ways you can measure the quality of your RAG app answers.
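Bulk evaluation runs like the ones described here typically consume a JSONL file with one question/context/answer record per line. As a minimal sketch of preparing such a file: the file name and field names below are assumptions for illustration, not the Promptflow SDK's required schema.

```python
import json

# Hypothetical synthetic evaluation data: one JSON record per line,
# each with the question, retrieved context, and generated answer.
rows = [
    {
        "question": "What does the RAG app use for retrieval?",
        "context": "The app retrieves chunks from Azure AI Search.",
        "answer": "It uses Azure AI Search to retrieve document chunks.",
    },
    {
        "question": "Which SDK runs the bulk evaluation?",
        "context": "Evaluations are run with the Promptflow SDK.",
        "answer": "The Promptflow SDK runs the bulk evaluation.",
    },
]

with open("eval_data.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Read the file back to confirm the format round-trips.
with open("eval_data.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded))  # → 2
```

A file in this shape can then be pointed at an evaluation run, which scores each row with the chosen GPT metrics.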

Presented by Nitya Narasimhan, AI Advocate, and Pamela Fox, Python Advocate

**Part of RAGHack, a free global hackathon to develop RAG applications. Join at https://aka.ms/raghack**

**📌 Check out the RAGHack 2024 series here!**

Prerequisites: Read the official rules and join the hack at https://aka.ms/raghack. No Purchase Necessary. Must be 18+ to enter. Contest ends 9/16/24.

Evaluating your RAG Chat App