RAGtime - All About Retrieval-Augmented Generation
Please join Charlottesville Data Science for RAGtime, an event all about one of the hottest topics in applied AI — retrieval-augmented generation (RAG)! We'll be gathering in person at the Center for Open Science, just off the Downtown Mall. This event will be a double-feature:
- Tyler Hutcherson, senior applied AI engineer at Redis, will present the talk RAGs to Riches: Extracting Gems from Your Data with LLMs
- Scott Stults, co-founder and relevance engineer at OpenSource Connections, will present the talk Measuring and Improving the R in RAG
How to find us
The Center for Open Science is located in the Downtown Business Center, which is connected to the Omni Hotel on the Downtown Mall. If you're driving, you can park in the Omni Hotel parking garage and the Center for Open Science will validate parking. Proceed to the lobby and follow the signs to the Center for Open Science office in Suite 500. Facing the front desk, head past it to the left, pass the ballrooms on your right, and continue through the double doors into the business center. If you have any trouble, feel free to ask for directions to the Center for Open Science at the front desk in the lobby.
About RAGs to Riches: Extracting Gems from Your Data with LLMs
LLMs have surged across the ecosystem, but how can we put them to work on private, domain-specific data? The goal of this session is to cut through the noise and hype of Generative AI and learn a core technique — Retrieval-Augmented Generation (RAG). RAG combines insights from your data with LLMs to automate search and question-answering through the vehicle of conversational AI.
Whether it’s building a customer support chatbot or an internal knowledge discovery tool, RAG is a common technique employed to ground LLMs in factual data (mitigating hallucinations). This session will explore the technical underpinnings of information retrieval, foundational models, production RAG system architecture, and common scaling challenges.
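As a rough sketch of the technique the talk describes — retrieve relevant chunks, then ground the LLM's answer in them — here is a minimal RAG loop. The similarity function is a toy bag-of-words cosine and the corpus is made up; a production system (such as the Redis-backed architecture the talk covers) would use a trained embedding model and a vector store instead.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a trained
    # embedding model and store the vectors in a vector database.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

# Stand-in for the private, domain-specific corpus, pre-split into chunks.
corpus = [
    "Redis can serve as a low-latency vector store for embeddings.",
    "RAG grounds LLM answers in retrieved documents to reduce hallucinations.",
    "A customer support chatbot answers questions about your product.",
]

def retrieve(query, k=1):
    # Rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(corpus, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query):
    # Stuff the retrieved context into the prompt so the model answers
    # from your data rather than from its parametric memory alone.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# build_prompt(...) would then be sent to the LLM of your choice.
```

The retrieval step is what mitigates hallucinations: the model is instructed to answer from the supplied context, so answer quality is bounded by what the retriever surfaces.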
About Measuring and Improving the R in RAG
The quality of text generated by a RAG system is limited by the quality of the text chunks it uses for context. The textbook way to measure the quality of these chunks is for a human expert to give each query–chunk pair a judgement. Of course, this is expensive, time-consuming, and impossible in practice for any sizable corpus.
An attractive alternative is to use an LLM to learn from the experts and generate these judgements as needed. From there we can tune our retrieval and ultimately improve the quality of our generated responses.
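To see how judgements feed back into retrieval tuning, here is a sketch of scoring a ranked result list with nDCG, a standard retrieval metric. The graded judgements dict is hypothetical (in practice an LLM calibrated on expert labels would fill it in), and this is an illustration of the general approach, not OpenSource Connections' actual pipeline.

```python
import math

# Hypothetical graded judgements (0 = irrelevant ... 3 = perfect), keyed by
# (query, chunk_id). An LLM trained to mimic expert raters generates these.
judgements = {
    ("what validates parking?", "c1"): 3,
    ("what validates parking?", "c2"): 1,
    ("what validates parking?", "c3"): 0,
}

def dcg(gains):
    # Discounted cumulative gain: relevance discounted by log2 of rank.
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg_at_k(query, ranking, k=3):
    # Compare the retriever's ordering against the ideal ordering.
    gains = [judgements.get((query, c), 0) for c in ranking[:k]]
    ideal = sorted((judgements.get((query, c), 0) for c in ranking),
                   reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if dcg(ideal) else 0.0
```

A perfect ranking scores 1.0; anything lower flags queries where retrieval — the R in RAG — needs tuning before the generated responses can improve.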
This talk will cover how OpenSource Connections is currently doing this for a client, as well as what our plans are for future development.