talk-data.com

Company: Contextual AI

Speakers: 6

Activities: 7

Speakers from Contextual AI

Talks & appearances

7 activities from Contextual AI speakers

As AI adoption accelerates, many enterprises still struggle to build production-grade AI systems for high-value, knowledge-intensive use cases. RAG 2.0 is Contextual AI’s approach to solving mission-critical AI use cases, where accuracy requirements are high and tolerance for error is low.

In this talk, Douwe Kiela—CEO of Contextual AI and co-inventor of RAG—will share lessons learned from deploying enterprise AI systems at scale. He will explain how RAG 2.0 differs from classic RAG, the common pitfalls and limitations encountered when moving into production, and why AI practitioners would benefit from focusing less on individual model components and more on the systems-level perspective. You will also learn how Google Cloud’s flexible, reliable, and performant AI infrastructure enabled Contextual AI to build and operate their end-to-end platform.
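For readers unfamiliar with the classic RAG pattern the talk builds on, the core loop is: retrieve the documents most relevant to a query, then generate an answer grounded in them. Below is a minimal, self-contained sketch; the toy corpus, the bag-of-words cosine scorer, and the `generate()` stub are illustrative assumptions, not Contextual AI's implementation.

```python
# Minimal retrieve-then-generate (classic RAG) sketch.
# DOCS, the scoring function, and generate() are illustrative stand-ins.
from collections import Counter
import math

DOCS = [
    "RAG grounds language model answers in retrieved documents.",
    "TPUs and GPUs accelerate large-scale model training.",
    "Enterprise AI systems need high accuracy and low error tolerance.",
]

def _bow(text):
    """Bag-of-words term counts (a crude stand-in for an embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    va, vb = _bow(a), _bow(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: cosine(query, d), reverse=True)[:k]

def generate(query, context):
    # Stand-in for an LLM call: real systems prompt a model with
    # the query plus the retrieved context.
    return f"Answer to {query!r} grounded in: {context[0]}"

query = "how does RAG ground answers?"
context = retrieve(query, DOCS)
print(generate(query, context))
```

Production systems replace each stand-in with a real component (dense embeddings, a vector index, an LLM); the talk's point is that tuning these components jointly, rather than in isolation, is what RAG 2.0 emphasizes.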

Move your generative AI projects from proof of concept to production. In this interactive session, you’ll learn how to automate key AI lifecycle processes—evaluation, serving, and RAG—to accelerate your real-world impact. Get hands-on advice from innovative startups and gain practical strategies for streamlining workflows and boosting performance.

Training large AI models at scale requires high-performance, purpose-built infrastructure. This session will guide you through the key considerations for choosing tensor processing units (TPUs) and graphics processing units (GPUs) for your training needs. Explore the strengths of each accelerator for various workloads, like large language models and generative AI models. Discover best practices for optimizing your training workflow on Google Cloud using TPUs and GPUs. Understand the performance and cost implications, along with cost-optimization strategies at scale.
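One concrete way to compare the cost implications the session mentions is back-of-the-envelope arithmetic: sustained throughput and hourly price together determine the cost of a training run. The throughput and price figures below are hypothetical placeholders, not real Google Cloud pricing; substitute measured numbers for an actual decision.

```python
# Back-of-the-envelope accelerator cost comparison.
# All throughput and price figures are hypothetical placeholders.
def training_cost(total_tokens, tokens_per_sec, price_per_hour):
    """Cost of one training run at a given sustained throughput."""
    hours = total_tokens / tokens_per_sec / 3600
    return hours * price_per_hour

TOTAL_TOKENS = 1e9  # size of a hypothetical training corpus

tpu_cost = training_cost(TOTAL_TOKENS, tokens_per_sec=50_000, price_per_hour=8.0)
gpu_cost = training_cost(TOTAL_TOKENS, tokens_per_sec=40_000, price_per_hour=10.0)

print(f"TPU run: ${tpu_cost:,.2f}")
print(f"GPU run: ${gpu_cost:,.2f}")
```

The takeaway is that the cheaper accelerator per hour is not automatically the cheaper run: throughput on your specific workload matters as much as list price.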

Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.

In this session, Douwe Kiela, the CEO and Co-Founder of Contextual AI and an Adjunct Professor in Symbolic Systems at Stanford University, will talk about how Contextual AI is building the next generation of language models leveraging Google Cloud. He will dive deeper into why retrieval-augmented generation (RAG), which he pioneered at Facebook, is the dominant paradigm for large language model (LLM) deployments, and the role he believes RAG will play in the future of gen AI.