talk-data.com


Activities & events

Title & Speakers · Event

Chip Huyen – computer scientist @ Independent · 2025-10-28 · 13:00
Michelle Yi – Co-Founder @ Generationship · 2025-10-28 · 13:00
Sergio Paniego Blanco – Machine Learning Engineer @ Hugging Face · 2025-10-28 · 13:00
Avinash Balachandran – VP; Adjunct Lecturer @ Toyota Research Institute; Stanford University · 2025-10-28 · 13:00
Sinan Ozdemir – AI & LLM Expert; Author; Founder & CTO @ LoopGenius · 2025-10-28 · 13:00
João (Joe) Moura – CEO @ CrewAI · 2025-10-28 · 13:00
Amy Hodler – Founder | Consultant | Graph Evangelist @ GraphGeeks.org · 2025-10-28 · 13:00
Laurie Voss – VP, Developer Relations @ LlamaIndex · 2025-10-28 · 13:00
Joscha Bach – Cognitive Science & Artificial Intelligence Strategist; Executive Director @ Liquid AI; CIMC · 2025-10-28 · 13:00
Steven Pousty, PhD – Principal and Founder @ Tech Raven Consulting · 2025-10-28 · 13:00
Seth Weidman – Principal Product Manager @ SentiLink · 2025-10-28 · 13:00
Richmond Alake – Developer Advocate @ MongoDB · 2025-10-28 · 13:00
Harrison Chase – CEO and Co-founder @ LangChain · 2025-10-28 · 13:00
Suman Debnath – Principal AI/ML Advocate @ Amazon Web Services · 2025-10-28 · 13:00
Shikhar Kwatra – AI Architect @ OpenAI · 2025-10-28 · 13:00
Gergely Orosz – host, Chip Huyen – computer scientist @ Stanford University

Supported by Our Partners
  • Swarmia — The engineering intelligence platform for modern software organizations.
  • Graphite — The AI developer productivity platform.
  • Vanta — Automate compliance and simplify security with Vanta.

On today's episode of The Pragmatic Engineer, I'm joined by Chip Huyen, a computer scientist, author of the freshly published O'Reilly book AI Engineering, and an expert in applied machine learning. Chip has worked as a researcher at Netflix, was a core developer at NVIDIA (building NeMo, NVIDIA's GenAI framework), and co-founded Claypot AI. She also taught Machine Learning at Stanford University.

In this conversation, we dive into the evolving field of AI Engineering and explore key insights from Chip's book, including:
  • How AI Engineering differs from Machine Learning Engineering
  • Why fine-tuning is usually not a tactic you'll want (or need) to use
  • The spectrum of solutions to customer support problems – some not even involving AI!
  • The challenges of LLM evals (evaluations)
  • Why project-based learning is valuable — but even better when paired with structured learning
  • Exciting potential use cases for AI in education and entertainment
  • And more!
Timestamps:
(00:00) Intro
(01:31) A quick overview of AI Engineering
(05:00) How Chip ensured her book stays current amidst the rapid advancements in AI
(09:50) A definition of AI Engineering and how it differs from Machine Learning Engineering
(16:30) Simple first steps in building AI applications
(22:53) An explanation of BM25 (retrieval system)
(23:43) The problems associated with fine-tuning
(27:55) Simple customer support solutions for rolling out AI thoughtfully
(33:44) Chip's thoughts on staying focused on the problem
(35:19) The challenge in evaluating AI systems
(38:18) Use cases in evaluating AI
(41:24) The importance of prioritizing users' needs and experience
(46:24) Common mistakes made with Gen AI
(52:12) A case for systematic problem solving
(53:13) Project-based learning vs. structured learning
(58:32) Why AI is not the end of engineering
(1:03:11) How AI is helping education and the future use cases we might see
(1:07:13) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
  • Applied AI Software Engineering: RAG – https://newsletter.pragmaticengineer.com/p/rag
  • How do AI software engineering agents work? – https://newsletter.pragmaticengineer.com/p/ai-coding-agents
  • AI Tooling for Software Engineers in 2024: Reality Check – https://newsletter.pragmaticengineer.com/p/ai-tooling-2024
  • IDEs with GenAI features that Software Engineers love – https://newsletter.pragmaticengineer.com/p/ide-that-software-engineers-love

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
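The BM25 retrieval system mentioned in the timestamps (22:53) is often the "simple first step" before embedding-based retrieval. As a rough illustration (not code from the episode; k1 and b are set to common defaults), Okapi BM25 scores a document per query term by combining inverse document frequency with a saturated, length-normalized term frequency:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document (a list of tokens) against query tokens with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency: in how many documents each term appears
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

docs = [
    "the cat sat on the mat".split(),
    "dogs chase cats in the park".split(),
    "foundation models power ai applications".split(),
]
print(bm25_scores("cat mat".split(), docs))
```

Only the first document contains the exact query terms, so it is the only one with a nonzero score; a production system would add tokenization, stemming, and an inverted index on top of this core formula.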

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

AI/ML GenAI LLM Marketing RAG Cyber Security
The Pragmatic Engineer
Chip Huyen – computer scientist @ Stanford University, Joe Reis – founder @ Ternary Data

Chip Huyen joins me to chat about AI Engineering, AI Agents, and much more.

AI/ML
The Joe Reis Show
AI Engineering 2024-12-04
Chip Huyen – author

Recent breakthroughs in AI have not only increased demand for AI products, they've also lowered the barriers to entry for those who want to build AI products. The model-as-a-service approach has transformed AI from an esoteric discipline into a powerful development tool that anyone can use. Everyone, including those with minimal or no prior AI experience, can now leverage AI models to build applications.

In this book, author Chip Huyen discusses AI engineering: the process of building applications with readily available foundation models. The book starts with an overview of AI engineering, explaining how it differs from traditional ML engineering and discussing the new AI stack. The more AI is used, the more opportunities there are for catastrophic failures, and therefore, the more important evaluation becomes. This book discusses different approaches to evaluating open-ended models, including the rapidly growing AI-as-a-judge approach. AI application developers will discover how to navigate the AI landscape, including models, datasets, evaluation benchmarks, and the seemingly infinite number of use cases and application patterns. You'll learn a framework for developing an AI application, starting with simple techniques and progressing toward more sophisticated methods, and discover how to efficiently deploy these applications.

  • Understand what AI engineering is and how it differs from traditional machine learning engineering
  • Learn the process for developing an AI application, the challenges at each step, and approaches to address them
  • Explore various model adaptation techniques, including prompt engineering, RAG, fine-tuning, agents, and dataset engineering, and understand how and why they work
  • Examine the bottlenecks for latency and cost when serving foundation models and learn how to overcome them
  • Choose the right model, dataset, evaluation benchmarks, and metrics for your needs

Chip Huyen works to accelerate data analytics on GPUs at Voltron Data. Previously, she was with Snorkel AI and NVIDIA, founded an AI infrastructure startup, and taught Machine Learning Systems Design at Stanford. She's the author of the book Designing Machine Learning Systems, an Amazon bestseller in AI. AI Engineering builds upon and is complementary to Designing Machine Learning Systems (O'Reilly).
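One of the adaptation techniques the blurb names, RAG, reduces at its simplest to retrieval plus prompt assembly. A minimal sketch of that idea, with naive word-overlap ranking standing in for a real embedding or BM25 retriever (function names and the knowledge-base contents are illustrative, not from the book):

```python
def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the query (stand-in for a real retriever)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the retrieved context and the question into one prompt for an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 5 business days.",
    "Shipping is free for orders over $50.",
    "Support is available 24/7 via chat.",
]
print(build_prompt("How long do refunds take?", kb))
```

The progression the book describes then swaps each piece for something stronger — a real retriever, chunking, reranking — without changing this basic retrieve-then-prompt shape.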

data ai-ml artificial-intelligence-ai artificial intelligence (ai) AI/ML Analytics Data Analytics RAG
O'Reilly Data Engineering Books

**RSVP instructions: register on the event website to receive the joining link. (RSVPing on Meetup will NOT provide the joining link.)

Description: Chip Huyen is a writer and computer scientist currently at Voltron Data, working on GPU-native data processing and open data standards (Ibis, Apache Arrow, Substrait). Previously, she built machine learning tools at NVIDIA, Snorkel AI, and Netflix.

In this fireside chat, Chip joins Hugo Bowne-Anderson to explore the unique challenges and opportunities in productionizing foundation models compared to traditional machine learning approaches. As AI systems become increasingly advanced and open-ended, ML engineers must adapt their strategies and techniques to ensure reliable, efficient, and scalable deployments. Key topics of discussion will include:

  • From closed-ended to open-ended evaluation: Developing robust evaluation methodologies for foundation models, which can generate novel outputs and exhibit emergent behaviors;
  • From feature engineering to context construction: Techniques for effectively prompting and guiding foundation models to perform desired tasks across diverse domains;
  • Adapting to unstructured data: Strategies for processing and integrating the vast amounts of unstructured data required to train and operate foundation models;
  • Infrastructure and tooling challenges: Scaling compute resources, optimizing workflows, and building reliable pipelines for foundation model deployment;
  • The evolving role of the ML engineer: New skills, collaborations, and best practices for succeeding in the era of foundation models and AI-driven products.

Join us for a conversation on the forefront of AI engineering, and discover strategies for navigating the shift to foundation models in your organization.
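The "context construction" topic above contrasts with classic feature engineering: instead of hand-crafting numeric features, you assemble instructions, few-shot examples, and retrieved documents into the model's input. A hypothetical sketch of such a prompt builder (all names and fields are illustrative, not from the talk):

```python
from dataclasses import dataclass, field

@dataclass
class PromptContext:
    """Pieces that get assembled into a foundation-model input."""
    instructions: str
    examples: list[tuple[str, str]] = field(default_factory=list)  # (input, output) few-shot pairs
    retrieved: list[str] = field(default_factory=list)             # documents from a retriever

    def render(self, user_input: str) -> str:
        parts = [self.instructions]
        for q, a in self.examples:
            parts.append(f"Input: {q}\nOutput: {a}")
        if self.retrieved:
            parts.append("Context:\n" + "\n".join(f"- {d}" for d in self.retrieved))
        parts.append(f"Input: {user_input}\nOutput:")
        return "\n\n".join(parts)

ctx = PromptContext(
    instructions="Classify the sentiment of the input as positive or negative.",
    examples=[("Great talk!", "positive"), ("Waste of time.", "negative")],
)
print(ctx.render("Loved the fireside chat."))
```

Keeping the pieces separate like this makes each one independently testable and swappable — the same engineering discipline the discussion argues must carry over from the feature-engineering era.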

Sponsors: We are actively seeking sponsors to support our community, whether by offering venue space, providing food, or contributing cash sponsorship. Sponsors not only speak at our meetups and receive prominent recognition, but also gain exposure to our extensive membership base of 350K+ AI developers worldwide.

From ML Engineering to AI Engineering Foundation Models with Chip Huyen


From ML Engineering to AI Engineering: Navigating the Shift to Foundation Models