talk-data.com

Activities & events
#300 End to End AI Application Development with Maxime Labonne, Head of Post-training at Liquid AI & Paul-Emil Iusztin, Founder at Decoding ML
2025-05-05 · 10:00
Speakers: Maxime Labonne (Senior Staff Machine Learning Scientist, Head of Post-training @ Liquid AI), Richie (host @ DataCamp), Paul-Emil Iusztin (Founder @ Decoding ML)
The roles within AI engineering are as diverse as the challenges they tackle. From integrating models into larger systems to ensuring data quality, the day-to-day work of AI professionals is anything but routine. How do you navigate the complexities of deploying AI applications? What are the key steps from prototype to production? For those looking to refine their processes, understanding the full lifecycle of AI development is essential. Let's delve into the intricacies of AI engineering and the strategies that lead to successful implementation.

Maxime Labonne is a Senior Staff Machine Learning Scientist at Liquid AI, serving as the head of post-training. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is recognized as a Google Developer Expert in AI/ML. An active blogger, he has made significant contributions to the open-source community, including the LLM Course on GitHub, tools such as LLM AutoEval, and several state-of-the-art models like NeuralBeagle and Phixtral. He is the author of the best-selling book “Hands-On Graph Neural Networks Using Python,” published by Packt.

Paul-Emil Iusztin designs and implements modular, scalable, and production-ready ML systems for startups worldwide. He has extensive experience putting AI and generative AI into production. Previously, Paul was a Senior Machine Learning Engineer at Metaphysic.ai and a Machine Learning Lead at Core.ai. He is a co-author of The LLM Engineer's Handbook, a best seller in the GenAI space.

In the episode, Richie, Maxime, and Paul explore misconceptions in AI application development, the intricacies of fine-tuning versus few-shot prompting, the limitations of current frameworks, the roles of AI engineers, the importance of planning and evaluation, the challenges of deployment, the future of AI integration, and much more.
Links Mentioned in the Show:
* Maxime's LLM Course on HuggingFace
* Maxime and Paul's Code Alongs on DataCamp
* Decoding ML on Substack
* Connect with Maxime and Paul
* Skill Track: AI Fundamentals
* Related Episode: Building Multi-Modal AI Applications with Russ d'Sa, CEO & Co-founder of LiveKit
* Rewatch sessions from RADAR: Skills Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.
Event: DataFramed
LLM Engineer's Handbook
2024-10-22
Authors: Paul Iusztin, Maxime Labonne
The "LLM Engineer's Handbook" is your comprehensive guide to mastering Large Language Models from concept to deployment. Written by leading experts, it combines theoretical foundations with practical examples to help you build, refine, and deploy LLM-powered solutions that solve real-world problems effectively and efficiently.

What this book will help me do:
* Understand the principles and approaches for training and fine-tuning Large Language Models (LLMs).
* Apply MLOps practices to design, deploy, and monitor your LLM applications effectively.
* Implement advanced techniques such as retrieval-augmented generation (RAG) and preference alignment.
* Optimize inference for high performance, addressing low latency and high availability for production systems.
* Develop robust data pipelines and scalable architectures for building modular LLM systems.

Author(s): Paul Iusztin and Maxime Labonne are experienced AI professionals specializing in natural language processing and machine learning. With years of industry and academic experience, they are dedicated to making complex AI concepts accessible and actionable. Their collaborative authorship ensures a blend of theoretical rigor and practical insights tailored for modern AI practitioners.

Who is it for? This book is tailored for AI engineers, NLP professionals, and LLM practitioners who wish to deepen their understanding of Large Language Models. Ideal readers possess some familiarity with Python, AWS, and general AI concepts. If you aim to apply LLMs to real-world scenarios or enhance your expertise in AI-driven systems, this handbook is designed for you.
Event: O'Reilly Data Engineering Books
AI Meetup (April): GenAI, LLMs in Production
2024-04-16 · 17:00
** Important: RSVP HERE (due to limited room capacity, you must pre-register at the link for admission).

Welcome to the AI meetup in London. Join us for deep-dive tech talks on AI, GenAI, LLMs, and machine learning, food and drink, and networking with speakers and fellow developers.

Agenda:
* 6:00pm–7:00pm: Check-in and networking
* 7:00pm–9:00pm: Tech talks and Q&A
* 9:00pm: Open discussion and mixer

Tech Talk: Improving the Usefulness of LLMs with RAG
Speaker: Andreas Eriksen (Vespa)
Abstract: LLMs like GPT can give useful answers to many questions, but there are also well-known issues with their output: responses may be outdated, inaccurate, or outright hallucinations, and it's hard to know when you can trust them. They also don't know anything about you or your organization's private data (we hope). RAG can help reduce the problem of "hallucinated" answers and make responses more up-to-date, accurate, and personalized by injecting related knowledge, including non-public data. In this talk, we'll go through what RAG means, demo some ways you can implement it, and warn of some traps you still have to watch out for.

Tech Talk: Efficiently Fine-Tuning LLMs
Speaker: Maxime Labonne (ML Scientist)
Abstract: Fine-tuning LLMs is a fundamental technique for companies to customize models for their specific needs. In this talk, we will cover when fine-tuning is appropriate, popular libraries for efficient fine-tuning, and key techniques. We will explore both supervised fine-tuning (LoRA, QLoRA) and preference alignment (PPO, DPO, KTO) methods.

Speakers/Topics: Stay tuned as we update speakers and schedules. If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Sponsors: We are actively seeking sponsors to support the AI developer community, whether by offering venue space, providing food, or contributing cash sponsorship. Sponsors will not only speak at the meetups and receive prominent recognition, but also gain exposure to our extensive membership base of 10,000+ AI developers in London and 350K+ worldwide.

Community on Slack/Discord
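The RAG talk abstract describes the core pattern: retrieve documents relevant to a query, then inject them into the prompt so the model can ground its answer. A minimal dependency-free sketch of that flow is below; the toy corpus, the word-overlap scoring, and the idea of sending `prompt` to a model are illustrative assumptions, not Vespa's API or the speaker's implementation.

```python
# Minimal RAG sketch: retrieve, then inject context into the prompt.
# Real systems use vector or hybrid search instead of word overlap.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word-overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject the retrieved context ahead of the question."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The April AI meetup starts at 6:00pm with check-in and networking.",
    "Vespa supports hybrid search combining vectors and keywords.",
    "LoRA fine-tunes low-rank adapter matrices instead of full weights.",
]

query = "What time does the meetup start?"
prompt = build_prompt(query, retrieve(query, corpus))
# `prompt` would now be sent to an LLM; grounding it in retrieved text
# is what reduces outdated or hallucinated answers.
```

Swapping the overlap scorer for an embedding-based retriever changes nothing else in the flow, which is why RAG is usually framed as a pipeline around the model rather than a change to the model itself.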
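The fine-tuning talk names LoRA, whose key idea is to freeze the original weight matrix W (d x k) and train only a low-rank update B @ A with B (d x r) and A (r x k), r much smaller than d and k. A toy pure-Python illustration of that parameter saving is below; the matrices and numbers are made up for the sketch, and real code would use a library such as Hugging Face PEFT rather than hand-rolled matmuls.

```python
# Toy LoRA illustration: adapt W with a low-rank product instead of
# training all of W. Pure Python keeps the sketch dependency-free.

def matmul(X, Y):
    """Plain matrix product of nested lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, k, r = 4, 4, 1  # r << min(d, k) is what makes LoRA cheap
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen
B = [[0.1] for _ in range(d)]       # trainable, d x r
A = [[0.2, 0.0, 0.0, 0.0]]          # trainable, r x k

delta = matmul(B, A)                # low-rank update, d x k
W_adapted = [[W[i][j] + delta[i][j] for j in range(k)] for i in range(d)]

lora_params = d * r + r * k         # 8 trainable numbers
full_params = d * k                 # 16 for full fine-tuning
```

At realistic sizes (d, k in the thousands, r around 8-64) the same arithmetic shrinks trainable parameters by orders of magnitude, which is what makes fine-tuning feasible on modest hardware; QLoRA pushes further by also quantizing the frozen W.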