At VodafoneZiggo, we're building LLM-powered digital tools that provide instant information, automate repetitive tasks, and will ultimately serve as a digital buddy for field technicians. This talk explores how these projects improve efficiency and transform fieldwork, paving the way for a more effective and better-informed technical workforce.
Topic: Large Language Models (LLM)
Since the turbulent rise of generative AI following the release of the first version of ChatGPT in 2022, media companies too have felt a strong push to integrate AI into their organizations. What steps has DPG Media taken? How is this changing its workflows and its journalism? Which AI-based tools has it built to help journalists in their work, or to help readers find articles and generate summaries?
Join us as we explore ABN AMRO's journey to optimize the customer chatbot, Anna, enhancing client interactions and service delivery. We focus on analysing conversational data, particularly where outcomes are unclear, using advancements in large language models. Our goal is to extract insights that improve Anna's performance. By employing semi-supervised and few-shot learning techniques, we fine-tuned our OpenAI model and uncovered valuable insights. This presentation will showcase our methodologies and findings, offering potential benefits for technical teams within and beyond our organization, and propelling future innovations.
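To make the few-shot idea concrete, here is a minimal sketch of outcome labelling with the OpenAI Python SDK; the model name, outcome categories, and example transcript are illustrative assumptions, not ABN AMRO's actual setup.

```python
# Minimal few-shot sketch: label chatbot conversations whose outcome is unclear.
# Categories, example, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = [
    {"role": "system", "content": "Classify the outcome of a chatbot conversation "
                                  "as one of: resolved, escalated, abandoned."},
    {"role": "user", "content": "Customer: how do I reset my password?\n"
                                "Bot: <steps>\nCustomer: thanks, that worked!"},
    {"role": "assistant", "content": "resolved"},
]

def label_conversation(transcript: str) -> str:
    """Return the model's outcome label for one conversation transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=FEW_SHOT + [{"role": "user", "content": transcript}],
        temperature=0,  # deterministic labels for analysis
    )
    return response.choices[0].message.content.strip()
```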
Panel discussion with experts exploring the real-world use of LLM agents, sharing lessons on reliability, debugging, and architectures beyond hype.
A workshop for entrepreneurs and managers who want to use AI for strategy and decision-making. Learn how to use tools such as Claude, ChatGPT, Grok, Perplexity, and NotebookLM for better decisions, market analysis, and positioning. Discover how to apply AI structurally within your organization and strengthen your leadership with smart, data-driven insights. Practical, up to date, and immediately applicable.
How do organizations move from predictive ML to impactful Generative AI? This session presents a strategic blueprint for this transition. It showcases how Google leverages Gemini and AI Agents to automate complex engineering workflows, achieving an 80% reduction in time spent on issue resolution. Gain a framework for fostering innovation, enabling teams, and driving measurable results with LLMs.
Brought to You By:
• Statsig: The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that let engineering teams measure the impact of their work. This toolkit is so valuable to so many teams that OpenAI, itself a huge user of Statsig, decided to acquire the company, with the news announced last week. Talk about validation! Check out Statsig.
• Linear: The system for modern product development. Here's an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself.

The Pragmatic Engineer Podcast is back with the Fall 2025 season. Looking ahead, expect new episodes on most Wednesdays.

Code Complete is one of the most enduring books on software engineering. Steve McConnell wrote the 900-page handbook just five years into his career, capturing what he wished he'd known when starting out. Decades later, the lessons remain relevant, and Code Complete remains a best-seller.

In this episode, we talk about what has aged well, what needed updating in the second edition, and the broader career principles Steve has developed along the way. From his "career pyramid" model to his critique of "lily pad hopping", and why periods of working in fast-paced, all-in environments can be so rewarding, the emphasis throughout is on taking ownership of your career and making deliberate choices.

We also discuss:
• Top-down vs. bottom-up design and why most engineers default to one approach
• Why rewriting code multiple times makes it better
• How taking a year off to write Code Complete crystallized key lessons
• The 3 areas software designers need to understand, and why focusing only on technology may be the most limiting
• And much more!

Steve rarely gives interviews, so I hope you enjoy this conversation, which we recorded in Seattle.

Timestamps:
(00:00) Intro
(01:31) How and why Steve wrote Code Complete
(08:08) What code construction is and how it differs from software development
(11:12) Top-down vs. bottom-up design approach
(14:46) Why design documents frustrate some engineers
(16:50) The case for rewriting everything three times
(20:15) Steve's career before and after Code Complete
(27:47) Steve's career advice
(44:38) Three areas software designers need to understand
(48:07) Advice when becoming a manager, as a developer
(53:02) The importance of managing your energy
(57:07) Early Microsoft and why startups are a culture of intense focus
(1:04:14) What changed in the second edition of Code Complete
(1:10:50) AI's impact on software development: Steve's take
(1:17:45) Code reviews and GenAI
(1:19:58) Why engineers are becoming more full-stack
(1:21:40) Could AI be the exception to "no silver bullets?"
(1:26:31) Steve's advice for engineers on building a meaningful career

The Pragmatic Engineer deepdives relevant for this episode:
• What changed in 50 years of computing
• The past and future of modern backend practices
• The Philosophy of Software Design – with John Ousterhout
• AI tools for software engineers, but without the hype – with Simon Willison (co-creator of Django)
• TDD, AI agents and coding – with Kent Beck

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
In the rapidly changing world of data and information, technologies such as LLMs (Large Language Models) are the key to "talking with our data". Discover Expert's transformation from traditional reporting to Expert-Ise, the AI agent that actively supports decision-making.
Dutch industry has been struggling with a growing staff shortage for years. Many technicians and maintenance engineers will reach retirement age within a few years, while too few younger technicians are entering the field to replace them. Key digitalization technologies such as generative AI, large language models (LLMs), and digital twins offer a solution in the form of the virtual service technician. This system combines real-time data with fault analyses and documentation to support technicians in diagnosing and solving problems remotely.
Unlock efficient AI! This playbook explores Small Language Models (SLMs) as a cost-effective alternative to larger LLMs. Learn to select, deploy (local/cloud), and utilize these powerful, often open-source models for high-value, targeted tasks.
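As a taste of the local-deployment option, here is a minimal sketch using the Hugging Face transformers pipeline; the specific checkpoint is an illustrative assumption, not a recommendation from the playbook.

```python
# Sketch: run a small open-source language model locally with Hugging Face
# transformers. The checkpoint below is an illustrative sub-1B instruct model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small enough for CPU inference
)

result = generator(
    "Explain in one sentence why a small model can be the right choice.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```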
LLMs seem like a hot solution now, until you try deploying one. In this episode, Andriy Burkov, machine learning expert and author of The Hundred-Page Machine Learning Book, joins us for a grounded, sometimes blunt conversation about why many LLM applications fail. We talk about sentiment analysis, difficulty with taxonomy, agents getting tripped up on formatting, and why MCP might not solve your problems. If you're tired of the hype and want to understand the real state of applied LLMs, this episode delivers.

What You'll Learn:
• What is often misunderstood about LLMs
• The reliability of sentiment analysis
• How can we make agents more resilient?

📚 Check out Andriy's books on machine learning and LLMs:
• The Hundred-Page Machine Learning Book
• The Hundred-Page Language Models Book: hands-on with PyTorch

🤝 Follow Andriy on LinkedIn!

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

Follow us on Socials: LinkedIn, YouTube, Instagram (Mavens of Data), Instagram (Maven Analytics), TikTok, Facebook, Medium, X/Twitter
How do you use LLMs to categorize hundreds of thousands of products into 1,000 categories at scale? Learn about our journey from manual/rule-based methods, via fine-tuned semantic models, to a robust multi-step process that uses embeddings and LLMs via the OpenAI APIs. This talk offers data scientists and AI practitioners learnings and best practices for putting such a complex LLM-based system into production, including prompt development, balancing cost vs. accuracy via model selection, testing multi-case vs. single-case prompts, and saving costs with the OpenAI Batch API and a smart early-stopping approach. We also describe our automation and monitoring in a PySpark environment.
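A minimal sketch of the embeddings-plus-LLM pattern the talk describes: embeddings shortlist candidate categories, and the LLM makes the final call. The model names and the toy category list are assumptions for illustration, not the production configuration.

```python
# Two-step categorization sketch: embedding similarity shortlists categories,
# then a chat model picks one. Models and categories are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

categories = ["Laptops", "Smartphones", "Headphones"]  # 1,000 in the real system
cat_vecs = embed(categories)

def categorize(product_title: str, top_k: int = 3) -> str:
    vec = embed([product_title])[0]
    # cosine similarity between the product and every category name
    sims = cat_vecs @ vec / (np.linalg.norm(cat_vecs, axis=1) * np.linalg.norm(vec))
    shortlist = [categories[i] for i in np.argsort(sims)[::-1][:top_k]]
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
                   f"Pick the best category for '{product_title}' from: "
                   f"{shortlist}. Answer with the category name only."}],
        temperature=0,
    )
    return answer.choices[0].message.content.strip()
```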
With a focus on healthcare applications where accuracy is non-negotiable, this talk highlights challenges and delivers practical insights on building AI agents that query complex biological and scientific data to answer sophisticated questions. Drawing from our experience developing Owkin-K Navigator, a free-to-use AI co-pilot for biological research, I'll share hard-won lessons about combining natural language processing with SQL querying and vector database retrieval to navigate large biomedical knowledge sources, addressing the challenges of preventing hallucinations and ensuring proper source attribution. This session is ideal for data scientists, ML engineers, and anyone interested in applying the Python and LLM ecosystem to the healthcare domain.
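One of the attribution lessons can be sketched in a few lines: keep a source id on every retrieved chunk and instruct the model to cite only those ids. The retriever here is a stand-in, since the actual system combines SQL querying with vector search.

```python
# Hedged sketch of grounded answering with source attribution. Chunks come from
# your retriever of choice; the id format (e.g. PMID) is illustrative.
from openai import OpenAI

client = OpenAI()

def answer_with_sources(question: str, chunks: list[dict]) -> str:
    """chunks: [{'id': 'PMID:12345', 'text': '...'}, ...] from retrieval."""
    context = "\n".join(f"[{c['id']}] {c['text']}" for c in chunks)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content":
             "Answer using ONLY the sources below. Cite source ids in brackets. "
             "If the sources do not contain the answer, say so."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,  # reduce the temptation to improvise beyond the sources
    )
    return resp.choices[0].message.content
```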
Evaluating large language models (LLMs) in real-world applications goes far beyond standard benchmarks. When LLMs are embedded in complex pipelines, choosing the right models, prompts, and parameters becomes an ongoing challenge.
In this talk, we will present a practical, human-in-the-loop evaluation framework that enables systematic improvement of LLM-powered systems based on expert feedback. By combining domain expert insights and automated evaluation methods, it is possible to iteratively refine these systems while building transparency and trust.
This talk will be valuable for anyone who wants to ensure their LLM applications can handle real-world complexity - not just perform well on generic benchmarks.
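As one illustration of how such a framework can be wired up, here is a minimal sketch that stores expert verdicts next to an automated judge score so their agreement can be tracked across iterations; the field names and threshold are assumptions, not the speakers' design.

```python
# Human-in-the-loop evaluation sketch: expert labels sit beside an automated
# LLM-as-judge score, so the judge can be calibrated against the experts.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    prompt_version: str
    question: str
    model_answer: str
    judge_score: float          # automated judge score in [0, 1]
    expert_verdict: str | None  # "correct" / "incorrect", filled in by an expert

def agreement(records: list[EvalRecord], threshold: float = 0.5) -> float:
    """Fraction of expert-labelled records where the automated judge agrees."""
    labelled = [r for r in records if r.expert_verdict is not None]
    hits = sum(
        (r.judge_score >= threshold) == (r.expert_verdict == "correct")
        for r in labelled
    )
    return hits / len(labelled) if labelled else 0.0
```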
Using AI agents and automation, PyCon DE & PyData volunteers have transformed chaos into streamlined conference ops. From YAML files to LLM-powered assistants, they automate speaker logistics, FAQs, video processing, and more while keeping humans focused on creativity. This case study reveals practical lessons on making AI work in real-world scenarios: structured workflows, validation, and clear context beat hype. Live demos and open-source tools included.
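A small sketch of the "validation beats hype" principle: parse model output into a strict schema and route anything malformed to a human. The SpeakerUpdate fields are hypothetical, not the volunteers' actual schema.

```python
# Validate LLM output against a strict schema before acting on it.
from pydantic import BaseModel, ValidationError

class SpeakerUpdate(BaseModel):
    talk_id: str
    speaker_email: str
    action: str  # e.g. "confirm", "reschedule", "cancel"

def parse_llm_output(raw_json: str) -> SpeakerUpdate | None:
    try:
        return SpeakerUpdate.model_validate_json(raw_json)
    except ValidationError:
        return None  # route to a human instead of acting on bad output
```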
Using LiteLLM in a Real-World RAG System: What Worked and What Didn’t
LiteLLM provides a unified interface to work with multiple LLM providers—but how well does it hold up in practice? In this talk, I’ll share how we used LiteLLM in a production system to simplify model access and handle token budgets. I’ll outline the benefits, the hidden trade-offs, and the situations where the abstraction helped—or got in the way. This is a practical, developer-focused session on integrating LiteLLM into real workflows, including lessons learned and limitations. If you’re considering LiteLLM, this talk offers a grounded look at using it beyond simple prototypes.
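For readers who have not used it, LiteLLM's core abstraction looks roughly like this: one completion() call routed to different providers by model name. The model identifiers are illustrative choices.

```python
# Sketch of LiteLLM's unified interface: the same call shape works across
# providers, with responses mirroring the OpenAI format.
from litellm import completion

for model in ["gpt-4o-mini", "anthropic/claude-3-haiku-20240307"]:
    resp = completion(
        model=model,
        messages=[{"role": "user", "content": "One-line summary of RAG, please."}],
        max_tokens=60,
    )
    print(model, "->", resp.choices[0].message.content)
```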
AI agents are having a moment, but most of them are little more than fragile prototypes that break under pressure. Together, we’ll explore why so many agentic systems fail in practice, and how to fix that with real engineering principles. In this talk, you’ll learn how to build agents that are modular, observable, and ready for production. If you’re tired of LLM demos that don’t deliver, this talk is your blueprint for building agents that actually work.
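A minimal sketch of one such engineering principle, observability: a bounded tool-calling loop where every step is logged, so failures surface instead of disappearing. The tool registry and step cap are illustrative choices, not the talk's blueprint.

```python
# Observable, bounded agent loop sketch: each step is logged before execution,
# and any exception is recorded rather than swallowed.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

TOOLS = {"add": lambda a, b: a + b}  # modular: tools are plain functions

def run_agent(plan: list[dict], max_steps: int = 10) -> list:
    """plan: [{'tool': 'add', 'args': {'a': 1, 'b': 2}}, ...], e.g. from an LLM."""
    results = []
    for i, step in enumerate(plan[:max_steps]):  # hard cap on steps
        log.info("step %d: %s", i, json.dumps(step))
        try:
            results.append(TOOLS[step["tool"]](**step["args"]))
        except Exception:
            log.exception("step %d failed; aborting", i)  # observable, not silent
            break
    return results
```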
In this tutorial, you will play several games that can be used to teach Machine Learning concepts. Each game can be played in large or small groups. Some involve hands-on material such as cards; others involve an electronic app. Every game illustrates one or more concepts from Machine Learning.
As an outcome, you will take away multiple ideas that make complex topics more understandable and enjoyable. In doing so, we want to demonstrate that Machine Learning does not require computers: the core ideas can be illustrated in a clear and memorable way without them. We also want to show that gamification is not limited to online quiz questions, but offers ways for learners to bond.
We will bring a set of carefully selected games that have been proven in a big classroom setting and contain useful abstractions of linear models, decision trees, LLMs, and several other Machine Learning concepts. We also believe the tutorial will simply be fun to take part in.
Small Language Models (SLMs) offer an efficient and cost-effective alternative to LLMs—especially when latency, privacy, inference costs or deployment constraints matter. However, training them typically requires large labeled datasets and is time-consuming, even if it isn't your first rodeo.
This talk presents an end-to-end approach for curating high-quality synthetic data using LLMs to train domain-specific SLMs. Using a real-world use case, we’ll demonstrate how to reduce manual labeling time, cut costs, and maintain performance—making SLMs viable for production applications.
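A minimal sketch of the synthetic-data step, assuming the OpenAI SDK and a hypothetical support-ticket domain; the resulting records would then be written to JSONL and used to fine-tune the SLM.

```python
# Hedged sketch of synthetic-data curation with an LLM: generate short, varied
# examples for a given label and keep them as training records for the SLM.
# Domain, label names, and model are hypothetical.
from openai import OpenAI

client = OpenAI()

def synthesize_examples(domain: str, label: str, n: int = 20) -> list[dict]:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   f"Write {n} short, varied {domain} messages that a classifier "
                   f"should label '{label}'. One message per line, no numbering."}],
        temperature=1.0,  # diversity matters more than determinism here
    )
    lines = resp.choices[0].message.content.strip().splitlines()
    return [{"text": line.strip(), "label": label} for line in lines if line.strip()]

# e.g. synthesize_examples("customer-support", "billing_complaint")
```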
Whether you are a seasoned Machine Learning Engineer or just getting started with building AI features, you will come away with the inspiration to build more performant, secure, and environmentally friendly AI systems.
Summary
In this episode of the Data Engineering Podcast, Serge Gershkovich, head of product at SqlDBM, talks about the socio-technical aspects of data modeling. Serge shares his background in data modeling and highlights its importance as a collaborative process between business stakeholders and data teams. He debunks common misconceptions that data modeling is optional or secondary, emphasizing its crucial role in ensuring alignment between business requirements and data structures. The conversation covers challenges in complex environments, the impact of technical decisions on data strategy, and the evolving role of AI in data management. Serge stresses the need for business stakeholders' involvement in data initiatives and a systematic approach to data modeling, warning against relying solely on technical expertise without considering business alignment.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Enterprises today face an enormous challenge: they're investing billions into Snowflake and Databricks, but without strong foundations, those investments risk becoming fragmented, expensive, and hard to govern. And that's especially evident in large, complex enterprise data environments. That's why companies like DirecTV and Pfizer rely on SqlDBM. Data modeling may be one of the most traditional practices in IT, but it remains the backbone of enterprise data strategy. In today's cloud era, that backbone needs a modern approach built natively for the cloud, with direct connections to the very platforms driving your business forward. Without strong modeling, data management becomes chaotic, analytics lose trust, and AI initiatives fail to scale. SqlDBM ensures enterprises don't just move to the cloud, they maximize their ROI by creating governed, scalable, and business-aligned data environments. If global enterprises are using SqlDBM to tackle the biggest challenges in data management, analytics, and AI, isn't it worth exploring what it can do for yours? Visit dataengineeringpodcast.com/sqldbm to learn more.
Your host is Tobias Macey and today I'm interviewing Serge Gershkovich about how and why data modeling is a sociotechnical endeavor.

Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing the activities that you think of when someone says the term "data modeling"?
What are the main groupings of incomplete or inaccurate definitions that you typically encounter in conversation on the topic?
How do those conceptions of the problem lead to challenges and bottlenecks in execution?
Data modeling is often associated with data warehouse design, but it also extends to source systems and unstructured/semi-structured assets. How does the inclusion of other data localities help in the overall success of a data/domain modeling effort?
Another aspect of data modeling that often consumes a substantial amount of debate is which pattern to adhere to (star/snowflake, data vault, one big table, anchor modeling, etc.). What are some of the ways that you have found effective to remove that as a stumbling block when first developing an organizational domain representation?
While the overall purpose of data modeling is to provide a digital representation of the business processes, there are inevitable technical decisions to be made. What are the most significant ways that the underlying technical systems can help or hinder the goals of building a digital twin of the business?
What impact (positive and negative) are you seeing from the introduction of LLMs into the workflow of data modeling?
How does tool use (e.g. MCP connection to warehouse/lakehouse) help when developing the transformation logic for achieving a given domain representation?
What are the most interesting, innovative, or unexpected ways that you have seen organizations address the data modeling lifecycle?
What are the most interesting, unexpected, or challenging lessons that you have learned while working with organizations implementing a data modeling effort?
What are the overall trends in the ecosystem that you are monitoring related to data modeling practices?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
sqlDBM
SAP
Joe Reis
ERD == Entity Relationship Diagram
Master Data Management
dbt
Data Contracts
Data Modeling With Snowflake book by Serge (affiliate link)
Type 2 Dimension
Data Vault
Star Schema
Anchor Modeling
Ralph Kimball
Bill Inmon
Sixth Normal Form
MCP == Model Context Protocol

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA