talk-data.com

Event

PyData London 2025

2025-06-06 – 2025-06-08 PyData

Activities tracked: 12

Filtering by: LLM

Sessions & talks

Showing 1–12 of 12 · Newest first

Scaling AI workloads with Ray & Airflow

2025-06-08
talk

Ray is an open-source framework for scaling Python applications, particularly machine learning and AI workloads. It provides a layer for parallel processing and distributed computing. Many large language models (LLMs), including OpenAI's GPT models, are trained using Ray.

On the other hand, Apache Airflow is a well-established data orchestration framework, downloaded more than 20 million times a month.

This talk presents the Airflow Ray provider package, which lets users interact with Ray from an Airflow workflow. I'll show how to use the package to create Ray clusters and how Airflow can trigger Ray pipelines in those clusters.
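
As a rough illustration of the pattern the talk describes, here is a minimal sketch of an Airflow DAG whose task connects to an existing Ray cluster and runs remote functions. It uses the plain Ray Python API rather than the provider package's own operators, and the cluster address is a placeholder.

```python
# Minimal sketch: an Airflow task that submits work to a Ray cluster.
# Uses the plain Ray Python API; the provider package discussed in the
# talk offers dedicated operators/decorators for this instead.
from datetime import datetime

import ray
from airflow.decorators import dag, task


@ray.remote
def score_batch(batch):
    # Placeholder for a CPU/GPU-heavy step that runs on the Ray cluster.
    return sum(batch)


@task
def run_on_ray():
    # "auto" assumes the Airflow worker can reach a running Ray cluster;
    # replace with your cluster address (e.g. "ray://head-node:10001").
    ray.init(address="auto")
    futures = [score_batch.remote(list(range(i, i + 100))) for i in range(0, 1000, 100)]
    results = ray.get(futures)
    ray.shutdown()
    return sum(results)


@dag(schedule=None, start_date=datetime(2025, 1, 1), catchup=False)
def ray_pipeline():
    run_on_ray()


ray_pipeline()
```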

Making LLMs reliable: A practical framework for production

2025-06-08
talk

LLM outputs are non-deterministic, making it difficult to ensure reliability in production, especially in high-risk applications. In this talk, we’ll walk through a structured approach to making LLMs production-ready. We’ll cover setting up tests during experimentation, implementing real-time guardrails before responses reach users, and monitoring live performance for critical issues. Finally, we’ll discuss post-deployment log analysis to drive continuous improvements and build trust with stakeholders.
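
To make the "real-time guardrails" idea concrete, here is a small generic sketch (not the speaker's framework) of a pre-response check that blocks or flags an LLM answer before it reaches the user; the rules, thresholds, and the `generate` callable are placeholders.

```python
# Illustrative guardrail layer: validate an LLM response before returning it.
# The checks and thresholds are placeholders, not the framework from the talk.
import re


def passes_guardrails(response: str) -> tuple[bool, str]:
    if not response.strip():
        return False, "empty response"
    if len(response) > 4000:
        return False, "response too long"
    # Crude PII check: block anything that looks like a card number.
    if re.search(r"\b(?:\d[ -]?){13,16}\b", response):
        return False, "possible PII leak"
    return True, "ok"


def answer(prompt: str, generate) -> str:
    # `generate` is any callable wrapping your LLM client (a placeholder here).
    response = generate(prompt)
    ok, reason = passes_guardrails(response)
    if not ok:
        # In production you might retry, fall back, or escalate to a human.
        return f"Sorry, I can't answer that right now. ({reason})"
    return response
```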

LLM Inference Arithmetics: the Theory behind Model Serving

2025-06-07
talk

Have you ever asked yourself how parameters for an LLM are counted, or wondered why Gemma 2B is actually closer to a 3B model? You have no clue about what a KV-Cache is? (And, before you ask: no, it's not a Redis fork.) Do you want to find out how much GPU VRAM you need to run your model smoothly?

If your answer to any of these questions was "yes", or you have another doubt about inference with LLMs - such as batching, or time-to-first-token - this talk is for you. Well, except for the Redis part.
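
As a back-of-the-envelope illustration of the kind of arithmetic the talk covers (the model dimensions below are illustrative assumptions, not the speaker's figures): the weights need roughly parameter count × bytes per parameter of VRAM, and the KV-cache grows with layers, KV heads, head dimension, sequence length, and batch size.

```python
# Back-of-the-envelope VRAM estimates for serving an LLM.
# All model dimensions below are illustrative assumptions.
params = 7e9            # 7B parameters
bytes_per_param = 2     # fp16/bf16 weights
weight_vram_gb = params * bytes_per_param / 1e9
print(f"weights: ~{weight_vram_gb:.0f} GB")   # ~14 GB

# KV-cache per token = 2 (K and V) * n_layers * n_kv_heads * head_dim * bytes
n_layers, n_kv_heads, head_dim = 32, 8, 128
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_param
seq_len, batch_size = 4096, 8
kv_cache_gb = kv_bytes_per_token * seq_len * batch_size / 1e9
print(f"KV-cache: ~{kv_cache_gb:.1f} GB for batch={batch_size}, seq={seq_len}")
```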

Successful Projects through a bit of Rebellion

2025-06-07
talk

This talk is for leaders who want new techniques to improve their success rates. In the last 15 months I've built a private data science peer mentorship group where we discuss rebellious ideas that improve our ability to make meaningful change in organisations of all sizes.

As a leader you've no doubt had trouble defining new projects (perhaps you've been asked - "add ChatGPT!"), getting buy-in, building support, defining defensible metrics and milestones, hiring, developing your team, dealing with conflict, avoiding overload and ultimately delivering valuable projects that are adopted by the business. I'll share advice across all of these areas based on 25 years of personal experience and the topics we've discussed in my leadership community.

You'll walk away with new ideas, perspectives and references that ought to change how you work with your team and organisation.

Not Another LLM Talk… Practical Lessons from Building a Real-World Adverse Media Pipeline

2025-06-07
talk

LLMs are magical—until they aren’t. Extracting adverse media entities might sound straightforward, but throw in hallucinations, inconsistent outputs, and skyrocketing API costs, and suddenly, that sleek prototype turns into a production nightmare.

Our adverse media pipeline monitors over 1 million articles a day, sifting through vast amounts of news to identify reports of crimes linked to financial bad actors, money laundering, and other risks. Thanks to GenAI and LLMs, we can tackle this problem in new ways—but deploying these models at scale comes with its own set of challenges: ensuring accuracy, controlling costs, and staying compliant in highly regulated industries.

In this talk, we’ll take you inside our journey to production, exploring the real-world challenges we faced through the lens of key personas: Cautious Claire, the compliance officer who doesn’t trust black-box AI; Magic Mike, the sales lead who thinks LLMs can do anything; Just-Fine-Tune Jenny, the PM convinced fine-tuning will solve everything; Reinventing Ryan, the engineer reinventing the wheel; and Paranoid Pete, the security lead fearing data leaks.

Expect practical insights, cautionary tales, and real-world lessons on making LLMs reliable, scalable, and production-ready. If you've ever wondered why your pipeline works perfectly in a Jupyter notebook but falls apart in production, this talk is for you.
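
One concrete pattern for the "inconsistent outputs" problem the abstract mentions (a generic sketch, not the team's actual pipeline) is to validate the model's JSON against a schema and drop or retry anything that does not parse.

```python
# Generic sketch: validate LLM entity-extraction output against a schema.
# Illustrative only; not the pipeline described in the talk.
import json

from pydantic import BaseModel, ValidationError


class AdverseMediaEntity(BaseModel):
    name: str
    crime_type: str
    confidence: float


def parse_entities(raw_llm_output: str) -> list[AdverseMediaEntity]:
    """Return validated entities, skipping anything malformed."""
    try:
        items = json.loads(raw_llm_output)
    except json.JSONDecodeError:
        return []  # in production: retry with a repair prompt, or flag for review
    entities = []
    for item in items:
        try:
            entities.append(AdverseMediaEntity(**item))
        except (ValidationError, TypeError):
            continue  # drop entities that don't match the schema
    return entities
```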

Tackling Data Challenges for Scaling Multi-Agent GenAI Apps with Python

2025-06-07
talk

The use of multiple Large Language Models (LLMs) working together to perform complex tasks, known as multi-agent systems, has gained significant traction. While frameworks like LangGraph and Semantic Kernel can streamline orchestration and coordination among agents, developing large-scale, production-grade systems can bring a host of data challenges. Issues such as supporting multi-tenancy, preserving transactional integrity and state, and managing reliable asynchronous function calls while scaling efficiently can be difficult to navigate.

Leveraging insights from practical experiences in the Azure Cosmos DB engineering team, this talk will guide you through key considerations and best practices for storing, managing, and leveraging data in multi-agent applications at any scale. You'll learn core multi-agent concepts and architectures, and how to manage statefulness and conversation histories, personalize agents through retrieval-augmented generation (RAG), and effectively integrate APIs and function calls.

Aimed at developers, architects, and data scientists at all skill levels, this session will show you how to take your multi-agent systems from the lab to full-scale production deployments, ready to solve real-world problems. We’ll also walk through code implementations that can be quickly and easily put into practice, all in Python.
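
As a toy illustration of the statefulness and conversation-history problem (a generic sketch, not the Azure Cosmos DB approach from the talk), each agent's session can be keyed by tenant and session ID and persisted outside the process.

```python
# Toy illustration of persisting per-agent conversation state keyed by
# tenant and session; a real system would use a database, not a dict.
from dataclasses import dataclass, field


@dataclass
class AgentSession:
    tenant_id: str
    session_id: str
    history: list[dict] = field(default_factory=list)  # chat messages

    def append(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})


class SessionStore:
    """In-memory stand-in for a document store holding agent state."""

    def __init__(self) -> None:
        self._sessions: dict[tuple[str, str], AgentSession] = {}

    def get(self, tenant_id: str, session_id: str) -> AgentSession:
        key = (tenant_id, session_id)
        if key not in self._sessions:
            self._sessions[key] = AgentSession(tenant_id, session_id)
        return self._sessions[key]


store = SessionStore()
session = store.get("tenant-42", "chat-001")
session.append("user", "Summarise yesterday's incidents.")
session.append("assistant", "Here is the summary...")
```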

Keynote: From Next Token Prediction to Reasoning and Beyond

2025-06-07
talk
Jay Alammar (Cohere)

Large Language Models (LLMs) have risen to prominence as some of the most popular technological artifacts of the day. This talk will provide a highly accessible and visual overview of LLM concepts relevant to today's data professionals. This includes looking at present-day Transformer architectures, tokenizers, reward models, reasoning LLMs, agentic trajectories, and the various training stages of a large language model including next-word prediction, instruction-tuning, preference-tuning, and reinforcement learning.
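
As a tiny, purely illustrative sketch of the "next token prediction" idea the keynote starts from (toy vocabulary and hand-picked logits, not material from the talk):

```python
# Toy next-token prediction: turn logits over a vocabulary into a
# probability distribution and pick the most likely continuation.
import math

vocab = ["Paris", "London", "banana", "the"]
logits = [4.2, 3.7, -1.0, 0.5]           # hand-picked scores for illustration

exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]        # softmax

best = max(range(len(vocab)), key=lambda i: probs[i])
print(f"next token: {vocab[best]} (p={probs[best]:.2f})")
```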

Bringing stories to life with AI, data streaming and generative agents

2025-06-07
talk

Explore how AI-powered Generative Agents can evolve in real time using live data streams. Inspired by Stanford's 'Generative Agents' paper, this session dives into building dynamic, AI-driven worlds with Apache Kafka, Flink, and Iceberg - plus LLMs, RAG, and Python. Demos and practical examples included!

Enhancing Fraud Detection with LLM-Generated Profiles: From Analyst Efficiency to Model Performance

2025-06-07
talk

This talk explores how leveraging Large Language Models (LLMs) to generate structured customer profile summaries improved both compliance analyst workflows and fraud scoring models at a financial institution. Attendees will learn how embeddings derived from LLM-generated narratives outperformed traditional manual feature engineering and raw text embeddings, offering insights into practical applications of NLP in fraud detection.
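
A generic sketch of the pattern described here (illustrative only; the model names and toy data are assumptions, not the institution's setup): embed an LLM-generated profile summary and feed the vector to a downstream fraud classifier.

```python
# Illustrative pipeline: embed LLM-generated profile summaries and use the
# vectors as features for a fraud classifier. Model choices are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

summaries = [
    "Customer opened three accounts in one week and wires funds abroad daily.",
    "Long-standing customer with stable salary deposits and no unusual activity.",
]
labels = [1, 0]  # 1 = flagged by analysts, 0 = clean (toy labels)

embedder = SentenceTransformer("all-MiniLM-L6-v2")
X = embedder.encode(summaries)           # shape: (n_profiles, embedding_dim)

clf = LogisticRegression().fit(X, labels)
new_profile = ["New account with rapid cash deposits followed by transfers."]
print(clf.predict_proba(embedder.encode(new_profile))[0][1])  # fraud score
```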

AI agents testing: How to evaluate the unpredictable

2025-06-07
talk

AI agents and multi-step workflows are powerful, but testing them can be tricky. This talk explores practical ways to test these complex systems — like running multi-step simulations, checking tool calls, and using LLMs for evaluation. You'll also learn how to prioritize what to test and set up session-level evaluations with open-source tools.
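
A minimal sketch of the "checking tool calls" idea (a generic pytest-style test; the agent, tools, and expectations are placeholders, not the tools from the talk): assert on which tools the agent invoked and with which arguments, rather than on its non-deterministic wording.

```python
# Minimal sketch of asserting on an agent's tool calls rather than its
# free-text answer. The fake agent below is a stand-in for a real one.
def fake_agent(question: str) -> dict:
    """Stand-in agent that records the tools it 'called'."""
    return {
        "answer": "It is 18°C in London.",
        "tool_calls": [{"name": "get_weather", "args": {"city": "London"}}],
    }


def test_agent_calls_weather_tool():
    result = fake_agent("What's the weather in London?")
    called = [c["name"] for c in result["tool_calls"]]
    # Assert on behaviour (which tool was used, with which arguments),
    # not on the exact wording of the answer.
    assert "get_weather" in called
    assert result["tool_calls"][0]["args"]["city"] == "London"
```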

Sovereign Data for AI with Python

2025-06-07
talk

The only certainty in life is that the pendulum will always swing. Recently, the pendulum has been swinging towards repatriation. However, the infrastructure needed to build and operate AI systems using Python in a sovereign (even air-gapped) environment has changed since the shift towards the cloud. This talk will introduce the infrastructure you need to build and deploy Python applications for AI, from data processing to model training, LLM fine-tuning at scale, and inference at scale. We will focus on open-source infrastructure, including:

- a Python library server (PyPI, Conda, etc.) and avoiding supply-chain attacks
- a container registry that works at scale
- an S3 storage layer
- a database server with a vector index

Forecasting Weather using Time Series ML

2025-06-06
talk

This hands-on workshop covers how to use open-source ML models such as LSTMs and time-series LLMs with Python to forecast weather patterns, along with best practices for data preparation and real-time predictions.
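
To make the LSTM part concrete, here is a small generic sketch (illustrative only, not the workshop's material) of fitting an LSTM to a univariate series, with a synthetic sine wave standing in for real weather data.

```python
# Tiny LSTM forecaster on a synthetic series (stand-in for weather data).
# Illustrative only; real use needs proper scaling, splits, and evaluation.
import numpy as np
import torch
import torch.nn as nn

series = np.sin(np.linspace(0, 20 * np.pi, 1000)).astype(np.float32)
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X_t = torch.from_numpy(X).unsqueeze(-1)   # (samples, window, 1)
y_t = torch.from_numpy(y).unsqueeze(-1)


class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next step


model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(50):                        # a few quick full-batch epochs
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```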