talk-data.com
Activities & events

Applied AI: Navigating Legacy Systems and Building Agentic Workflows
2026-01-15 · 16:45

Hi everyone! Many of you asked for more practical, real-world AI use cases, and we listened. For our first meetup of 2026, we're bringing you two deeply technical stories from the front lines of applied AI, together with AI Native Netherlands. We'll hear how the ANWB navigates the challenges of imperfect data in a legacy organization, and then dive into a practical guide for building production-grade AI agentic workflows with Elastic. A huge thank you to our friends at Elastic for hosting us at their Amsterdam office. Food and drinks will be provided!

Speakers 1: Yke Rusticus & David Brummer (ANWB)
Yke is a data engineer at ANWB with a background in astronomy and artificial intelligence. In industry, he learned that AI models and algorithms often do not get past the experimentation phase, which led him to specialise in MLOps to bridge the gap between experimentation and production. Yke has developed ML platforms and use cases across different cloud providers, and is passionate about sharing his knowledge through tutorials and trainings. David is a self-described "not your typical data scientist" who loves analogue photography, vegan food, and dogs, and holds an unofficial PhD in thrifting and sourcing second-hand pearls. With a background in growth hacking and experience in the digital marketing trenches of a startup, a scale-up, and a digital agency, he now brings together lean-startup thinking, marketing know-how, and sales pitches, blending it all with a passion for creativity and tech at the ANWB. As a bridge between business and data, David focuses on building AI solutions that don't just work, but actually get used.

Talk: How AI is helping you back on the road
We learn at school what AI can do when the data is perfect. We learn at conferences what AI can do when the environment is perfect. In this talk, you'll learn what AI can do when neither is perfect. This story is about overcoming those challenges in an organisation that has been around since the invention of the bike. We'll balance the technical and human aspects of these solutions throughout the talk, because in the end it's not actually AI helping you back on the road, it's people.

Speaker 2: Hans Heerooms (Elastic)
Hans Heerooms is a Senior Solutions Architect at Elastic. He has worked in various roles, but always with one objective: helping organisations get the most out of their data with the least amount of effort. His current role at Elastic is all about supporting Elastic's customers as they evolve from data-driven decisions to AI-guided workflows.

Talk: Building Production-Grade AI Agentic Workflows with Elastic
This talk shows how Elastic Agent Builder can help build and implement agentic workflows. It addresses the complexity of traditional development by integrating all the necessary components (LLM orchestration, vector database, tracing, and security) directly into the Elasticsearch Search AI Platform. You'll see how to build custom agents, declare and assign tools, and start conversations with your data.

Agenda:
17:45 — Arrival, food & drinks
18:30 — Talk #1 | Yke & David (ANWB)
19:15 — Short break
19:30 — Talk #2 | Hans Heerooms (Elastic)
20:15 — Open conversation, networking & more drinks
21:00 — Wrapping up

Please note that the main door closes at 18:00. You will still be able to enter our office, but we might ask you to wait a little while we come down to open the door for you.

What to bring: Just curiosity and questions. If you're working on MLOps, applied AI, or building agentic workflows, we'd love to hear your thoughts.

Who this is for: Data scientists, AI/ML engineers, data engineers, MLOps specialists, SREs, architects, and engineering leaders focused on building and using real-world AI solutions.

Where to find us: Elastic Amsterdam, Keizersgracht 281, 1016 ED Amsterdam

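To make the second talk's retrieval layer concrete, here is a minimal, hypothetical sketch of a vector (kNN) search against Elasticsearch using the official Python client. The index name, field name, and embedding model are assumptions, and the Agent Builder tooling itself is not shown; this only illustrates the kind of search an agent tool would call.

```python
# Minimal vector-search sketch with the official Elasticsearch Python client.
# Index name "docs", field "embedding", and the embedding model are assumptions.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")       # assumed local cluster
model = SentenceTransformer("all-MiniLM-L6-v2")   # any embedding model works

question = "How do I declare a tool for an agent?"
query_vector = model.encode(question).tolist()

# kNN search against a dense_vector field: the retrieval step an agent tool could call.
resp = es.search(
    index="docs",
    knn={
        "field": "embedding",
        "query_vector": query_vector,
        "k": 5,
        "num_candidates": 50,
    },
    source=["title", "body"],
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```
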
Hands-On LLM Engineering with Python (Part 1)
2025-12-18 · 18:00

REGISTER BELOW FOR MORE AVAILABLE DATES: https://luma.com/stelios

Who is this for? Students, developers, and anyone interested in using Large Language Models (LLMs) to build real software solutions with Python. Tired of vibe coding with AI tools? Want to actually understand and own your code, instead of relying on black-box magic? This session shows you how to build LLM systems properly, with full control and clear engineering principles.

Who is leading the session? The session is led by Dr. Stelios Sotiriadis, CEO of Warestack, Associate Professor and MSc Programme Director at Birkbeck, University of London, specialising in cloud computing, distributed systems, and AI engineering. Stelios holds a PhD from the University of Derby, completed a postdoctoral fellowship at the University of Toronto, and has worked on industry and research projects with Huawei, IBM, Autodesk, and multiple startups. Since moving to London in 2018, he has been teaching at Birkbeck. In 2021, he founded Warestack, building software for startups around the world.

What will we cover? A hands-on introduction to building software with LLMs using Python, Ollama, and LiteLLM. The session focuses on theory, fundamentals, and real code you can reuse.

Why LiteLLM? LiteLLM gives you low-level control to build custom LLM solutions your own way, without a heavy framework like LangChain, so you understand how everything works and design your own architecture. A dedicated LangChain session will follow for those who want to go further.

What are the requirements? Bring a laptop with Python installed (Windows, macOS, or Linux), along with Visual Studio Code or a similar IDE, at least 10 GB of free disk space, and 8 GB of RAM.

What is the format? A 3-hour live session. This is a highly practical, hands-on class focused on code and building working LLM systems.

What are the prerequisites? A good understanding of programming with Python is required (basic to intermediate level). You should already be comfortable writing Python scripts.

What comes after? Participants will receive an optional mini capstone project with one-to-one personalised feedback.

Is it just one session? This is the first session in a new sequence on applied AI, covering agents, RAG systems, vector databases, and production-ready LLM workflows. Later sessions will dive deeper into topics such as embeddings with deep neural networks, LangChain, advanced retrieval, and multi-agent architectures.

How many participants? To keep this interactive, only 15 spots are available. Please register as soon as possible.

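As a taste of the kind of code this combination enables (not official course material), here is a minimal sketch of calling a locally served Ollama model through LiteLLM's provider-agnostic completion API; the model name and prompt are illustrative assumptions.

```python
# Minimal sketch: call a local Ollama model through LiteLLM.
# Assumes Ollama is running locally with the "llama3" model pulled.
import litellm

response = litellm.completion(
    model="ollama/llama3",                  # provider prefix + local model name
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Explain what a vector embedding is in two sentences."},
    ],
    api_base="http://localhost:11434",      # default Ollama endpoint
)
print(response.choices[0].message.content)
```

Because LiteLLM returns an OpenAI-compatible response object, swapping the `model` string (plus an API key) is usually all it takes to target a hosted provider instead of the local Ollama server.
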
n8n: From Fundamentals to Building Intelligent Automation Pipeline
2025-11-26 · 11:00

Mastering nodes, integrations, debugging, and deployment through a real-world AI content automation project — Moein Foroughi

In this hands-on workshop, we'll go beyond drag-and-drop demos — exploring how n8n really works under the hood, how to connect it with external APIs and databases, how to debug complex flows, and how to manage production-grade deployments. Along the way, we'll build a real, functional project: an AI-assisted article writer that integrates search, scraping, and structured data storage. This project serves as a practical example to apply every key concept of n8n — from node logic and credentials management to error handling, scaling, and containerisation. We'll walk through the key steps end to end.

By the end, you'll know how to build and run powerful n8n workflows that connect seamlessly with modern AI tools and data systems.

Level: Beginner to Intermediate — basic familiarity with Docker, Linux, and modern AI tools will help you get the most out of the session, but all key concepts will be introduced from first principles.

About the speaker: Moein Foroughi is a DevOps engineer focused on automation and scalable systems, with a professional interest in applying AI and modern technologies to improve engineering workflows and operational efficiency.

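One pattern related to the workshop's topic of connecting n8n with external systems is triggering a workflow from your own code through a Webhook node. The sketch below is a generic, hypothetical example; the webhook path, port, and payload fields are assumptions rather than part of the workshop project.

```python
# Hypothetical example of kicking off an n8n workflow from external code.
# Assumes a workflow whose first node is a Webhook node on the path
# "article-writer", running on a self-hosted n8n instance (default port 5678).
import requests

N8N_WEBHOOK_URL = "http://localhost:5678/webhook/article-writer"  # assumed path

payload = {
    "topic": "vector databases",
    "tone": "technical",
    "max_words": 800,
}

resp = requests.post(N8N_WEBHOOK_URL, json=payload, timeout=30)
resp.raise_for_status()
# Whatever the workflow's "Respond to Webhook" node returns comes back here.
print(resp.json())
```
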
Building Enterprise AI That Works
2025-10-15 · 23:00

Topic: Building the Next Generation of Enterprise AI: From Intelligent Automation to Document Search with RAG

Description: The promise of AI is here, but how do we move from hype to tangible business value? Organizations today are drowning in unstructured data and slowed by complex manual workflows. The next generation of enterprise AI offers a powerful solution, capable of not just automating tasks but understanding, reasoning, and interacting with information in unprecedented ways. Join Bibin Prathap, a Microsoft MVP for AI and a seasoned AI & Analytics leader, for a deep dive into the practical architecture and application of modern enterprise AI. Drawing from his hands-on experience building an AI-driven workflow automation platform and a generative AI document explorer, Bibin will demystify the core technologies transforming the modern enterprise. This session will provide a technical roadmap for building impactful, scalable, and intelligent systems.

Who Should Attend: This session is designed for AI Engineers, Data Scientists, Software Architects, Developers, and Tech Leaders who are responsible for implementing AI solutions and driving digital transformation.

ABOUT US
WeCloudData is the leading accredited education institute in North America focused on Data Science, Data Engineering, DevOps, Artificial Intelligence, and Business Intelligence. Developed by industry experts and hiring managers, and highly recognized by our hiring partners, WeCloudData's learning paths have helped many students make successful transitions into data and DevOps roles that fit their backgrounds and passions. WeCloudData provides a different, more practical teaching methodology, so that students not only learn the technical skills but also acquire the soft skills that will make them stand out in a work environment. WeCloudData has also partnered with many big companies to help them adopt the latest tech in Data, AI, and DevOps. Visit our website for more information: https://weclouddata.com

Google's engineering culture
2025-10-15 · 20:32
Gergely Orosz – host, Elin Nilsson – tech industry researcher @ The Pragmatic Engineer

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Something interesting is happening with the latest generation of tech giants: rather than building advanced experimentation tools themselves, companies like Anthropic, Figma, Notion, and a bunch of others are just using Statsig. Statsig has rebuilt this entire suite of data tools that was available at maybe 10 or 15 giants until now. Check out Statsig.
• Linear – The system for modern product development. Linear is just so fast to use, and it enables velocity in product workflows. Companies like Perplexity and OpenAI have already switched over, because simplicity scales. Go ahead and check out Linear and see why it feels like a breeze to use.

What is it really like to be an engineer at Google? In this special deep-dive episode, we unpack how engineering at Google actually works. We spent months researching the engineering culture of the search giant and talked with 20+ current and former Googlers to bring you this deep dive with Elin Nilsson, tech industry researcher for The Pragmatic Engineer and a former Google intern.

Google has always been an engineering-driven organization. We talk about its custom stack and tools, the design-doc culture, and the performance and promotion systems that define career growth. We also explore the culture that feels built for engineers: generous perks, a surprisingly light on-call setup often considered the best in the industry, and a deep focus on solving technical problems at scale. If you are thinking about applying to Google or are curious about how the company's engineering culture has evolved, this episode takes a clear look at what it was like to work at Google in the past versus today, and who is a good fit for today's Google.

Jump to interesting parts: (13:50) Tech stack · (1:05:08) Performance reviews (GRAD) · (2:07:03) The culture of continuously rewriting things

Timestamps:
(00:00) Intro
(01:44) Stats about Google
(11:41) The shared culture across Google
(13:50) Tech stack
(34:33) Internal developer tools and monorepo
(43:17) The downsides of having so many internal tools at Google
(45:29) Perks
(55:37) Engineering roles
(1:02:32) Levels at Google
(1:05:08) Performance reviews (GRAD)
(1:13:05) Readability
(1:16:18) Promotions
(1:25:46) Design docs
(1:32:30) OKRs
(1:44:43) Googlers, Nooglers, ReGooglers
(1:57:27) Google Cloud
(2:03:49) Internal transfers
(2:07:03) Rewrites
(2:10:19) Open source
(2:14:57) Culture shift
(2:31:10) Making the most of Google, as an engineer
(2:39:25) Landing a job at Google

The Pragmatic Engineer deepdives relevant for this episode:
• Inside Google's engineering culture
• Oncall at Google
• Performance calibrations at tech companies
• Promotions and tooling at Google
• How Kubernetes is built
• The man behind the Big Tech comics: Google cartoonist Manu Cornet

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected]. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Event: The Pragmatic Engineer

Building Agentic AI for Semantic Search
2025-09-18 · 16:00
As part of the Future of Data and AI: Agentic AI Conference, join us for an immersive, hands-on workshop that guides you through building Agentic RAG AI applications using Pinecone and AWS — no prior machine learning background required! Whether you're just starting out or enhancing your expertise, this workshop will equip you with the skills to create powerful, production-ready Agentic RAG systems. 📌 Registration is required. Register now to secure your spot.

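For a sense of the retrieval step an agentic RAG workflow calls into, here is a minimal sketch using the Pinecone Python client. The index name, embedding model, and documents are assumptions, and the workshop's AWS-hosted components and agent framework are not shown.

```python
# Minimal sketch of an Agentic RAG retrieval tool backed by Pinecone.
# Assumes a serverless index named "workshop-demo" already exists with
# dimension 384 (matching the all-MiniLM-L6-v2 embedding model).
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("workshop-demo")                 # hypothetical index name
model = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dim embeddings

# Ingest a few documents (id, vector, metadata).
docs = {
    "doc-1": "Pinecone stores and searches dense vectors at scale.",
    "doc-2": "Agentic RAG lets an agent decide when and what to retrieve.",
}
index.upsert(vectors=[
    {"id": doc_id, "values": model.encode(text).tolist(), "metadata": {"text": text}}
    for doc_id, text in docs.items()
])

# The agent's retrieval tool: embed the question, query the top-k matches.
question = "What does agentic RAG add on top of plain RAG?"
results = index.query(
    vector=model.encode(question).tolist(),
    top_k=3,
    include_metadata=True,
)
for match in results.matches:
    print(match.score, match.metadata["text"])
```
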
Building Scalable LLM Evaluation Pipelines with Azure Cosmos DB
2025-09-09 · 18:00
This hands-on workshop teaches participants to build cost-effective evaluation systems for RAG applications using Azure Cosmos DB's vector search capabilities. Attendees will learn to implement semantic caching techniques that significantly reduce LLM evaluation costs while maintaining fast query performance. Participants will create a complete evaluation pipeline that measures retrieval quality, answer accuracy, and system performance using industry-standard metrics. By the end of this session, attendees will have production-ready code and benchmarking tools that can scale across different deployment environments. This session is part of a series.

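To make the semantic-caching idea concrete, here is a generic sketch: before paying for another LLM evaluation call, check whether a semantically near-identical query was already scored. An in-memory numpy store stands in for the Cosmos DB vector index used in the session; the similarity threshold and embedding model are illustrative assumptions.

```python
# Generic semantic-cache sketch (in-memory stand-in for a vector index).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

class SemanticCache:
    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold
        self.embeddings: list[np.ndarray] = []
        self.results: list[dict] = []

    def lookup(self, query: str):
        if not self.embeddings:
            return None
        q = model.encode(query, normalize_embeddings=True)
        sims = np.stack(self.embeddings) @ q    # cosine similarity (vectors normalized)
        best = int(np.argmax(sims))
        return self.results[best] if sims[best] >= self.threshold else None

    def store(self, query: str, result: dict):
        self.embeddings.append(model.encode(query, normalize_embeddings=True))
        self.results.append(result)

cache = SemanticCache()
cache.store("What is the capital of France?", {"answer_quality": 0.95})

# Returns the cached score if the paraphrase clears the threshold, else None,
# in which case the (expensive) evaluation call would run and be stored.
print(cache.lookup("Which city is France's capital?"))
```
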
The Definitive Guide to OpenSearch
2025-09-02
Learn how to harness the power of OpenSearch effectively with The Definitive Guide to OpenSearch. This book explores installation, configuration, query building, and visualization, guiding readers through practical use cases and real-world implementations. Whether you're building search experiences or analyzing data patterns, this guide equips you thoroughly.

What this book will help me do:
• Understand core OpenSearch principles, architecture, and the mechanics of its search and analytics capabilities.
• Learn how to perform data ingestion, execute advanced queries, and produce insightful visualizations on OpenSearch Dashboards.
• Implement scaling strategies and optimal configurations for high-performance OpenSearch clusters.
• Explore real-world case studies that demonstrate OpenSearch applications in diverse industries.
• Gain hands-on experience through practical exercises and tutorials for mastering OpenSearch functionality.

Authors: Jon Handler, Soujanya Konka, and Prashant Agrawal, celebrated experts in search technologies and big-data analysis, bring their years of experience at AWS and beyond to this book. Their collective expertise ensures that readers receive both core theoretical knowledge and practical guidance they can apply directly.

Who is it for? This book is aimed at developers, data professionals, engineers, and systems operators who work with search systems or analytics platforms. It is especially suitable for people handling large-scale data who want to improve their skills or deploy OpenSearch in production environments. Early learners and seasoned experts alike will find valuable insights.

Event: O'Reilly Data Engineering Books

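As a small, hypothetical taste of the index-and-query loop the book walks through, here is a sketch using the opensearch-py client; the host, credentials, and index name are placeholders for a local cluster.

```python
# Minimal index-and-search sketch with opensearch-py (local cluster assumed).
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),   # placeholder local credentials
    use_ssl=False,
)

client.indices.create(index="logs", ignore=400)   # ignore "index already exists"

client.index(
    index="logs",
    body={"service": "checkout", "level": "ERROR", "message": "payment timeout"},
    refresh=True,
)

resp = client.search(
    index="logs",
    body={"query": {"match": {"message": "timeout"}}},
)
print(resp["hits"]["total"], resp["hits"]["hits"][0]["_source"])
```
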
From Bits to Tables: The Evolution of S3 Storage
2025-08-05 · 00:02
Andy Warfield – guest @ Amazon, Tobias Macey – host

Summary: In this episode of the Data Engineering Podcast, Andy Warfield talks about the functionality of S3 Tables and S3 Vectors and their integration into modern data stacks. Andy shares his journey through the tech industry and his role at Amazon, where he collaborates on enhancing storage capabilities, discussing the evolution of S3 from a simple storage solution to a sophisticated system supporting advanced data types like tables and vectors that are crucial for analytics and AI-driven applications. He explains the motivations behind introducing S3 Tables and Vectors, highlighting their role in simplifying data management and enhancing performance for complex workloads, and shares insights into the technical challenges and design considerations involved in developing these features. The conversation explores potential applications of S3 Tables and Vectors in fields like AI, genomics, and media, and discusses future directions for S3's development to further support data-driven innovation.

Announcements: Hello and welcome to the Data Engineering Podcast, the show about modern data management. Tired of data migrations that drag on for months or even years? What if I told you there's a way to cut that timeline by up to 6x while guaranteeing accuracy? Datafold's Migration Agent is the only AI-powered solution that doesn't just translate your code; it validates every single data point to ensure perfect parity between your old and new systems. Whether you're moving from Oracle to Snowflake, migrating stored procedures to dbt, or handling complex multi-system migrations, they deliver production-ready code with a guaranteed timeline and fixed price. Stop burning budget on endless consulting hours. Visit dataengineeringpodcast.com/datafold to book a demo and see how they're turning months-long migration nightmares into week-long success stories. Your host is Tobias Macey, and today I'm interviewing Andy Warfield about S3 Tables and Vectors.

Interview:
• Introduction
• How did you get involved in the area of data management?
• Can you describe what your goals are with the Tables and Vector features of S3?
• How did the experience of building S3 Tables inform your work on S3 Vectors?
• There are numerous implementations of vector storage and search. How do you view the role of S3 in the context of that ecosystem?
• The most directly analogous implementation that I'm aware of is the Lance table format. How would you compare the implementation and capabilities of Lance with what you are building with S3 Vectors?
• What opportunity do you see for offering a protocol-compatible implementation similar to the Iceberg compatibility that you provide with S3 Tables?
• Can you describe the technical implementation of the Vectors functionality in S3?
• What are the sources of inspiration that you looked to in designing the service?
• Can you describe some of the ways that S3 Vectors might be integrated into a typical AI application?
• What are the most interesting, innovative, or unexpected ways that you have seen S3 Tables/Vectors used?
• What are the most interesting, unexpected, or challenging lessons that you have learned while working on S3 Tables/Vectors?
• When is S3 the wrong choice for Iceberg or Vector implementations?
• What do you have planned for the future of S3 Tables and Vectors?

Contact Info: LinkedIn

Parting Question: From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story.

Links: S3 Tables, S3 Vectors, S3 Express, Parquet, Iceberg, Vector Index, Vector Database, pgvector, Embedding Model, Retrieval Augmented Generation, TwelveLabs, Amazon Bedrock, Iceberg REST Catalog, Log-Structured Merge Tree, S3 Metadata, Sentence Transformer, Spark, Trino, Daft

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Event: Data Engineering Podcast

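Since the episode discusses Iceberg compatibility and REST catalogs, here is a hedged, generic sketch of reading an Iceberg table through a REST catalog with PyIceberg. The catalog URI, namespace, and table name are hypothetical placeholders, not AWS-specific configuration; consult the S3 Tables documentation for the actual endpoint and authentication settings.

```python
# Generic PyIceberg sketch: read an Iceberg table via a REST catalog.
# The URI and table identifier below are placeholders, not real endpoints.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "analytics",
    **{
        "type": "rest",
        "uri": "https://example-iceberg-rest-endpoint.example.com/",  # placeholder
    },
)

table = catalog.load_table("web.page_views")    # hypothetical namespace.table

# Scan a slice of the table into Arrow for local analysis.
arrow_table = table.scan(limit=1_000).to_arrow()
print(arrow_table.num_rows, arrow_table.schema)
```
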
Workshop: Designing, deploying, and evaluating multi-agent systems using Snowflake Cortex
2025-07-30 · 17:05
Josh Reini
– Developer Advocate
@ Snowflake
As enterprise AI adoption accelerates, data agents that can plan, retrieve, reason, and act across structured and unstructured sources are becoming foundational. But building agents that work is no longer enough, you need to build agents you can trust. This 60-minute workshop walks through how to design, deploy, and evaluate multi-agent systems using Snowflake Cortex. You’ll build agents that connect to enterprise data sources (structured and unstructured) and perform intelligent, multi-step operations with Cortex Analyst and Cortex Search. Then we’ll go beyond functionality and focus on reliability. You’ll learn how to instrument your agent with inline, reference-free evaluation to measure goal progress, detect failure modes, and adapt plans dynamically. Using trace-based observability tools like TruLens and Cortex eval APIs, we’ll show how to identify inefficiencies and refine agent behavior iteratively. By the end of this workshop, you’ll: - Build a data agent capable of answering complex queries across multiple data sources - Integrate inline evaluation to guide and assess agent behavior in real time - Debug and optimize execution flows using trace-level observability - Leave with a repeatable framework for deploying trustworthy agentic systems in production |
WEBINAR "Building Reliable Multi-Agent Systems in the Enterprise"
|
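To illustrate the inline, trace-based evaluation idea in a framework-free way, here is a small pure-Python sketch that wraps each agent step, records a span with latency and a simple reference-free score, and prints the trace afterwards. It does not use TruLens or the Cortex eval APIs covered in the workshop; step names and scorers are toy assumptions.

```python
# Framework-free sketch of inline, trace-based evaluation of agent steps.
import time
from functools import wraps

TRACE: list[dict] = []

def traced_step(name, scorer=None):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            output = fn(*args, **kwargs)
            TRACE.append({
                "step": name,
                "latency_s": round(time.perf_counter() - start, 4),
                "score": scorer(output) if scorer else None,
            })
            return output
        return wrapper
    return decorator

# Toy reference-free checks: did retrieval return enough context, did we answer?
@traced_step("retrieve", scorer=lambda docs: min(1.0, len(docs) / 3))
def retrieve(question):
    return ["doc about revenue", "doc about churn"]

@traced_step("answer", scorer=lambda text: 1.0 if text else 0.0)
def answer(question, docs):
    return f"Based on {len(docs)} documents: ..."

docs = retrieve("How did churn trend last quarter?")
answer("How did churn trend last quarter?", docs)
for span in TRACE:
    print(span)
```
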
How Skyscanner Runs Real-Time AI at Scale with Databricks
2025-06-11 · 18:30
Ahmed Bilal – Staff Product Manager @ Databricks, Michael Ewins – Director of Engineering @ Skyscanner

Deploying AI in production is getting more complex — with different model types, tighter timelines, and growing infrastructure demands. In this session, we'll walk through how Mosaic AI Model Serving helps teams deploy and scale both traditional ML and generative AI models efficiently, with built-in monitoring and governance. We'll also hear from Skyscanner on how they've integrated AI into their products, scaled to 100+ production endpoints, and built the processes and team structures to support AI at scale.

Key takeaways:
• How Skyscanner ships and operates AI in real-world products
• How to deploy and scale a variety of models with low latency and minimal overhead
• Building compound AI systems using models, feature stores, and vector search
• Monitoring, debugging, and governing production workloads

Event: Data + AI Summit 2025

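For context on what "production endpoints" look like from the caller's side, here is a hedged sketch of invoking a Mosaic AI Model Serving endpoint over REST, following the documented /serving-endpoints/&lt;name&gt;/invocations pattern. The workspace URL, endpoint name, and input schema are placeholders, not details from the session.

```python
# Hedged sketch: score a request against a Databricks Model Serving endpoint.
# Workspace host, endpoint name, and input fields are placeholders.
import os
import requests

DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"   # placeholder
ENDPOINT_NAME = "flight-price-model"                                 # hypothetical

resp = requests.post(
    f"{DATABRICKS_HOST}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    json={"dataframe_records": [{"origin": "EDI", "destination": "BCN", "days_ahead": 30}]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```
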
Powering AI Workflows with Couchbase: Semantic Search and AI Agents
2025-04-24 · 17:00
AI development is evolving rapidly, and databases must keep pace. Databases play a critical role in modern AI applications: they must consistently provide fast, accurate search across large datasets. In this talk, we'll explore how vector search fits into AI development workflows and how Couchbase offers a unique, flexible approach by combining NoSQL with vector capabilities. We'll cover the fundamentals of vector databases and AI agents, dive into Couchbase's architecture and its integration with AI frameworks, and wrap up with a live demo showing how to build an AI-powered PDF chatbot using Couchbase, LangChain, and Google Gemini. Whether you're building intelligent search, recommendation systems, or agentic applications, you'll walk away with practical insights on using Couchbase to bring AI to production.

Speaker: Georgina Martin (LinkedIn). Georgina is a Solutions Engineer and has been at Couchbase since 2022, working closely with enterprise and midmarket customers, helping them build modern, scalable applications leveraging Couchbase's advanced NoSQL solutions. She is passionate about innovative technology and delivering customers the highest-quality solutions possible. Outside of work, Georgina enjoys bouldering and travel.

Event agenda:
6:00 to 6:20: Entry
6:20 to 7:20: Talk and Q&A
7:20 to 7:50: Networking, pizza and drinks

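For a feel of the retrieval half of a PDF chatbot like the one demoed in this talk, here is a generic sketch: extract text, embed chunks, and find the passages most similar to a question. A numpy array stands in for Couchbase's vector index, and the answer-generation step (Gemini via LangChain in the talk) is omitted; the file name and chunk size are illustrative.

```python
# Generic PDF-retrieval sketch (stand-in for a vector database, no LLM call).
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Extract and chunk the PDF text.
reader = PdfReader("manual.pdf")                      # hypothetical input file
text = "\n".join(page.extract_text() or "" for page in reader.pages)
chunks = [text[i : i + 800] for i in range(0, len(text), 800)]

# 2. Embed the chunks (normalized so a dot product equals cosine similarity).
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

# 3. Retrieve the top-3 chunks for a question; in a full RAG pipeline these
#    would be passed to the LLM as grounding context.
question = "What is the warranty period?"
q_vec = model.encode(question, normalize_embeddings=True)
top_idx = np.argsort(chunk_vecs @ q_vec)[::-1][:3]
for i in top_idx:
    print(f"--- chunk {i} ---\n{chunks[i][:200]}")
```
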
Multi-Agent API with LangGraph and Azure Cosmos DB
2025-04-16 · 19:00
The rise of multi-agent AI applications is transforming how we build intelligent systems, but how do you architect them for real-world scalability and performance? In this session, we'll take a deep dive into a production-grade multi-agent application built with LangGraph for agent orchestration, FastAPI for the API layer, and Azure Cosmos DB as the backbone for state management, vector storage, and transactional data. Through a detailed code walkthrough, you'll see how to design and implement an agent-driven workflow that seamlessly integrates retrieval-augmented generation (RAG), memory persistence, and dynamic state transitions.

By the end of this session, you'll have a clear blueprint for building and deploying your own scalable, cloud-native multi-agent applications that harness the power of modern AI and cloud infrastructure. Whether you're an AI engineer, cloud architect, or Python developer, this talk will equip you with practical insights and battle-tested patterns for building the next generation of AI-powered applications.

📌 Learn more about the series here.
Prerequisites:
• Join the Hackathon
• Learning Resources

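As a stripped-down sketch of the architecture this session describes, here is a tiny LangGraph state machine exposed through FastAPI. The LLM call, the Cosmos DB-backed state and vector stores, and real retrieval are replaced by in-process stubs, so the node names and payload fields are illustrative only.

```python
# Minimal LangGraph + FastAPI sketch with stubbed retrieval and generation.
from typing import TypedDict

from fastapi import FastAPI
from langgraph.graph import END, StateGraph

class AgentState(TypedDict):
    question: str
    docs: list[str]
    answer: str

def retrieve(state: AgentState) -> dict:
    # Stand-in for a vector search against Azure Cosmos DB.
    return {"docs": [f"stub document relevant to: {state['question']}"]}

def respond(state: AgentState) -> dict:
    # Stand-in for an LLM call that grounds its answer in the retrieved docs.
    return {"answer": f"Answer based on {len(state['docs'])} document(s)."}

builder = StateGraph(AgentState)
builder.add_node("retrieve", retrieve)
builder.add_node("respond", respond)
builder.set_entry_point("retrieve")
builder.add_edge("retrieve", "respond")
builder.add_edge("respond", END)
graph = builder.compile()

app = FastAPI()

@app.post("/ask")
def ask(payload: dict):
    state = graph.invoke({"question": payload["question"], "docs": [], "answer": ""})
    return {"answer": state["answer"]}
```

Saved as app.py, this would run locally with `uvicorn app:app`; swapping the stubs for real Cosmos DB and LLM clients is where the session's detailed walkthrough comes in.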