We'll explore how the AI hype cycle has created a tendency to reach for complex solutions where simpler ones would work better. Drawing from real-world deployment experiences, this talk examines the common practice of deploying AI agents and LLMs when a pseudorandom number generator or basic rule engine would be more appropriate and maintainable. We'll consider how this "AI-first" approach often leads to unnecessary complexity and systems that are harder to debug and maintain. The difference between experienced engineers and newcomers isn't knowing how to build sophisticated AI systems—it's knowing when not to build them. We'll look at practical approaches to technology selection, where complexity is driven by actual requirements rather than trends. The talk will demonstrate how to identify the right point on the spectrum from simple randomization to advanced neural networks, examining the trade-offs between over-engineering and under-engineering. These concepts will be illustrated through a basic insurance system analysis, showing how different architectural decisions performed in practice and what we learned from both successful and failed approaches.
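A hedged sketch of the spectrum this abstract describes, using a hypothetical insurance-claim triage task. The names, fields, and threshold are invented for illustration, and call_llm stands in for whatever model client you would actually use:

```python
import random

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def triage_random(claim: dict) -> str:
    # Baseline: a useful control when measuring whether anything smarter actually helps.
    return random.choice(["auto-approve", "manual-review"])

def triage_rules(claim: dict) -> str:
    # Transparent, debuggable, cheap: often all the requirements call for.
    if claim["amount"] < 1_000 and not claim["prior_flags"]:
        return "auto-approve"
    return "manual-review"

def triage_llm(claim: dict) -> str:
    # Reach for this only when the simpler options demonstrably fall short.
    return call_llm(f"Classify this claim as auto-approve or manual-review: {claim}")

print(triage_rules({"amount": 450, "prior_flags": []}))  # -> auto-approve
```

Keeping all three behind the same signature makes it cheap to compare them against the actual requirement before committing to the most complex one.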
talk-data.com
Topic: llms (102 tagged)
The latest version of OpenSearch introduces powerful new features and tools that enable us to build AI-powered assistants capable of interacting not only with our data, but also with cluster configurations and even the external world. We’ll explore how these capabilities open the door to intelligent agent-based systems that can reason, retrieve, and act. We'll walk through how to combine OpenSearch with modern AI tools, like LLMs, agents, and orchestration frameworks, to create assistants that can autonomously diagnose issues, generate insights, or even automate operational tasks.
Introductory session covering mental models of LLMs, prompt engineering techniques, and evaluation of LLMs. Modules include: How LLMs are trained; Prompt Engineering; Evaluating LLMs.
In this talk, we will examine how LLM outputs are evaluated by potential end users versus professional linguist-annotators, as two ways of ensuring alignment with real-world user needs and expectations. We will compare the two approaches, highlight the advantages and recurring pitfalls of user-driven annotation, and share the mitigation techniques we have developed from our own experience.
How can we influence quality during the prompt-creation stage, and how can we work with already-generated text—improving it, identifying errors, and filtering out undesirable results? We'll explore linguistic approaches that help achieve better, more controlled outcomes from LLMs.
This session introduces Dana, a local-first agent programming language designed for building AI agents. Get a working expert agent in minutes:
- Long-running, multi-step agent workflows on a single line: step1 | step2 | [step3a, step3b, step3c] | step4 (see the sketch below)
- Built-in concurrency for parallel LLM calls with zero async keywords
- Deterministic execution with learning loops that improve reliability over time
Whether you're dealing with sensitive data, air-gapped requirements, or cloud API limitations—come see what agent development looks like when everything runs locally.
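Dana's own syntax is shown in the bullets above; as a rough, non-authoritative illustration of the same pipeline shape (sequential steps with a parallel fan-out) in plain Python, with threads standing in for Dana's built-in concurrency and purely hypothetical step functions:

```python
from concurrent.futures import ThreadPoolExecutor

def step1(x): return f"{x}->1"
def step2(x): return f"{x}->2"
def step3a(x): return f"{x}->3a"
def step3b(x): return f"{x}->3b"
def step3c(x): return f"{x}->3c"
def step4(results): return " + ".join(results) + "->4"

def run_pipeline(data):
    # step1 | step2: run sequentially
    data = step2(step1(data))
    # [step3a, step3b, step3c]: fan out in parallel and collect every result
    with ThreadPoolExecutor() as pool:
        fanned_out = list(pool.map(lambda step: step(data), (step3a, step3b, step3c)))
    # | step4: join the fan-out back into a single step
    return step4(fanned_out)

print(run_pipeline("input"))
```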
When working with Large Language Models (LLMs), how do we ensure a probabilistic blob of text is something our code can actually use? In this talk, we explore how Pydantic emerged at the perfect moment for exactly this task: bridging Python's flexibility with the structured data needs of modern AI applications. We will introduce Pydantic and then demonstrate practical applications of it, from prompt engineering and parsing responses to examples of robust function calling and tool chaining via APIs.
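A minimal sketch of the parsing-and-validation pattern the abstract describes; the schema and the raw response string below are invented for illustration, and `raw` stands in for whatever text an LLM actually returns:

```python
from pydantic import BaseModel, Field, ValidationError

class Ticket(BaseModel):
    summary: str
    priority: int = Field(ge=1, le=5)
    tags: list[str] = []

raw = '{"summary": "Login page times out", "priority": 2, "tags": ["auth", "latency"]}'

try:
    ticket = Ticket.model_validate_json(raw)   # parse and validate in one step
    print(ticket.priority, ticket.tags)
except ValidationError as err:
    # A malformed or off-schema response fails loudly here instead of deep in your code,
    # and the error text can be fed back to the model to request a corrected answer.
    print(err)
```

The same model can also be exported with `Ticket.model_json_schema()` and handed to an LLM as a function or tool definition, which is where the function-calling and tool-chaining examples come in.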
Connecting Fabric data with LLM-based agents
Franciszek Górski is a PhD student at the Doctoral School of the Gdansk University of Technology, conducting research on the development of systems combining expert knowledge and natural language processing capabilities of large language models. Since 2021, he has been involved in various research projects at the Multimedia Systems Department, resulting in publications in respected scientific journals. He will present the results of his research paper entitled Integrating Expert Knowledge into Logical Programs via LLMs, which introduces ExKLoP, a framework designed to evaluate the ability of LLMs to integrate expert knowledge into logical reasoning systems, while assessing their potential for self-correction.
Curious how to apply resource-intensive generative AI models across massive datasets without breaking the bank? Join this session to discover efficient batch inference strategies for foundation models on Databricks. Learn how to build scalable, cost-effective pipelines that power LLMs and other generative AI systems—optimized for performance, quality, and throughput. We’ll also dive into ai_query, a powerful new capability that lets you run generative AI directly on your data using SQL-like syntax. See how it simplifies development, unlocks new use cases, and accelerates insights with live demos and real-world examples.
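A minimal sketch of what ai_query looks like from a Databricks notebook (where `spark` is predefined); the endpoint name, table, and column are assumptions, not part of the abstract:

```python
summaries = spark.sql("""
    SELECT
        review_id,
        ai_query(
            'databricks-meta-llama-3-1-70b-instruct',   -- a model serving endpoint (assumed)
            CONCAT('Summarize this review in one sentence: ', review_text)
        ) AS summary
    FROM reviews
""")
summaries.show(truncate=False)
```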
Learn how to turn SurrealDB into a long-term memory layer for your LLM apps by combining graph data and vector embeddings to power richer context and better decisions. Store persistent memories with graph-linked facts; perform similarity search and structured reasoning in one query; use vector embeddings and graph hops inside SurrealDB. This session walks through practical patterns and demonstrates how SurrealDB collapses graph, vector, and relational data into a single memory substrate for next-gen AI.
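A rough sketch of that memory pattern as SurrealQL statements held in Python strings (run them with whichever SurrealDB client you use); the table and field names, embedding dimension, and the MTREE index / knn operator syntax are assumptions to check against your SurrealDB version:

```python
MEMORY_SCHEMA = """
DEFINE TABLE memory SCHEMAFULL;
DEFINE FIELD text      ON memory TYPE string;
DEFINE FIELD embedding ON memory TYPE array<float>;
DEFINE INDEX memory_vec ON memory FIELDS embedding MTREE DIMENSION 384;
"""

STORE_MEMORY = """
CREATE memory SET text = $text, embedding = $embedding;
RELATE $memory_id->about->$topic_id;    -- graph-link the fact to a topic record
"""

RECALL = """
-- vector similarity and a graph hop in a single query
SELECT text, ->about->topic.name AS topics
FROM memory
WHERE embedding <|5|> $query_embedding;
"""
```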
The main focus of the talk is ensuring coherent behavior of an NPC using (pretty much any) LLM. Technology-agnostic, this talk presents the general problem and a simple solution.
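The abstract does not spell out the solution, so the following is only one common, model-agnostic pattern for the problem it names: pin a fixed persona and replay a rolling window of prior dialogue on every call. call_llm is a placeholder for any LLM client, and the persona text is invented:

```python
from collections import deque

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any LLM client here")

class NPC:
    def __init__(self, persona: str, memory_turns: int = 6):
        self.persona = persona                     # fixed character sheet, never changes
        self.memory = deque(maxlen=memory_turns)   # rolling window of recent dialogue

    def say(self, player_line: str) -> str:
        history = "\n".join(self.memory)
        prompt = f"{self.persona}\n{history}\nPlayer: {player_line}\nNPC:"
        reply = call_llm(prompt)
        self.memory.append(f"Player: {player_line}")
        self.memory.append(f"NPC: {reply}")
        return reply

blacksmith = NPC("You are Mira, a gruff blacksmith. Answer in one or two sentences "
                 "and never discuss anything outside the forge.")
```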
Info session about LLM Mini Bootcamp; join to ask questions and receive a discount coupon.
How to extend the capabilities of a composable platform to build an AI Lakehouse, support batch and real-time AI applications in production, and manage LLMs while ensuring data governance and security. Through concrete examples, such as the design of a TikTok-style recommendation system, Lex Avstreikh discusses a vision of the future and how data platforms must evolve to meet the growing demands of the AI era.
Moderator: Larry Swanson. Panelists: Peter Haase (metaphacts), Harald Sack (FIZ Karlsruhe & KIT Karlsruhe), Jennifer Lechner (d-fine), André Teege (Piterion), Alexander Garcia (Siemens Energy). Topics include: Can LLMs actually “understand” symbols, or are they just statistically impressive? How does symbolic reasoning enhance comprehension and trust? Should neuro-symbolic AI be the gold standard for safety and regulation? Is interpretability more important than raw performance? Whose knowledge do symbolic systems represent—and what are the implications?
Unlock the power of AI agents—even if you’re just starting out. In this hands-on, beginner-friendly workshop, you'll go from understanding how Large Language Models (LLMs) work to building a real AI agent using Python, LangChain, and LangGraph. Live Demo: Your First AI Agent — follow along as we build an AI agent that retrieves, reasons, and responds using LangChain and LangGraph.
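A minimal sketch of the kind of agent loop this workshop builds toward; the model name and the stubbed retrieval step are assumptions for illustration, not the workshop's exact code:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

class AgentState(TypedDict):
    question: str
    context: str
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model name

def retrieve(state: AgentState) -> dict:
    # Stand-in for a real retriever (vector store, search API, ...).
    return {"context": "LangGraph wires LLM steps into a stateful graph."}

def respond(state: AgentState) -> dict:
    msg = llm.invoke(f"Context: {state['context']}\n\nQuestion: {state['question']}")
    return {"answer": msg.content}

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("respond", respond)
graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "respond")
graph.add_edge("respond", END)
agent = graph.compile()

print(agent.invoke({"question": "What does LangGraph do?"})["answer"])
```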
Lightning talk on boosting developer productivity with large language models.