talk-data.com

Topic: Large Language Models (LLM)
Tags: nlp, ai, machine_learning
27 activities tagged
Activity Trend: 158 peak/qtr (2020-Q1 to 2026-Q1)

Activities

Showing filtered results (filtering by: Big Data LDN 2025)

Businesses spend countless hours wrangling data: extracting information from messy PDFs, building dashboards that nobody uses, and attempting to extract insights that simply don’t exist. Surely there’s a better way?

In this session, Vishal Soni and Owen Coyle will show how AI and Alteryx can work together to transform how you handle data. They start with one of the toughest challenges: extracting structured information from unstructured PDFs. Instead of complex regex, manual OCR, or hours of cleanup, you’ll see how LLMs inside Alteryx can instantly convert complex documents into clean, tabular data that’s ready for analysis.
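Alteryx exposes this through its visual tooling, but the underlying pattern is straightforward. As a rough, tool-agnostic sketch: hand the model the raw document text, ask for rows as JSON, and parse the reply into records. The `call_llm` function below is a hypothetical stand-in (here it returns a canned response so the sketch runs on its own); in practice it would be a real chat-completion API call.

```python
import json

# Hypothetical stand-in for a real LLM call; returns a canned reply so the
# sketch is self-contained. Swap in any chat-completion API in practice.
def call_llm(prompt: str) -> str:
    return json.dumps([
        {"invoice_no": "INV-001", "date": "2025-01-15", "total": 1250.00},
        {"invoice_no": "INV-002", "date": "2025-02-03", "total": 310.50},
    ])

def extract_table(pdf_text: str) -> list[dict]:
    """Ask the model to emit rows as JSON, then parse into tabular records."""
    prompt = (
        "Extract every invoice from the document below as a JSON array of "
        'objects with keys "invoice_no", "date", "total".\n\n' + pdf_text
    )
    return json.loads(call_llm(prompt))

rows = extract_table("...raw text pulled from the PDF...")
print(len(rows), rows[0]["invoice_no"])  # 2 INV-001
```

The key design point is the output contract: constraining the model to a fixed JSON schema is what makes the result "clean, tabular data" rather than free-form prose.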

Once the data is processed, Alteryx Auto Insights takes over, producing AI-powered analysis that jumps straight to the “why” behind the numbers. You’ll see how Auto Insights surfaces the most important trends, patterns, anomalies, and actionable insights, all while generating personalized, presentation-ready reports to drive action.

Whether you’re new to Alteryx or already an experienced user, you’ll leave this session with a clear understanding of how AI is changing analytics, turning hours of manual work into instant, actionable insight, and how Alteryx is at the forefront of this change.

AI innovation shouldn’t be gated by pipeline delays or data migrations. This session shows how federated data products deliver instant, trusted access—fueling chatbots, agents, and multi-agent workflows that solve real business problems.

We’ll walk through examples of a semantic layer built with data products that power both BI and AI. You’ll see how data products ensure more accurate AI results, simplify governance, and support experimentation with any LLM or agent framework.

Real-world use cases will include:

* A 1-day chatbot business project for answering questions with governed data

* An autonomous agent driving decisions from live sources

* A multi-agent workflow delivering dynamic, real-time insights

Leave with a practical blueprint to accelerate AI—no warehouse rewrites, no delays, just results.
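The core idea, federated data products behind a shared contract, can be sketched in a few lines. The names and structure below are illustrative, not any specific vendor's API: each product publishes an owner, a schema, and a fetch function over the live source, so a dashboard and an LLM agent resolve the same governed data without copying it into a warehouse first.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal illustration of a "data product": governed data plus a contract.
# All names here are hypothetical, not a specific platform's API.
@dataclass
class DataProduct:
    name: str
    owner: str
    schema: dict                      # column -> type: the published contract
    fetch: Callable[[], list[dict]]   # federated read from the live source

catalog: dict[str, DataProduct] = {}

def register(product: DataProduct) -> None:
    catalog[product.name] = product

register(DataProduct(
    name="orders",
    owner="sales-team",
    schema={"order_id": "str", "amount": "float"},
    fetch=lambda: [{"order_id": "A1", "amount": 42.0}],
))

# A BI dashboard and an AI agent both go through the same governed entry point.
rows = catalog["orders"].fetch()
```

Because governance (ownership, schema) travels with the product, any LLM or agent framework can consume it without a separate access path, which is what keeps the AI results consistent with BI.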

This talk explores the disconnect between fundamental MLOps principles and their practical application in designing, operating, and maintaining machine learning pipelines. Despite the surge in tools and platforms, many teams still struggle with the same underlying issues: brittle data dependencies, poor observability, unclear ownership, and pipelines that silently break once deployed. Architecture alone isn't the answer; systems thinking is. We'll break down these principles, examine their influence on pipeline architecture, and conclude with a straightforward, vendor-agnostic mind-map: a roadmap for building resilient MLOps systems on any project or technology stack.

Topics covered include:

- Modular design: feature, training, inference

- Built-in observability, versioning, reuse

- Orchestration across batch, real-time, LLMs

- Platform-agnostic patterns that scale
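As a toy, platform-agnostic sketch of the first two bullets: separate feature, training, and inference stages, each recording a content hash of its inputs so runs are observable and reproducible. The stage functions and the "model" are deliberately trivial placeholders.

```python
import hashlib
import json

# Toy modular pipeline: feature, training, and inference stages, each
# logging a content hash of its inputs (built-in observability/versioning).
run_log: list[dict] = []

def version_of(data) -> str:
    """Short content hash, so each stage records exactly which inputs it saw."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()[:8]

def feature_stage(raw: list[float]) -> list[float]:
    feats = [x / max(raw) for x in raw]            # toy normalisation
    run_log.append({"stage": "feature", "input_version": version_of(raw)})
    return feats

def training_stage(feats: list[float]) -> dict:
    model = {"mean": sum(feats) / len(feats)}      # toy "model"
    run_log.append({"stage": "training", "input_version": version_of(feats)})
    return model

def inference_stage(model: dict, x: float) -> float:
    run_log.append({"stage": "inference", "model_version": version_of(model)})
    return x * model["mean"]

model = training_stage(feature_stage([2.0, 4.0, 8.0]))
pred = inference_stage(model, 3.0)
```

Because each stage is a standalone function with logged input versions, the same layout reuses across batch, real-time, and LLM orchestration: only the scheduler around the stages changes.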

Most enterprise AI initiatives don’t fail because of bad models. They fail because of bad data. As organizations rush to integrate LLMs and advanced analytics into production, they often hit a roadblock: datasets that are messy, constantly evolving, and nearly impossible to manage at scale.

This session reveals why data is the Achilles’ heel of enterprise AI and how data version control can turn that weakness into a strength. You’ll learn how data version control transforms the way teams manage training datasets, track ML experiments, and ensure reproducibility across complex, distributed systems.

We’ll cover the fundamentals of data versioning, its role in modern enterprise AI architecture, and real-world examples of teams using it to build scalable, trustworthy AI systems. 
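The core mechanism behind tools in this space (DVC is a well-known example) can be illustrated in miniature: snapshot a dataset under a content hash, tag it with a human-readable label, and check it out later byte-for-byte. This is a conceptual sketch, not any real tool's API.

```python
import hashlib
import json

# Toy content-addressed dataset store: the essence of data version control.
# Hypothetical illustration only, not a real tool's interface.
store: dict[str, list[dict]] = {}   # content hash -> immutable snapshot
tags: dict[str, str] = {}           # human label  -> content hash

def commit(dataset: list[dict], tag: str) -> str:
    payload = json.dumps(dataset, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()[:12]
    store[digest] = [dict(row) for row in dataset]   # freeze a copy
    tags[tag] = digest
    return digest

def checkout(tag: str) -> list[dict]:
    """Recover the exact dataset an experiment trained on."""
    return [dict(row) for row in store[tags[tag]]]

v1 = commit([{"text": "good", "label": 1}], "train-v1")
v2 = commit([{"text": "good", "label": 1},
             {"text": "bad", "label": 0}], "train-v2")
```

An ML experiment that records `v1` alongside its metrics can be re-run on identical data later, which is the reproducibility guarantee the talk refers to.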

Whether you’re an ML engineer, data architect, or AI leader, this talk will help you identify critical data challenges before they stall your roadmap, and provide you with a proven framework to overcome them.

Large Language Models (LLMs) are transformative, but static knowledge and hallucinations limit their direct enterprise use. Retrieval-Augmented Generation (RAG) is the standard solution, yet moving from prototype to production is fraught with challenges in data quality, scalability, and evaluation.
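The RAG pattern itself is compact enough to sketch end to end. The version below uses a toy bag-of-words score in place of a real embedding model and vector index, and returns the grounded prompt instead of calling an LLM; both substitutions are stubs for illustration.

```python
# Bare-bones RAG loop: embed the query, retrieve the most similar documents,
# and ground the prompt in them. Toy bag-of-words scoring stands in for a
# real embedding model and vector index.
def embed(text: str) -> dict:
    counts: dict = {}
    for tok in text.lower().split():
        counts[tok] = counts.get(tok, 0) + 1
    return counts

def similarity(a: dict, b: dict) -> float:
    return sum(a[t] * b.get(t, 0) for t in a)

docs = [
    "Refunds are processed within 14 days of the return.",
    "Our office is located in London.",
]

def answer(question: str, k: int = 1) -> str:
    q = embed(question)
    ranked = sorted(docs, key=lambda d: similarity(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    # Stub for the generation step: return the grounded prompt itself.
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = answer("How long do refunds take?")
```

Grounding the prompt in retrieved text is what counters stale knowledge and hallucination; the production challenges the talk names (data quality, scalability, evaluation) all live in the retrieval half of this loop.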

This talk argues the future of intelligent retrieval lies not in better models, but in a unified, data-first platform. We'll demonstrate how the Databricks Data Intelligence Platform, built on a Lakehouse architecture with integrated tools like Mosaic AI Vector Search, provides the foundation for production-grade RAG.

Looking ahead, we'll explore the evolution beyond standard RAG to advanced architectures like GraphRAG, which enable deeper reasoning within Compound AI Systems. Finally, we'll show how the end-to-end Mosaic AI Agent Framework provides the tools to build, govern, and evaluate the intelligent agents of the future, capable of reasoning across the entire enterprise.