talk-data.com

Event

Data + AI Summit 2025

2025-06-09 – 2025-06-13 · Databricks Summit

Activities tracked

509

Filtering by: Databricks

Sessions & talks

Showing 76–100 of 509 · Newest first

Sponsored by: Confluent | Turn SAP Data into AI-Powered Insights with Databricks

2025-06-12 Watch
talk
Rodrigo Sanchez Bredee (Confluent), Sean Falconer (Confluent)

Learn how Confluent simplifies real-time streaming of your SAP data into AI-ready Delta tables on Databricks. In this session, you'll see how Confluent’s fully managed data streaming platform—with unified Apache Kafka® and Apache Flink®—connects data from SAP S/4HANA, ECC, and 120+ other sources to enable easy development of trusted, real-time data products that fuel highly contextualized AI and analytics. With Tableflow, you can represent Kafka topics as Delta tables in just a few clicks—eliminating brittle batch jobs and custom pipelines. You’ll see a product demo showcasing how Confluent unites your SAP and Databricks environments to unlock ERP-fueled AI, all while reducing the total cost of ownership (TCO) for data streaming by up to 60%.

Sponsored by: Datafold | Breaking Free: How Evri is Modernizing SAP HANA Workflows to Databricks with AI and Datafold

2025-06-12 Watch
lightning_talk
Gleb Mezhanskiy (Datafold)

With expensive contracts up for renewal, Evri faced the challenge of migrating 1,000 SAP HANA assets and 200+ Talend jobs to Databricks. This talk will cover how we transformed SAP HANA and Talend workflows into modern Databricks pipelines through AI-powered translation and validation, without months of manual coding. We'll cover:
- Techniques for handling SAP HANA's proprietary formats
- Approaches for refactoring incremental pipelines while ensuring dashboard stability
- The technology enabling automated translation of complex business logic
- Validation strategies that guarantee migration accuracy
We'll share real examples of SAP HANA stored procedures transformed into Databricks code and demonstrate how we maintained 100% uptime of critical dashboards during the transition. Join us to discover how AI is revolutionizing what's possible in enterprise migrations from GUI-based legacy systems to modern, code-first data platforms.

Sponsored by: Dataiku | Agility Meets Governance: How Morgan Stanley Scales ML in a Regulated World

2025-06-12 Watch
talk
Raja Lanka (Morgan Stanley)

In regulated industries like finance, agility can't come at the cost of compliance. Morgan Stanley found the answer in combining Dataiku and Databricks to create a governed, collaborative ecosystem for machine learning and predictive analytics. This session explores how the firm accelerated model development and decision-making, reducing time-to-insight by 50% while maintaining full audit readiness. Learn how no-code workflows empowered business users, while scalable infrastructure powered Terabyte-scale ML. Discover best practices for unified data governance, risk automation, and cross-functional collaboration that unlock innovation without compromising security. Ideal for data leaders and ML practitioners in regulated industries looking to harmonize speed, control, and value.

Sponsored by: Impetus Technologies | Future-Ready Data at Scale: How Shutterfly Modernized for GenAI-Driven Personalization

2025-06-12 Watch
talk
Catalina Toba (Shutterfly), Chris Raub (Impetus)

As a leading personalized product retailer, Shutterfly needed a modern, secure, and performant data foundation to power GenAI-driven customer experiences. However, their existing stack was creating roadblocks in performance, governance, and machine learning scalability. In partnership with Impetus, Shutterfly embarked on a multi-phase migration to Databricks Unity Catalog. This transformation not only accelerated Shutterfly’s ability to provide AI-driven personalization at scale but also improved governance, reduced operational overhead, and laid a scalable foundation for GenAI innovation. Join experts from Databricks, Impetus, and Shutterfly to discover how this collaboration enabled faster data-driven decision-making, simplified compliance, and unlocked the agility needed to meet evolving customer demands in the GenAI era. Learn from their journey and take away best practices for your own modernization efforts.

Sponsored by: Promethium | Delivering Self-Service Data for AI Scale on Databricks

2025-06-12 Watch
lightning_talk
Prat Moghe (Promethium.ai)

AI initiatives often stall when data teams can’t keep up with business demand for ad hoc, self-service data. Whether it’s AI agents, BI tools, or business users—everyone needs data immediately, but the pipeline-centric modern data stack is not built for this scale of agility. Promethium enables data teams to generate instant, contextual data products called Data Answers based on rapid, exploratory questions from the business. Data Answers empower data teams for AI-scale collaboration with the business. We will demo Promethium’s new agent capability to build Data Answers on Databricks for self-service data. The Promethium agent leverages and extends Genie with context from other enterprise data and applications to ensure accuracy and relevance.

Sponsored by: Salesforce | From Data to Action: A Unified and Trusted Approach

2025-06-12 Watch
talk
Kuber Sharma (Salesforce)

Empower AI and agents with trusted data and metadata from an end-to-end unified system. Discover how Salesforce Data Cloud, Agentforce, and Databricks work together to fuel automation, AI, and analytics through a unified data strategy—driving real-time intelligence, enabling zero-copy data sharing, and unlocking scalable activation across the enterprise.

Sponsored by: Securiti | Safely Curating Data to Enable Enterprise AI with Databricks

2025-06-12 Watch
lightning_talk
Jocelyn Houle (Securiti.ai)

This session will explore how developers can easily select, extract, filter, and control data pre-ingestion to accelerate safe AI. Learn how the Securiti and Databricks partnership empowers Databricks users by providing the critical foundation for unlocking scalability and accelerating trustworthy AI development and adoption. Key takeaways:
- Understand how to leverage data intelligence to establish a foundation for frameworks like the OWASP Top 10 for LLMs, NIST AI RMF, and Gartner’s TRiSM
- Learn how automated data curation and syncing address specific risks while accelerating AI development in Databricks
- Discover how leading organizations apply robust access controls across vast swaths of mostly unstructured data
- Learn how to maintain data provenance and control as data is moved and transformed through complex pipelines in the Databricks platform

Techcombank's Multi-Million Dollar Transformation Leveraging Cloud and Databricks

2025-06-12 Watch
talk
Santhosh Mahendiran (Techcombank (TCB))

The migration to the Databricks Data Intelligence Platform has enabled Techcombank to more efficiently unify data from over 50 systems, improve governance, streamline daily operational analytics pipelines, and use advanced analytics tools and AI to create more meaningful, personalized experiences for customers. With Databricks, Techcombank has also introduced key solutions that are reshaping its digital banking services:
- AI-driven lead management: Techcombank's internally developed AI program, the 'Lead Allocation Curated Engine' (LACE), optimizes lead management and provides relationship managers with enriched insights for smarter lead allocation to drive business growth.
- AI-powered digital banking inclusion for small businesses: GeoSense, an AI-powered program, assists frontline workers with analytics-driven insights about which small businesses and merchants to engage in the bank's digital ecosystem.
Further examples will be presented in the session.

Unlocking Cross-Organizational Collaboration to Protect the Environment With Databricks at DEFRA

2025-06-12 Watch
talk
Paul Sinclair (Defra)

Join us to learn how the UK's Department for Environment, Food & Rural Affairs (DEFRA) transformed data use with Databricks’ Unity Catalog, enabling nationwide projects through secure, scalable analytics. DEFRA safeguards the UK's natural environment, but historical fragmentation of data, talent, and tools across siloed platforms and organizations made it difficult to fully exploit the department’s rich data. DEFRA launched its Data Analytics & Science Hub (DASH), powered by the Databricks Data Intelligence Platform, to unify its data ecosystem. DASH enables hundreds of users to access and share datasets securely. A flagship example demonstrates its power: using Databricks to process aerial photography and satellite data to identify peatlands in need of restoration, a complex task made possible through unified data governance, scalable compute, and AI. Attendees will hear about DEFRA’s journey and learn valuable lessons about building a platform that crosses organizational boundaries.

Using Delta-rs and Delta-Kernel-rs to Serve CDC Feeds

2025-06-12 Watch
talk
Stephen Carman (Databricks), Oussama Saoudi (Databricks)

Change data feeds are a common tool for synchronizing changes between tables and performing data processing in a scalable fashion. Serverless architectures offer a compelling solution for organizations looking to avoid the complexity of managing infrastructure. But how can you bring CDFs into a serverless environment? In this session, we'll explore how to integrate change data feeds into serverless architectures using Delta-rs and Delta-kernel-rs, open-source projects that allow you to read Delta tables and their change data feeds in Rust or Python. We’ll demonstrate how to use these tools with Lakestore’s serverless platform to easily stream and process changes. You’ll learn how to:
- Leverage Delta tables and CDFs in serverless environments
- Utilize Databricks and Unity Catalog without needing Apache Spark
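For readers new to change data feeds, the replay semantics the session builds on can be sketched in plain Python. This is a hypothetical illustration of Delta's `_change_type` row markers, not the delta-rs or delta-kernel-rs API:

```python
# Sketch: replaying Delta CDF change rows onto a key-value view of a table.
# CDF rows carry a `_change_type` column with one of four values:
# "insert", "update_preimage", "update_postimage", or "delete".

def apply_cdf(target: dict, changes: list[dict], key: str = "id") -> dict:
    """Replay CDF rows, in commit order, onto a dict keyed by the table key."""
    for row in sorted(changes, key=lambda r: r["_commit_version"]):
        change = row["_change_type"]
        # Strip CDF metadata columns to recover the data row itself
        data = {k: v for k, v in row.items() if not k.startswith("_")}
        if change in ("insert", "update_postimage"):
            target[data[key]] = data        # upsert the new image of the row
        elif change == "delete":
            target.pop(data[key], None)     # remove the deleted row
        # "update_preimage" rows describe the old value; nothing to apply
    return target

changes = [
    {"_change_type": "insert", "_commit_version": 1, "id": 1, "qty": 5},
    {"_change_type": "update_preimage", "_commit_version": 2, "id": 1, "qty": 5},
    {"_change_type": "update_postimage", "_commit_version": 2, "id": 1, "qty": 9},
    {"_change_type": "insert", "_commit_version": 2, "id": 2, "qty": 3},
    {"_change_type": "delete", "_commit_version": 3, "id": 2, "qty": 3},
]
state = apply_cdf({}, changes)  # final state reflects all three commits
```

In practice the change rows would come from a Delta table's change feed rather than a hand-built list; the replay logic is the same.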

Pacers Sports and Entertainment and Databricks

2025-06-12 Watch
talk
Jared Chavez (Pacers Sports and Entertainment), Rick Schultz (Databricks), Ari Kaplan (Databricks)

The Pacers Sports Group has had an amazing year: the Indianapolis Pacers reached the NBA Finals for the first time in 25 years, and the Fever are setting attendance and viewership records with WNBA star Caitlin Clark. Hear how they have transformed their data and AI capabilities for marketing, fan behavior insights, season ticket propensity models, and democratization for their non-technical personas, and how switching to Databricks delivered a 12,000x cost reduction, down to just $8 a year.

Creating a Custom PySpark Stream Reader with PySpark 4.0

2025-06-11 Watch
lightning_talk
Skyler Myers (Entrada)

PySpark supports many data sources out of the box, such as Apache Kafka, JDBC, ODBC, Delta Lake, etc. However, some older systems, such as systems that use the JMS protocol, are not supported by default and require considerable extra work for developers to read from them. One such example is ActiveMQ for streaming. Traditionally, users of ActiveMQ have to use a middle man in order to read the stream with Spark (such as writing to a MySQL DB using Java code and reading that table with Spark JDBC). With PySpark 4.0’s custom data sources (supported in DBR 15.3+), we can cut out the middle man and consume the queues directly from PySpark, in either batch or streaming mode, saving developers considerable time and complexity in getting source data into Delta Lake, governed by Unity Catalog and orchestrated with Databricks Workflows.
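The contract such a custom stream reader implements can be sketched without Spark. The method names below follow PySpark 4.0's `DataSourceStreamReader` (`initialOffset`, `latestOffset`, `read`), but the deque is a plain-Python stand-in for a JMS/ActiveMQ client, so treat this as a shape sketch rather than a working connector:

```python
# Sketch of the stream-reader contract: Spark asks for the current offset,
# then reads all rows between the last committed offset and the new one.
from collections import deque

class QueueStreamReader:
    """Toy reader: each micro-batch drains whatever is currently queued."""

    def __init__(self, queue: deque):
        self.queue = queue
        self.consumed = 0            # messages handed to the engine so far

    def initialOffset(self) -> dict:
        return {"offset": 0}         # start of the stream

    def latestOffset(self) -> dict:
        # Offset advances by however many messages arrived since last batch
        return {"offset": self.consumed + len(self.queue)}

    def read(self, start: dict, end: dict):
        # Emit one row per offset in [start, end)
        for _ in range(end["offset"] - start["offset"]):
            yield (self.queue.popleft(),)
        self.consumed = end["offset"]

q = deque(["msg-1", "msg-2"])        # pretend these arrived on an MQ queue
reader = QueueStreamReader(q)
start, end = reader.initialOffset(), reader.latestOffset()
batch = list(reader.read(start, end))
```

A real implementation would subclass the PySpark data source classes, serialize offsets as JSON, and register the source with `spark.dataSource.register` before using it in `spark.readStream`.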

Disney's Foundational Medallion: A Journey Into Next-Generation Data Architecture

2025-06-11 Watch
lightning_talk

Step into the world of Disney Streaming as we unveil the creation of our Foundational Medallion, a cornerstone in our architecture that redefines how we manage data at scale. In this session, we'll explore how we tackled the multi-faceted challenges of building a consistent, self-service surrogate key architecture — a foundational dataset for every ingested stream powering Disney Streaming's data-driven decisions. Learn how we streamlined our architecture and unlocked new efficiencies by leveraging cutting-edge Databricks features such as liquid clustering, Photon with dynamic file pruning, Delta's identity column, Unity Catalog and more — transforming our implementation into a simpler, more scalable solution. Join us on this thrilling journey as we navigate the twists and turns of designing and implementing a new Medallion at scale — the very heartbeat of our streaming business!

Next-Gen Data Science: How Posit and Databricks Are Transforming Analytics at Scale

2025-06-11 Watch
lightning_talk
James Blair (Posit, PBC)

Modern data science teams face the challenge of navigating complex landscapes of languages, tools and infrastructure. Positron, Posit’s next-generation IDE, offers a powerful environment tailored for data science, seamlessly integrating with Databricks to empower teams working in Python and R. Now integrated within Posit Workbench, Positron enables data scientists to efficiently develop, iterate and analyze data with Databricks — all while maintaining their preferred workflows. In this session, we’ll explore how Python and R users can develop, deploy and scale their data science workflows by combining Posit tools with Databricks. We’ll showcase how Positron simplifies development for both Python and R and how Posit Connect enables seamless deployment of applications, reports and APIs powered by Databricks. Join us to see how Posit + Databricks create a frictionless, scalable and collaborative data science experience — so your teams can focus on insights, not infrastructure.

No-Trust, All Value: Monetizing Analytics With Databricks Clean Rooms

2025-06-11 Watch
lightning_talk
Eddie Edgeworth (Koantek)

In a world where data collaboration is essential but trust is scarce, Databricks Clean Rooms delivers a game-changing model: no data shared, all value gained. Discover how data providers can unlock new revenue streams by launching subscription-based analytics and “Built-on-Databricks” services that run on customer data — without exposing raw data or violating compliance. Clean Rooms integrates Unity Catalog’s governance, Delta Sharing’s secure exchange and serverless compute to enable true multi-party collaboration — without moving data. See how privacy-preserving models like fraud detection, clinical analytics and ad measurement become scalable, productizable and monetizable across industries. Walk away with a proven pattern to productize analytics, preserve compliance and turn trustless collaboration into recurring revenue.

Scaling Blockchain ML With Databricks: From Graph Analytics to Graph Machine Learning

2025-06-11 Watch
lightning_talk
Indra Rustandi (Coinbase)

Coinbase leverages Databricks to scale ML on blockchain data, turning vast transaction networks into actionable insights. This session explores how Databricks’ scalable infrastructure, powered by Delta Lake, enables real-time processing for ML applications like NFT floor price predictions. We’ll show how GraphFrames helps us analyze billion-node transaction graphs (e.g., Bitcoin) for clustering and fraud detection, uncovering structural patterns in blockchain data. But traditional graph analytics has limits. We’ll go further with Graph Neural Networks (GNNs) using Kumo AI, which learn from the transaction network itself rather than relying on hand-engineered features. By encoding relationships directly into the model, GNNs adapt to new fraud tactics, capturing subtle relationships that evolve over time. Join us to see how Coinbase is advancing blockchain ML with Databricks and deep learning on graphs.
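The clustering step described above, grouping wallets that transact with each other, is essentially connected components over the transaction graph. A toy illustration in plain Python via union-find (GraphFrames does this distributed at billion-node scale; the wallet names here are made up):

```python
# Sketch: connected components over transaction edges using union-find,
# the graph-analytics primitive behind wallet clustering.

def connected_components(edges: list[tuple[str, str]]) -> dict[str, str]:
    """Map each node to a representative of its component."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:              # path-halving find
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:                     # union each transaction edge
        parent[find(a)] = find(b)

    return {node: find(node) for node in parent}

# Toy transaction graph: wallets w1-w3 transact together; w4-w5 separately
txns = [("w1", "w2"), ("w2", "w3"), ("w4", "w5")]
components = connected_components(txns)
```

Wallets sharing a representative belong to the same cluster, which is the signal fed into downstream fraud analysis.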

Sponsored by: Acceldata | Agentic Data Management: Trusted Data for Enterprise AI on Databricks

2025-06-11 Watch
lightning_talk
Joe Murphy (Acceldata)

An intelligent, action-driven approach to bridging data engineering and AI/ML workflows, delivering continuous data trust through comprehensive monitoring, validation, and remediation across the entire Databricks data lifecycle. Learn how Acceldata’s Agentic Data Management (ADM) platform:
- Ensures end-to-end data reliability across Databricks, from ingestion and transformation to feature engineering and model deployment
- Bridges data engineering and AI teams by providing unified insights across Databricks jobs, notebooks, and pipelines with proactive data insights and actions
- Accelerates the delivery of trustworthy enterprise AI outcomes by detecting multivariate anomalies, monitoring feature drift, and maintaining lineage within Databricks-native environments

Sponsored by: dbt Labs | Leveling Up Data Engineering at Riot: How We Rolled Out dbt and Transformed the Developer Experience

2025-06-11 Watch
lightning_talk
Marco Garcia (Riot Games)

Riot Games reduced its Databricks compute spend and accelerated development cycles by transforming its data engineering workflows—migrating from bespoke Databricks notebooks and Spark pipelines to a scalable, testable, and developer-friendly dbt-based architecture. In this talk, members of the Developer Experience & Automation (DEA) team will walk through how they designed and operationalized dbt to support Riot’s evolving data needs.

Sponsored by: Google Cloud | Powering AI & Analytics: Innovations in Google Cloud Storage for Data Lakes

2025-06-11 Watch
lightning_talk
Jason Wu (Google)

Enterprise customers need a powerful and adaptable data foundation to navigate demands of AI and multi-cloud environments. This session dives into how Google Cloud Storage serves as a unified platform for modern analytics data lakes, together with Databricks. Discover how Google Cloud Storage provides key innovations like performance optimizations for Apache Iceberg, Anywhere Cache as the easiest way to colocate storage and compute, Rapid Storage for ultra low latency object reads and appends, and Storage Intelligence for vital data insights and recommendations. Learn how you can optimize your infrastructure to unlock the full value of your data for AI-driven success.

Sponsored by: Qubika | Agentic AI In Finance: How To Build Agents Using Databricks And LangGraph

2025-06-11 Watch
lightning_talk
Sebastian Diaz (Qubika)

Join us for this session on how to build AI finance agents with Databricks and LangChain. This session introduces a powerful approach to building AI agents: a modular framework that integrates LangChain, retrieval-augmented generation (RAG), and Databricks' unified data platform to build intelligent, adaptable finance agents. We’ll walk through the architecture and key components, including Databricks Unity Catalog, MLflow, and Mosaic AI, involved in building a system tailored for complex financial tasks like portfolio analysis, reporting automation, and real-time risk insights. We’ll also showcase a demo of one such agent in action: a Financial Analyst Agent. This agent emulates the expertise of a seasoned data analyst, delivering in-depth analysis in seconds and eliminating the need to wait hours or days for manual reports. The solution provides organizations with 24/7 access to advanced data analysis, enabling faster, smarter decision-making.

A Japanese Mega-Bank’s Journey to a Modern, GenAI-Powered, Governed Data Platform

2025-06-11 Watch
talk
Anshul Wadhawan (Deloitte Consulting LLP), Gordon Wilson (Sumitomo Mitsui Banking Corporation)

SMBC, a major Japanese multinational financial services institution, has embarked on an initiative to build a GenAI-powered, modern, and well-governed cloud data platform on Azure Databricks. This initiative aims to build an enterprise data foundation encompassing loans, deposits, securities, derivatives, and other data domains. Its primary goals are:
- To decommission legacy data platforms and reduce data sprawl by migrating 20+ core banking systems to a multi-tenant Azure Databricks architecture
- To leverage Databricks’ Delta Sharing capabilities to address SMBC’s unique global footprint and data sharing needs
- To govern data by design using Unity Catalog
- To achieve global adoption of the frameworks, accelerators, architecture, and tool stack to support similar implementations across EMEA
Deloitte and SMBC leveraged the Brickbuilder asset “Data as a Service for Banking” to accelerate this highly strategic transformation.

American Airlines Flies to New Heights with Data Intelligence

2025-06-11 Watch
talk
Saimahesh Chava (American Airlines), Yash Joshi (American Airlines)

American Airlines migrated from Hive Metastore to Unity Catalog using automated processes with Databricks APIs and GitHub Actions. This automation streamlined the migration for many applications within AA, ensuring consistency, efficiency and minimal disruption while enhancing data governance and disaster recovery capabilities.

Building and Scaling Production AI Systems With Mosaic AI

2025-06-11 Watch
talk
Kasey Uhlenhuth (Databricks)

Ready to go beyond the basics of Mosaic AI? This session will walk you through how to architect and scale production-grade AI systems on the Databricks Data Intelligence Platform. We’ll cover practical techniques for building end-to-end AI pipelines — from processing structured and unstructured data to applying Mosaic AI tools and functions for model development, deployment and monitoring. You’ll learn how to integrate experiment tracking with MLflow, apply performance tuning and use built-in frameworks to manage the full AI lifecycle. By the end, you’ll be equipped to design, deploy and maintain AI systems that deliver measurable outcomes at enterprise scale.

Building Tool-Calling Agents With Databricks Agent Framework and MCP

2025-06-11 Watch
talk
Siddharth Murching (Databricks), Elise Gonzales (Databricks)

Want to create AI agents that can do more than just generate text? Join us to explore how combining Databricks' Mosaic AI Agent Framework with the Model Context Protocol (MCP) unlocks powerful tool-calling capabilities. We'll show you how MCP provides a standardized way for AI agents to interact with external tools, data and APIs, solving the headache of fragmented integration approaches. Learn to build agents that can retrieve both structured and unstructured data, execute custom code and tackle real enterprise challenges. Key takeaways:
- Implementing MCP-enabled tool-calling in your AI agents
- Prototyping in AI Playground and exporting for deployment
- Integrating Unity Catalog functions as agent tools
- Ensuring governance and security for enterprise deployments
Whether you're building customer service bots or data analysis assistants, you'll leave with practical know-how to create powerful, governed AI agents.
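The tool-calling pattern MCP standardizes can be sketched in a few lines: tools register under a name with a handler, and the agent routes model-issued calls by name. The tool name and call shape below are illustrative, not the MCP wire format or the Agent Framework API:

```python
# Sketch: a minimal tool registry and dispatcher, the core pattern that
# MCP standardizes across agents, tools, and hosts.

TOOLS: dict[str, dict] = {}

def tool(name: str, description: str):
    """Decorator registering a function as a callable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return register

@tool("lookup_order", "Fetch an order's status by id")
def lookup_order(order_id: str) -> str:
    orders = {"A42": "shipped"}            # stand-in for a real data source
    return orders.get(order_id, "unknown")

def dispatch(call: dict) -> str:
    """Route a model-issued tool call {name, arguments} to its handler."""
    spec = TOOLS[call["name"]]
    return spec["handler"](**call["arguments"])

# A model that decides to call a tool emits something like this:
result = dispatch({"name": "lookup_order", "arguments": {"order_id": "A42"}})
```

In a governed deployment, the registry's role is played by Unity Catalog functions and the dispatch happens inside the agent runtime rather than application code.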

ClickHouse and Databricks for Real-Time Analytics

2025-06-11 Watch
talk
Melvyn Peignon (ClickHouse)

ClickHouse is a C++ based, column-oriented database built for real-time analytics. While it has its own internal storage format, the rise of open lakehouse architectures has created a growing need for seamless interoperability. In response, we have developed integrations with your favorite lakehouse ecosystem to enhance compatibility, performance and governance. From integrating with Unity Catalog to embedding the Delta Kernel into ClickHouse, this session will explore the key design considerations behind these integrations, their benefits to the community, the lessons learned and future opportunities for improved compatibility and seamless integration.