Event

Data + AI Summit 2025

2025-06-09 – 2025-06-13 · Databricks Summit

Activities tracked: 86

Filtering by: Cloud Computing

Sessions & talks

Showing 76–86 of 86 · Newest first

Sponsored by: Amperity | Transforming Guest Experiences: GoTo Foods’ Data Journey with Amperity & Databricks

2025-06-10
talk
Brett Newcome (GoTo Foods), Manuel Valdes (GoTo Foods)

GoTo Foods, the platform company behind brands like Auntie Anne’s, Cinnabon, Jamba, and more, set out to turn a fragmented data landscape into a high-performance customer intelligence engine. In this session, CTO Manuel Valdes and Director of Marketing Technology Brett Newcome share how they unified data using Databricks Delta Sharing and Amperity’s Customer Data Cloud to speed up time to market. As part of GoTo’s broader strategy to support its brands with shared enterprise tools, the team:

- Unified loyalty, catering, and retail data into one customer view
- Cut campaign lead times from weeks to hours
- Activated audiences in real time without straining engineering
- Unlocked new revenue through smarter segmentation and personalization
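
The abstract above names Delta Sharing as the mechanism for unifying data across systems. As a hedged illustration only (not from the session), this is roughly what consuming a shared table looks like with the open-source delta-sharing Python client; the profile path, share, schema, and table names are all invented:

```python
# Hypothetical sketch: reading a table exposed via Delta Sharing with the
# open-source client (pip install delta-sharing). All names are invented.
import delta_sharing

# Share profile file issued by the data provider (path is made up).
profile = "/tmp/provider.share"

# Coordinates follow the pattern <profile>#<share>.<schema>.<table>.
table_url = f"{profile}#loyalty_share.customers.unified_profiles"

# Load the shared table into a pandas DataFrame for inspection.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```

In a Databricks-to-Databricks share, the recipient would instead query the shared catalog directly with SQL; no client library is required.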

Sponsored by: Sigma | Moving from On-premises to Unified Business Intelligence with Databricks & Sigma

2025-06-10
talk
Zalak Trivedi (Sigma Computing), Todd Keyser (Saddle Creek Logistics Services)

Faced with the limitations of a legacy, on-prem data stack and scalability bottlenecks in MicroStrategy, Saddle Creek Logistics Services needed a modern solution to handle massive data volumes and accelerate insight delivery. By migrating to a cloud-native architecture powered by Sigma and Databricks, the team achieved significant performance gains and operational efficiency. In this session, Saddle Creek will walk through how they leveraged Databricks’ cloud-native processing engine alongside a unified governance layer through Unity Catalog to streamline and secure downstream analytics in Sigma. Learn how embedded dashboards and near real-time reporting—cutting latency from 9 minutes to just 3 seconds—have empowered data-driven collaboration with external partners and driven a major effort to consolidate over 30,000 reports and objects to under 1,000.

Redesigning Kaizen's Cloud Data Lake for the Future

2025-06-10
talk
Triantafyllos Tsakmakis (Kaizen Gaming), Nikolaos Michail (Kaizen Gaming)

At Kaizen Gaming, data drives our decision-making, but rapid growth exposed inefficiencies in our legacy cloud setup — escalating costs, delayed insights and scalability limits. With operations in 18 countries and 350M daily transactions (1PB+), shared quotas and limited cost transparency hindered efficiency. To address this, we redesigned our cloud architecture with Data Landing Zones, a modular framework that decouples resources, enabling independent scaling and cost accountability. Automation streamlined infrastructure, reduced overhead and enhanced FinOps visibility, while Unity Catalog ensured governance and security. Migration challenges included maintaining stability, managing costs and minimizing latency. A phased approach, Delta Sharing, and Databricks Asset Bundles simplified transitions. The result: faster insights, improved cost control and reduced onboarding time, fostering innovation and efficiency. We share our transformation, offering insights for modern cloud optimization.

Sponsored by: Atlan | How Fox & Atlan are Partnering to Make Metadata a Common System of Trust, Context, and Governance

2025-06-10
talk
Prukalpa Sankar (Atlan), Oliver Gomes (Fox Corporation)

With hundreds of millions viewing broadcasts from news to sports, Fox relies on a sophisticated and trusted architecture ingesting 100+ data sources, carefully governed to improve UX across products, drive sales and marketing, and ensure KPI tracking. Join Oliver Gomes, VP of Enterprise and Data Platform at Fox, and Prukalpa Sankar of Atlan to learn how true partnership helps their team navigate opportunities from governance to AI. To govern and democratize their multi-cloud data platform, Fox chose Atlan to make data accessible and understandable for more users than ever before. Their team then used a data product approach to create a shared language using context from sources like Unity Catalog at a single point of access, no matter the underlying technology. Now, Fox is defining an ambitious future for metadata. With Atlan and Iceberg driving interoperability, their team prepares to build a “control plane”, creating a common system of trust and governance.

Data Management and Governance With UC

2025-06-10
talk

In this course, you'll learn concepts and perform labs that showcase workflows using Unity Catalog, Databricks' unified and open governance solution for data and AI. We'll start with a brief introduction to Unity Catalog, discuss fundamental data governance concepts, and then dive into a variety of topics, including using Unity Catalog for data access control, managing external storage and tables, data segregation, and more.

Prerequisites: beginner familiarity with the Databricks Data Intelligence Platform (selecting clusters, navigating the Workspace, executing notebooks); cloud computing concepts (virtual machines, object storage, etc.); production experience with data warehouses and data lakes; intermediate experience with basic SQL concepts (select, filter, group by, join, etc.); beginner programming experience with Python (syntax, conditions, loops, functions); and beginner experience with the Spark DataFrame API (configuring DataFrameReader and DataFrameWriter to read and write data, expressing query transformations using DataFrame methods and Column expressions, etc.).

Labs: Yes
Certification Path: Databricks Certified Data Engineer Associate
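
As a brief aside (not part of the course materials), data access control in Unity Catalog is ordinarily expressed as SQL GRANT statements, which can be run from a notebook or SQL editor. The catalog, schema, and group names below are hypothetical:

```python
# Minimal sketch of Unity Catalog access control from a Databricks notebook.
# The catalog, schema, and group names are invented for illustration.

# Create a governed catalog and schema.
spark.sql("CREATE CATALOG IF NOT EXISTS finance")
spark.sql("CREATE SCHEMA IF NOT EXISTS finance.reporting")

# Let the `analysts` group discover the catalog and schema...
spark.sql("GRANT USE CATALOG ON CATALOG finance TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA finance.reporting TO `analysts`")

# ...and read, but not modify, the tables within the schema.
spark.sql("GRANT SELECT ON SCHEMA finance.reporting TO `analysts`")
```

Privileges granted on a schema apply to the tables it contains, which is what makes schema-level grants a convenient unit for the data segregation the course covers.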

Deploy Workloads with Lakeflow Jobs (previously Databricks Workflows)

2025-06-10
talk

In this course, you’ll learn how to orchestrate data pipelines with Lakeflow Jobs (previously Databricks Workflows) and schedule dashboard updates to keep analytics up to date. We’ll cover topics like getting started with Lakeflow Jobs, how to use Databricks SQL for on-demand queries, and how to configure and schedule dashboards and alerts to reflect updates to production data pipelines.

Prerequisites: beginner familiarity with the Databricks Data Intelligence Platform (selecting clusters, navigating the Workspace, executing notebooks); cloud computing concepts (virtual machines, object storage, etc.); production experience with data warehouses and data lakes; intermediate experience with basic SQL concepts (select, filter, group by, join, etc.); beginner programming experience with Python (syntax, conditions, loops, functions); and beginner experience with the Spark DataFrame API (configuring DataFrameReader and DataFrameWriter to read and write data, expressing query transformations using DataFrame methods and Column expressions, etc.).

Labs: No
Certification Path: Databricks Certified Data Engineer Associate
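
Not from the course itself, but as a rough sketch of what job orchestration looks like in code: the Databricks SDK for Python can define a scheduled job programmatically. The job name, notebook path, and cron expression below are invented:

```python
# Hypothetical sketch: creating a scheduled job with the Databricks SDK for
# Python (pip install databricks-sdk). All names and paths are invented.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # reads credentials from the environment

created = w.jobs.create(
    name="nightly-dashboard-refresh",
    tasks=[
        jobs.Task(
            task_key="refresh",
            notebook_task=jobs.NotebookTask(
                notebook_path="/Workspace/pipelines/refresh_dashboard"
            ),
        )
    ],
    # Quartz cron expression: every day at 02:00 in the given timezone.
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 2 * * ?", timezone_id="UTC"
    ),
)
print(f"Created job {created.job_id}")
```

The UI-driven flow the course describes produces an equivalent job definition; the SDK route is simply the same object expressed in code.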

Lakeflow Connect: Smarter, Simpler File Ingestion With the Next Generation of Auto Loader

2025-06-10
talk
Sandip Agarwala (Databricks), Chavdar Botev (Databricks)

Auto Loader is the definitive tool for ingesting data from cloud storage into your lakehouse. In this session, we’ll unveil new features and best practices that simplify every aspect of cloud storage ingestion. We’ll demo out-of-the-box observability for pipeline health and data quality, walk through improvements for schema management, introduce a series of new data formats and unveil recent strides in Auto Loader performance. Along the way, we’ll provide examples and best practices for optimizing cost and performance. Finally, we’ll introduce a preview of what’s coming next — including a REST API for pushing files directly to Delta, a UI for creating cloud storage pipelines and more. Join us to help shape the future of file ingestion on Databricks.
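
For readers who have not used Auto Loader, a minimal sketch of its existing API follows; the bucket, schema location, checkpoint path, and table name are invented:

```python
# Minimal Auto Loader sketch: incrementally ingest JSON files from cloud
# storage into a Delta table. All paths and table names are invented.
stream = (
    spark.readStream.format("cloudFiles")  # Auto Loader source
    .option("cloudFiles.format", "json")
    # Auto Loader persists the inferred schema (and its evolution) here.
    .option("cloudFiles.schemaLocation", "/Volumes/main/raw/_schemas/events")
    .load("s3://my-bucket/events/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/Volumes/main/raw/_checkpoints/events")
    # Process everything currently available, then stop: an incremental
    # batch-style run that only picks up files not yet ingested.
    .trigger(availableNow=True)
    .toTable("main.raw.events")
)
```

The schemaLocation and checkpointLocation options are what make re-runs incremental: Auto Loader records which files it has already processed and what schema it inferred for them.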

Build Data Pipelines with Lakeflow Declarative Pipelines

2025-06-09
talk

In this course, you’ll learn how to define and schedule data pipelines that incrementally ingest and process data through multiple tables on the Data Intelligence Platform, using Lakeflow Declarative Pipelines in Spark SQL and Python. We’ll cover how to get started with Lakeflow Declarative Pipelines, how it tracks data dependencies in data pipelines, how to configure and run pipelines using the Lakeflow Declarative Pipelines UI, how to use Python or Spark SQL to define pipelines that ingest and process data through multiple tables using Auto Loader, how to use APPLY CHANGES INTO syntax to process Change Data Capture feeds, and how to review event logs and data artifacts created by pipelines and troubleshoot syntax.

By streamlining and automating reliable data ingestion and transformation workflows, this course equips you with the foundational data engineering skills needed to kickstart AI use cases. Whether you're preparing high-quality training data or enabling real-time AI-driven insights, this course is a key step in advancing your AI journey.

Prerequisites: beginner familiarity with the Databricks Data Intelligence Platform (selecting clusters, navigating the Workspace, executing notebooks); cloud computing concepts (virtual machines, object storage, etc.); production experience with data warehouses and data lakes; intermediate experience with basic SQL concepts (select, filter, group by, join, etc.); beginner programming experience with Python (syntax, conditions, loops, functions); and beginner experience with the Spark DataFrame API (configuring DataFrameReader and DataFrameWriter to read and write data, expressing query transformations using DataFrame methods and Column expressions, etc.).

Labs: No
Certification Path: Databricks Certified Data Engineer Associate
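
As an illustrative aside (not course code), the CDC pattern the course names (APPLY CHANGES INTO) has a Python counterpart in the pipeline API. The source, target, key, and sequencing columns below are invented:

```python
# Hypothetical declarative pipeline sketch with CDC processing via the
# `dlt` Python API. Table and column names are invented.
import dlt
from pyspark.sql.functions import col

# Bronze: incrementally ingest raw CDC events with Auto Loader.
@dlt.table(name="customers_cdc_raw")
def customers_cdc_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/landing/customers_cdc/")
    )

# Silver: declare the target table, then apply the change feed to it.
# This is the Python equivalent of APPLY CHANGES INTO in SQL.
dlt.create_streaming_table("customers")

dlt.apply_changes(
    target="customers",
    source="customers_cdc_raw",
    keys=["customer_id"],                # row identity for upserts
    sequence_by=col("event_timestamp"),  # orders late-arriving changes
)
```

Because dependencies are declared rather than scheduled by hand, the pipeline runtime infers that `customers` must refresh after `customers_cdc_raw`; this is the dependency tracking the course description refers to.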

Data Warehousing with Databricks

2025-06-09
talk

This course is designed for data professionals who want to explore the data warehousing capabilities of Databricks. Assuming no prior knowledge of Databricks, it provides an introduction to leveraging Databricks as a modern cloud-based data warehousing solution. Learners will explore how to use the Databricks Data Intelligence Platform to ingest, transform, govern, and analyze data efficiently. Learners will also explore Genie, an innovative Databricks feature that simplifies data exploration through natural language queries. By the end of this course, participants will be equipped with the foundational skills to implement and optimize a data warehouse using Databricks.

Prerequisites: a basic understanding of SQL and data querying concepts; general knowledge of data warehousing concepts (tables, schemas, and ETL/ELT processes) is recommended; some experience with BI and/or data visualization tools is helpful but not required.

Labs: Yes

Data Ingestion with Lakeflow Connect

2025-06-09
talk

In this course, you’ll learn how to ingest data efficiently with Lakeflow Connect and manage that data. Topics include ingestion with built-in connectors for SaaS applications, databases, and file sources, as well as ingestion from cloud object storage, and batch and streaming ingestion. We'll cover the new connector components, setting up the pipeline, validating the source, and mapping to the destination for each type of connector. We'll also cover how to ingest data into Delta tables, from batch to streaming, using the UI with Auto Loader, automating ETL with Lakeflow Declarative Pipelines, or using the API.

This will prepare you to deliver the high-quality, timely data required for AI-driven applications by enabling scalable, reliable, and real-time data ingestion pipelines. Whether you're supporting ML model training or powering real-time AI insights, these ingestion workflows form a critical foundation for successful AI implementation.

Prerequisites: beginner familiarity with the Databricks Data Intelligence Platform (selecting clusters, navigating the Workspace, executing notebooks); cloud computing concepts (virtual machines, object storage, etc.); production experience with data warehouses and data lakes; intermediate experience with basic SQL concepts (select, filter, group by, join, etc.); beginner programming experience with Python (syntax, conditions, loops, functions); and beginner experience with the Spark DataFrame API (configuring DataFrameReader and DataFrameWriter to read and write data, expressing query transformations using DataFrame methods and Column expressions, etc.).

Labs: No
Certification Path: Databricks Certified Data Engineer Associate
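
As a hedged illustration of the simplest point on that batch-to-streaming spectrum (not code from the course), here is a one-shot batch load from cloud object storage into a Delta table; the bucket and table names are invented:

```python
# Hypothetical sketch: one-shot batch ingestion of CSV files from cloud
# object storage into a Unity Catalog Delta table. All names are invented.
orders = (
    spark.read.format("csv")
    .option("header", "true")       # first row holds column names
    .option("inferSchema", "true")  # let Spark derive column types
    .load("s3://my-bucket/exports/orders/")
)

# Append into a governed three-level-namespace Delta table.
orders.write.mode("append").saveAsTable("main.bronze.orders")
```

The streaming end of the same spectrum is the Auto Loader pattern sketched earlier in this listing.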

Get started with Data Warehousing

talk

This course provides a comprehensive overview of Databricks’ modern approach to data warehousing, highlighting how a data lakehouse architecture combines the strengths of traditional data warehouses with the flexibility and scalability of the cloud. You’ll learn about the AI-driven features that enhance data transformation and analysis on the Databricks Data Intelligence Platform. Designed for data warehousing practitioners, this course provides the foundational information needed to begin building and managing high-performance, AI-powered data warehouses on Databricks. It is aimed at those starting out in data warehousing and those who would like to execute data warehousing workloads on Databricks, including practitioners familiar with traditional data warehousing techniques and concepts who want to expand their understanding of how such workloads run on Databricks.