talk-data.com

Topic: Databricks

Tags: big_data, analytics, spark

509 activities tagged

Activity Trend: 515 peak activities/quarter (2020-Q1 to 2026-Q1)

Activities

Showing filtered results

Filtering by: Data + AI Summit 2025

This course introduces learners to deploying, operationalizing, and monitoring generative artificial intelligence (AI) applications. First, learners will develop knowledge and skills in deploying generative AI applications using tools like Model Serving. Next, the course will discuss operationalizing generative AI applications following modern LLMOps best practices and recommended architectures. Finally, learners will be introduced to monitoring generative AI applications and their components using Lakehouse Monitoring.

Prerequisites: Familiarity with prompt engineering and retrieval-augmented generation (RAG) techniques, including data preparation, embeddings, vectors, and vector databases; foundational knowledge of Databricks Data Intelligence Platform tools for evaluation and governance (particularly Unity Catalog).
Labs: Yes
Certification Path: Databricks Certified Generative AI Engineer Associate
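To make the Model Serving portion concrete, here is a minimal sketch of deploying a registered model to a serving endpoint via MLflow's deployments client; the model name, endpoint name, and sizing below are hypothetical.

```python
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

# Create a serving endpoint backed by version 1 of a (hypothetical)
# Unity Catalog model.
client.create_endpoint(
    name="rag-chain-endpoint",  # hypothetical endpoint name
    config={
        "served_entities": [
            {
                "entity_name": "main.genai.rag_chain",  # hypothetical model
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ]
    },
)
```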

This course will guide participants through a comprehensive exploration of machine learning model operations, focusing on MLOps and model lifecycle management. The initial segment covers essential MLOps components and best practices, giving participants a strong foundation for effectively operationalizing machine learning models. The latter part of the course delves into the basics of the model lifecycle, demonstrating how to navigate it seamlessly using the Model Registry in conjunction with Unity Catalog for efficient model management. By the course's conclusion, participants will have gained practical insights and a well-rounded understanding of MLOps principles, along with the skills needed to navigate the intricate landscape of machine learning model operations.

Prerequisites: Familiarity with the Databricks workspace and notebooks; familiarity with Delta Lake and the Lakehouse; intermediate-level knowledge of Python; and an understanding of basic MLOps concepts and practices, including infrastructure and the importance of monitoring MLOps solutions.
Labs: Yes
Certification Path: Databricks Certified Machine Learning Associate
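As a taste of the Model Registry workflow the course covers, here is a minimal sketch of registering a model in the Unity Catalog registry and promoting it with an alias; the run ID placeholder and the three-level model name are hypothetical.

```python
import mlflow
from mlflow import MlflowClient

mlflow.set_registry_uri("databricks-uc")  # use Unity Catalog as the registry

# Register the model logged by an existing run (the run ID is a placeholder).
model_version = mlflow.register_model(
    model_uri="runs:/<run_id>/model",
    name="main.ml.churn_model",  # hypothetical catalog.schema.model name
)

# Point the "champion" alias at the new version for downstream consumers.
MlflowClient().set_registered_model_alias(
    name="main.ml.churn_model",
    alias="champion",
    version=model_version.version,
)
```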

MLOps With Databricks

Adopting MLOps is becoming increasingly important with the rise of AI, and large organizations need many different capabilities to do it well. In the past, you had to implement these capabilities yourself. Fortunately, the MLOps space is maturing, and end-to-end platforms like Databricks now provide most of them. In this talk, I will walk through the MLOps components and how you can simplify your processes using Databricks. Audio for this session is delivered in the conference mobile app; you must bring your own headphones to listen.

Pushing the Limits of What Your Warehouse Can Do Using Python and Databricks

SQL warehouses in Databricks can run more than just SQL. Join this session to learn how to get more out of your SQL warehouses, and any tools built on top of them, by leveraging Python. After attending this session, you will be familiar with Python user-defined functions, how to bring in custom dependencies from PyPI or a custom wheel, and even how to securely invoke cloud services with performance at scale.
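As one illustration of the technique, here is a minimal sketch of a Unity Catalog Python UDF that a SQL warehouse can then execute; the function and catalog/schema names are hypothetical, and `spark` is assumed to be a notebook's session.

```python
# Define a Python UDF in Unity Catalog; any SQL client on the warehouse
# can call it afterwards.
spark.sql(r"""
CREATE OR REPLACE FUNCTION main.default.redact_digits(s STRING)
RETURNS STRING
LANGUAGE PYTHON
AS $$
import re
return re.sub(r"\d", "*", s)
$$
""")

# Call the UDF like any other SQL function.
spark.sql("SELECT main.default.redact_digits('card 4111-1111') AS masked").show()
```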

ReguBIM AI – Transforming BIM, Engineering, and Code Compliance with Generative AI

At Exyte, we design, engineer, and deliver ultra-clean and sustainable facilities for high-tech industries. One of the most complex tasks our engineers and designers face is ensuring that their building designs comply with constantly evolving codes and regulations – often a manual, error-prone process. To address this, we developed ReguBIM AI, a generative AI-powered assistant that helps our teams verify code compliance more efficiently and accurately by linking 3D Building Information Modeling (BIM) data with regulatory documents. Built on the Databricks Data Intelligence Platform, ReguBIM AI is part of our broader vision to apply AI meaningfully across engineering and design processes. We are proud to share that ReguBIM AI won the Grand Prize and EMEA Winner titles at the Databricks GenAI World Cup 2024 — a global hackathon that challenged over 1,500 data scientists and AI engineers from 18 countries to create innovative generative AI solutions for real-world problems.

Scaling GenAI Inference From Prototype to Production: Real-World Lessons in Speed & Cost

This lightning talk dives into real-world GenAI projects that scaled from prototype to production using Databricks’ fully managed tools. Facing cost and time constraints, we leveraged four key Databricks features—Workflows, Model Serving, Serverless Compute, and Notebooks—to build an AI inference pipeline processing millions of documents (text and audiobooks). This approach enables rapid experimentation, easy tuning of GenAI prompts and compute settings, seamless data iteration and efficient quality testing—allowing Data Scientists and Engineers to collaborate effectively. Learn how to design modular, parameterized notebooks that run concurrently, manage dependencies and accelerate AI-driven insights. Whether you're optimizing AI inference, automating complex data workflows or architecting next-gen serverless AI systems, this session delivers actionable strategies to maximize performance while keeping costs low.
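As a rough illustration of the fan-out pattern described here, the following sketch uses the Databricks SDK to launch a parameterized notebook job concurrently over several document batches; the job ID and parameter names are hypothetical.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up credentials from the environment

# Fan the same parameterized notebook job out over several document batches.
batches = ["2024-01", "2024-02", "2024-03"]
runs = [
    w.jobs.run_now(
        job_id=123,  # hypothetical job wrapping the inference notebook
        notebook_params={"batch": b, "prompt_version": "v2"},  # hypothetical params
    )
    for b in batches
]
```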

Sponsored by: EY | Xoople: Fueling enterprise AI with Earth data intelligence products

Xoople aims to provide its users with trusted AI-Ready Earth data and accelerators that unlock new insights for enterprise AI. With access to scientific-grade Earth data that provides spatial intelligence on real-world changes, data scientists and BI analysts can increase forecast accuracy for their enterprise processes and models. These improvements drive smarter, data-driven business decisions across various business functions, including supply chain, finance, and risk, and across industries. Xoople, which recently introduced its product, Enterprise AI-Ready Earth Data™, on the Databricks Marketplace, will have its CEO, Fabrizio Pirondini, discuss the importance of the Databricks Data Intelligence Platform in making Xoople’s product a reality for use in the enterprise.

Sponsored by: Impetus | Supercharge AI with automated migration to Databricks with Impetus

Migrating legacy workloads to a modern, scalable platform like Databricks can be complex and resource-intensive. Impetus, an Elite Databricks Partner and the Databricks Migration Partner of the Year 2024, simplifies this journey with LeapLogic, an automated solution for data platform modernization and migration services. LeapLogic intelligently discovers, transforms, and optimizes workloads for Databricks, ensuring minimal risk and faster time-to-value. In this session, we’ll showcase real-world success stories of enterprises that have leveraged Impetus’ LeapLogic to modernize their data ecosystems efficiently. Join us to explore how you can accelerate your migration journey, unlock actionable insights, and future-proof your analytics with a seamless transition to Databricks.

Bridging Ontologies & Lakehouses: Palantir AIP + Databricks for Secure Autonomous AI

AI is moving from pilots to production, but many organizations still struggle to connect boardroom ambitions with operational reality. Palantir’s Artificial Intelligence Platform (AIP) and the Databricks Data Intelligence Platform now form a single, open architecture that closes this gap by pairing Palantir’s decision-empowering operational Ontology with Databricks’ industry-leading scale, governance, and Lakehouse economics. The result: real-time, AI-powered, autonomous workflows that are already powering mission-critical outcomes for the U.S. Department of Defense, bp and other joint customers across the public and private sectors. In this technically grounded but business-focused session you will see the new reference architecture in action. We will walk through how Unity Catalog and Palantir Virtual Tables provide governed, zero-copy access to Lakehouse data and back mission-critical operational workflows on top of Palantir’s semantic ontology and agentic AI capabilities. We will also explore how Palantir’s no-code and pro-code tooling integrates with Databricks compute to orchestrate builds and write tables to Unity Catalog. Come hear from customers currently using this architecture to drive critical business outcomes seamlessly across Databricks and Palantir.

Databricks Apps: Turning Data and AI Into Practical, User-Friendly Applications

This session is repeated. In this session, we present an overview of the GA release of Databricks Apps, the new app hosting platform that integrates all the Databricks services necessary to build production-ready data and AI applications. With Apps, data and developer teams can build new interfaces into the data intelligence platform, further democratizing the transformative power of data and AI across the organization. We'll cover common use cases, including RAG chat apps, interactive visualizations and custom workflow builders, as well as look at several best practices and design patterns when building apps. Finally, we'll look ahead with the vision, strategy and roadmap for the year ahead.
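A minimal sketch of what such an app can look like, assuming a Streamlit script (one of the frameworks Apps can host) querying a SQL warehouse through the Databricks SQL connector; the environment variable names, HTTP path, and table name are hypothetical.

```python
import os

import streamlit as st
from databricks import sql  # databricks-sql-connector

st.title("Daily Orders")

with sql.connect(
    server_hostname=os.environ["DATABRICKS_HOST"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],  # hypothetical env var
    access_token=os.environ["DATABRICKS_TOKEN"],   # hypothetical env var
) as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT order_date, count(*) AS orders "
            "FROM main.sales.orders GROUP BY 1 ORDER BY 1"  # hypothetical table
        )
        st.dataframe(cur.fetchall_arrow().to_pandas())
```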

Enabling Sleep Science Research With Databricks and Delta Sharing

Leveraging Databricks as a platform, we facilitate the sharing of anonymized datasets across various Databricks workspaces and accounts, spanning multiple cloud environments such as AWS, Azure, and Google Cloud. This capability, powered by Delta Sharing, extends both within and outside Sleep Number, enabling accelerated insights while ensuring compliance with data security and privacy standards. In this session, we will showcase our architecture and implementation strategy for data sharing, highlighting the use of Databricks’ Unity Catalog and Delta Sharing, along with integration with platforms like Jira, Jenkins, and Terraform to streamline project management and system orchestration.
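On the recipient side, the open delta-sharing client makes consuming a share a one-liner; this minimal sketch assumes a provider-issued profile file and hypothetical share, schema, and table names.

```python
import delta_sharing

# "config.share" is the provider-issued profile holding the endpoint and
# bearer token; the share/schema/table names are hypothetical.
table_url = "config.share#sleep_research.public.anonymized_sessions"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```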

From Datavault to Delta Lake: Streamlining Data Sync with Lakeflow Connect

In this session, we will explore the Australian Red Cross Lifeblood's approach to synchronizing an Azure SQL Datavault 2.0 (DV2.0) implementation with Unity Catalog (UC) using Lakeflow Connect. Lifeblood's DV2.0 data warehouse, which includes raw vault (RV) and business vault (BV) tables, as well as information marts defined as views, required a multi-step process to achieve data/business logic sync with UC. This involved using Lakeflow Connect to ingest RV and BV data, followed by a custom process utilizing JDBC to ingest view definitions, and the automated/manual conversion of T-SQL to Databricks SQL views, with Lakehouse Monitoring for validation. In this talk, we will share our journey, the design decisions we made, and how the resulting solution now supports analytics workloads, analysts, and data scientists at Lifeblood.
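The view-definition step could look like the following minimal sketch, which pulls T-SQL view bodies from Azure SQL over JDBC so they can be converted to Databricks SQL; the host, database, and credential handling are hypothetical stand-ins, not Lifeblood's actual process.

```python
# Pull T-SQL view definitions from Azure SQL over JDBC (connection details
# are hypothetical placeholders).
views = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<host>:1433;databaseName=<db>")
    .option("user", "<user>")
    .option("password", "<password>")
    .option("query", """
        SELECT TABLE_SCHEMA, TABLE_NAME, VIEW_DEFINITION
        FROM INFORMATION_SCHEMA.VIEWS
    """)
    .load()
)

# Each VIEW_DEFINITION is then converted (automatically where possible,
# manually where not) into a Databricks SQL view in Unity Catalog.
views.show(truncate=False)
```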

How United Airlines Transforms SWIM Data into Real-Time Operational Insight and Faster Decision Making

Discover how United Airlines, in collaboration with Databricks and Impetus Technologies, has built a next-generation data intelligence platform leveraging System Wide Information Management (SWIM) to deliver mission-critical, real-time insights for flight disruption prediction, situational analysis, and smarter, faster decision-making. In this session, United Airlines experts will share how their Databricks-based SWIM architecture enables near real-time operational awareness, enhances responsiveness during irregular operations (IRROPs), and drives proactive actions to minimize disruptions. They will also discuss how United efficiently processes and manages the large volume and variety of SWIM data, ensuring seamless integration and actionable intelligence across their operations.
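As a loose sketch of the ingestion pattern (not United's actual pipeline), the following assumes SWIM messages have been bridged into a Kafka topic and lands them in a Delta table with Structured Streaming; the broker, topic, checkpoint path, and table name are hypothetical.

```python
# Read the bridged SWIM feed from Kafka (broker and topic are hypothetical).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<broker>:9092")
    .option("subscribe", "swim-flight-data")
    .load()
)

# Land raw messages in a Delta table for downstream parsing and analytics.
(
    raw.selectExpr("CAST(value AS STRING) AS message", "timestamp")
    .writeStream
    .option("checkpointLocation", "/Volumes/main/ops/checkpoints/swim")  # hypothetical
    .toTable("main.ops.swim_raw")  # hypothetical table
)
```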

Mastering Data Security and Compliance: CoorsTek's Journey With Databricks Unity Catalog

Ensuring data security and meeting compliance requirements are critical priorities for businesses operating in regulated industries, where the stakes are high and the standards are stringent. We will showcase how CoorsTek, a global leader in technical ceramics manufacturing, partnered with Databricks to leverage the power of Unity Catalog for addressing regulatory challenges while achieving significant operational efficiency gains. We'll dive into the migration journey, highlighting the adoption of key features such as role-based access control (RBAC), comprehensive data lineage tracking, and robust auditing capabilities. Attendees will gain practical insights into the strategies and tools used to manage sensitive data, ensure compliance with industry standards, and optimize cloud data architectures. Additionally, we’ll share real-world lessons learned, best practices for integrating compliance into a modern data ecosystem, and actionable takeaways for leveraging Databricks as a catalyst for secure and compliant data innovation.
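For flavor, here is a minimal sketch of the kind of Unity Catalog grants that implement role-based access control; the principal and object names are hypothetical.

```python
# Grant a (hypothetical) analyst group read access through Unity Catalog.
spark.sql("GRANT USE CATALOG ON CATALOG manufacturing TO `quality-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA manufacturing.prod TO `quality-analysts`")
spark.sql("GRANT SELECT ON TABLE manufacturing.prod.sensor_readings TO `quality-analysts`")
```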

Patients Are Waiting...Accelerating Healthcare Innovation with Data, AI and Agents

This session is repeated. In an era of exponential data growth, organizations across industries face common challenges in transforming raw data into actionable insights. This presentation showcases how Novo Nordisk is pioneering insights generation approaches to clinical data management and AI. Using our clinical trials platform FounData, built on Databricks, we demonstrate how proper data architecture enables advanced AI applications. We'll introduce a multi-agent AI framework that revolutionizes data interaction, combining specialized AI agents to guide users through complex datasets. While our focus is on clinical data, these principles apply across sectors – from manufacturing to financial services. Learn how democratizing access to data and AI capabilities can transform organizational efficiency while maintaining governance. Through this real-world implementation, participants will gain insights on building scalable data architectures and leveraging multi-agent AI frameworks for responsible innovation.

As a global energy leader, Petrobras relies on machine learning to optimize operations, but manual model deployment and validation processes once created bottlenecks that delayed critical insights. In this session, we’ll reveal how we revolutionized our MLOps framework using MLflow, Databricks Asset Bundles (DABs) and Unity Catalog to:

- Replace error-prone manual validation with automated metric-driven workflows
- Reduce model deployment timelines from days to hours
- Establish granular governance and reproducibility across production models

Discover how we enabled data scientists to focus on innovation, not infrastructure, through standardized pipelines while ensuring compliance and scalability in one of the world’s most complex energy ecosystems.
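A minimal sketch of what such a metric-driven promotion gate can look like with MLflow aliases in Unity Catalog; the model name, version number, and metric are hypothetical, not Petrobras's actual pipeline.

```python
import mlflow
from mlflow import MlflowClient

mlflow.set_registry_uri("databricks-uc")
client = MlflowClient()
MODEL = "main.ml.well_integrity"  # hypothetical Unity Catalog model name

# Compare the candidate version's validation metric to the current champion's.
candidate = client.get_model_version(MODEL, "7")  # hypothetical version
cand_auc = client.get_run(candidate.run_id).data.metrics["val_auc"]

champion = client.get_model_version_by_alias(MODEL, "champion")
champ_auc = client.get_run(champion.run_id).data.metrics["val_auc"]

# Promote automatically instead of waiting on manual review.
if cand_auc > champ_auc:
    client.set_registered_model_alias(MODEL, "champion", candidate.version)
```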

Swimming at Our Own Lakehouse: How Databricks Uses Databricks

This session is repeated. Peek behind the curtain to learn how Databricks processes hundreds of petabytes of data across every region and cloud where we operate. Learn how Databricks leverages Data and AI to scale and optimize every aspect of the company, from facilities and legal to sales and marketing and, of course, product research and development. This session is a high-level tour inside Databricks to see how Data and AI enable us to be a better company. We will go into the architecture behind internal use cases like business analytics and SIEM, as well as customer-facing features like system tables and the Assistant. We will also cover how we run our data flows in production and how we maintain security and privacy while operating a large multi-cloud, multi-region environment.

As global data privacy regulations tighten, balancing user data protection with maximizing its business value is crucial. This presentation explores how integrating Databricks into our connected-vehicle data platform enhances both governance and business outcomes. We’ll highlight a case where migrating from EMR to Databricks improved deletion performance and cut costs by 99% with Delta Lake. This shift not only ensures compliance with data-privacy regulations but also maximizes the potential of connected-vehicle data. We are developing a platform that balances compliance with business value and sets a global standard for data usage, inviting partners to join us in building a secure, efficient mobility ecosystem.
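The Delta Lake pattern behind fast, compliant deletion is, in minimal sketch form, a keyed DELETE followed by a VACUUM; the table and column names are hypothetical.

```python
# Delete one user's records by key; Delta rewrites only the affected files.
spark.sql("DELETE FROM main.vehicle.telemetry WHERE user_id = 'user-123'")

# After the retention window, remove the unreferenced files so the deleted
# records physically disappear from storage.
spark.sql("VACUUM main.vehicle.telemetry")
```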

Trillions of Data Records, Zero Bottlenecks for Investor Decision-Making

In finance, every second counts. That’s why the Data team at J. Goldman & Co. needed to transform trillions of real-time market data records into a single, actionable insight — instantly, and without waiting on development resources. By modernizing their internal data platform with a scalable architecture, they built a streamlined, web-native alternative data interface that puts live market data directly in the hands of investment teams. With Databricks’ computational power and Unity Catalog’s secure governance, they eliminated bottlenecks and achieved the fastest possible time-to-market for critical investor decisions. Learn how J. Goldman & Co. innovates with Databricks and Sigma to:

- Ensure live, scalable data access across trillions of records in a flexible UI
- Empower non-technical teams with true self-service data exploration

Trust You Can Measure: Data Quality Standards in The Lakehouse

Do you trust your data? If you’ve ever struggled to figure out which datasets are reliable, well-governed, or safe to use, you’re not alone. At Databricks, our own internal lakehouse faced the same challenge—hundreds of thousands of tables, but no easy way to tell which data met quality standards. In this talk, the Databricks Data Platform team shares how we tackled this problem by building the Data Governance Score—a way to systematically measure and surface trust signals across the entire lakehouse. You’ll learn how we leverage Unity Catalog, governed tags, and enforcement to drive better data decisions at scale. Whether you're a data engineer, platform owner, or business leader, you’ll leave with practical ideas on how to raise the bar for data quality and trust in your own data ecosystem.
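As a rough illustration (not necessarily the team's exact mechanism), Unity Catalog tags can be applied and then queried across the lakehouse as in this minimal sketch; the tag name, value, and table are hypothetical stand-ins for governed tags.

```python
# Tag a table with a quality signal in Unity Catalog (hypothetical names).
spark.sql("ALTER TABLE main.sales.orders SET TAGS ('data_quality' = 'gold')")

# Surface tagged tables so consumers can find trusted datasets.
spark.sql("""
    SELECT catalog_name, schema_name, table_name, tag_name, tag_value
    FROM system.information_schema.table_tags
    WHERE tag_name = 'data_quality'
""").show()
```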