
Topic: Databricks

Tags: big_data, analytics, spark

509 tagged activities

Activity Trend: peak of 515 activities per quarter (chart spans 2020-Q1 to 2026-Q1)

Activities

Showing filtered results

Filtering by: Data + AI Summit 2025

Driving Secure AI Innovation with Obsidian Security, Databricks, and PointGuard AI

As enterprises adopt AI and Large Language Models (LLMs), securing and governing these models, and the data used to train them, is essential. In this session, learn how Databricks Partner PointGuard AI helps organizations implement the Databricks AI Security Framework to manage AI-specific risks, ensuring security, compliance, and governance across the entire AI lifecycle. Then, discover how Obsidian Security provides a robust approach to AI security, enabling organizations to confidently scale AI applications.

End-to-End Interoperable Data Platform: How Bosch Leverages Databricks for Supply Chain Consolidation

This session will showcase Bosch’s journey in consolidating supply chain information on the Databricks platform. It will dive into how Databricks not only acts as the central data lakehouse but also integrates seamlessly with transformative components such as dbt and Large Language Models (LLMs). The talk will highlight best practices, architectural considerations, and the value of an interoperable platform in driving actionable insights and operational excellence across complex supply chain processes. Key topics and sections:
- Introduction and business context: a brief overview of Bosch’s supply chain challenges and the need for a consolidated data platform; the strategic importance of data-driven decision-making in a global supply chain environment
- Databricks as the core data platform
- Integrating dbt for transformation
- Leveraging LLMs for enhanced insights

Entity Resolution for the Best Outcomes on Your Data

There are many ways to implement an entity resolution (ER) system, from vendor software to open-source libraries that enable DIY entity resolution. Whatever the approach, we generally see common challenges: limited scalability, lock-in to a single model architecture, a lack of metrics and explainability, and stagnant implementations that do not "learn" with experience. Recent experiments with transformer-based approaches, fast lookups with vector search, and Databricks components such as Databricks Apps and Agent Eval provide the foundations for a composable ER system that gets better with time on your data. In this presentation, we include a demo of how to use these components to build a composable ER system that delivers the best outcomes for your data.
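
As a rough illustration of the transformer-plus-vector-search pattern this abstract describes, here is a minimal sketch using the open-source sentence-transformers library; the model name, records, and threshold are illustrative assumptions, not the presenters' configuration. At production scale, the pairwise comparison would be replaced by a vector-search index lookup.

```python
from sentence_transformers import SentenceTransformer, util

# Toy records from two sources that refer to overlapping real-world entities.
records_a = [
    "Acme Corp, 12 Main St, Springfield",
    "Globex Corporation, 9 Elm Ave, Shelbyville",
]
records_b = [
    "ACME Corporation, 12 Main Street, Springfield",
    "Initech LLC, 44 Oak Rd, Ogdenville",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
emb_a = model.encode(records_a, convert_to_tensor=True)
emb_b = model.encode(records_b, convert_to_tensor=True)

# Pairwise cosine similarity; at scale, this dense matrix is exactly what a
# vector-search index replaces.
scores = util.cos_sim(emb_a, emb_b)

THRESHOLD = 0.8  # assumption; tune against labelled match/non-match pairs
for i in range(len(records_a)):
    for j in range(len(records_b)):
        if float(scores[i][j]) >= THRESHOLD:
            print(f"candidate match ({float(scores[i][j]):.2f}): "
                  f"{records_a[i]} <-> {records_b[j]}")
```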

Evolving Data Insights With Privacy at Mastercard

Mastercard is a global technology company whose role is anchored in trust. It supports 3.4 billion cards and over 143 billion transactions annually. To address customers’ increasing data volumes and complex privacy needs, Mastercard has developed a novel service atop Databricks’ Clean Rooms and the broader Data Intelligence Platform. The service combines several Databricks components with Mastercard’s IP, providing an evolved method for data-driven insights and value-added services delivered as a unique, standalone turnkey offering. The result is a secure environment where multiple parties can collaborate on sensitive data without directly accessing each other’s information. After this session, attendees will understand how Mastercard used its expertise in privacy-enhancing technologies to create collaboration tools powered by Databricks’ Clean Rooms, AI/BI, Apps, Unity Catalog, Workflows and DatabricksIQ — as well as how to take advantage of this new privacy-enhancing service directly.

Looking for a practical workshop on building an AI agent on Databricks? Well, we have just the thing for you. This hands-on workshop takes you through the process of creating intelligent agents that can reason their way to useful outcomes. You'll start by building your own toolkit of SQL and Python functions that give your agent practical capabilities. Then we'll explore how to select the right foundation model for your needs, connect your custom tools, and watch as your agent tackles complex challenges through visible reasoning paths. The workshop doesn't just stop at building—you'll dive into evaluation techniques using evaluation datasets to identify where your agent shines and where it needs improvement. After implementing and measuring your changes, we'll explore deployment strategies, including a feedback collection interface that enables continuous improvement and governance mechanisms to ensure responsible AI usage in production environments.
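
The first step described here, building a toolkit of SQL and Python functions, commonly takes the form of Unity Catalog functions that an agent framework can invoke as tools. A minimal sketch under that assumption (catalog, schema, table and function names are placeholders, not the workshop's materials):

```python
# Sketch: register Python and SQL functions in Unity Catalog so an agent can
# call them as tools. Assumes a Databricks notebook where `spark` exists.
spark.sql("""
CREATE OR REPLACE FUNCTION main.agent_tools.order_status(order_id STRING)
RETURNS STRING
LANGUAGE PYTHON
COMMENT 'Look up the shipping status of an order (toy logic).'
AS $$
    return "SHIPPED" if order_id.startswith("A") else "PENDING"
$$
""")

spark.sql("""
CREATE OR REPLACE FUNCTION main.agent_tools.region_sales(region STRING)
RETURNS DOUBLE
COMMENT 'Total sales for a region (placeholder table).'
RETURN SELECT SUM(amount) FROM main.sales.orders o
       WHERE o.region = region_sales.region
""")
```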

Most organizations run complex cloud data architectures that silo applications, users and data. Join this interactive hands-on workshop to learn how Databricks SQL allows you to operate a multi-cloud lakehouse architecture that delivers data warehouse performance at data lake economics — with up to 12x better price/performance than traditional cloud data warehouses. Here’s what we’ll cover:
- How Databricks SQL fits in the Data Intelligence Platform, enabling you to operate a multi-cloud lakehouse architecture that delivers data warehouse performance at data lake economics
- How to manage and monitor compute resources, data access and users across your lakehouse infrastructure
- How to query directly on your data lake using your tools of choice or the built-in SQL editor and visualizations
- How to use AI to increase productivity when querying, completing code or building dashboards
Ask your questions during this hands-on lab, and the Databricks experts will guide you.
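
For readers who want to try the "tools of choice" querying path before the lab, here is a minimal sketch using the open-source databricks-sql-connector package; the hostname, HTTP path, and token are placeholders for your own workspace values:

```python
# Sketch: query a Databricks SQL warehouse from Python.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
    access_token="dapi-REDACTED",                                  # placeholder
) as conn:
    with conn.cursor() as cur:
        # The samples catalog ships with Databricks workspaces.
        cur.execute("""
            SELECT o_orderdate, SUM(o_totalprice) AS revenue
            FROM   samples.tpch.orders
            GROUP  BY o_orderdate
            ORDER  BY o_orderdate DESC
            LIMIT  10
        """)
        for row in cur.fetchall():
            print(row)
```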

Harnessing Databricks Asset Bundles: Transforming Pipeline Management at Scale at Stack Overflow

Discover how Stack Overflow optimized its data engineering workflows using Databricks Asset Bundles (DABs) for scalable and efficient pipeline deployments. This session explores the structured pipeline architecture, emphasizing code reusability, modular design and bundle variables to ensure clarity and data isolation across projects. Learn how the data team leverages enterprise infrastructure to streamline deployment across multiple environments. Key topics include DRY-principled modular design, essential DAB features for automation and data security strategies using Unity Catalog. Designed for data engineers and teams managing multi-project workflows, this talk offers actionable insights on optimizing pipelines with Databricks’ evolving toolset.

How the Texas Rangers Use a Unified Data Platform to Drive World-Class Baseball Analytics

Don't miss this session where we demonstrate how the Texas Rangers baseball team is staying one step ahead of the competition by going back to the basics. After implementing a modern data strategy with Databricks and winning the 2023 World Series, the rest of the league quickly followed suit. Now more than ever, data and AI are a central pillar of every baseball team's strategy, driving profound insights into player performance and game dynamics. With a 'fundamentals win games' back-to-basics focus, join us as we explain our commitment to world-class data quality, engineering, and MLOps, taking full advantage of the Databricks Data Intelligence Platform. From system tables to federated querying, find out how the Rangers use every tool at their disposal to stay one step ahead in the hyper-competitive world of baseball.

HP's Data Platform Migration Journey: Redshift to Lakehouse

HP Print's data platform team took on a migration from a monolithic, shared AWS Redshift deployment to a modular and scalable data ecosystem on the Databricks lakehouse. The result was 30–40% cost savings, scalable and isolated resources for different data consumers and ETL workloads, and performance optimization for a variety of query types. The migration brought technical challenges and learnings relating to ETL migrations with dbt, new Databricks features such as Liquid Clustering, predictive optimization, Photon and SQL serverless warehouses, and managing multiple teams on Unity Catalog, among others. This presentation dives into both the business and technical sides of this migration. Come along as we share our key takeaways from this journey.
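
As one concrete example of the features listed, here is a minimal sketch of enabling Liquid Clustering on a Delta table; catalog, schema, and column names are placeholders, not HP's schema:

```python
# Sketch: declare clustering keys at table creation, then let OPTIMIZE
# incrementally recluster data as it arrives.
spark.sql("""
    CREATE TABLE IF NOT EXISTS hp_print.analytics.telemetry_events (
        device_id STRING,
        event_ts  TIMESTAMP,
        payload   STRING
    )
    CLUSTER BY (device_id, event_ts)
""")
spark.sql("OPTIMIZE hp_print.analytics.telemetry_events")
```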

Retail data is expanding at an unprecedented rate, demanding a scalable, cost-efficient, and near real-time architecture. At Unilever, we transformed our data management approach by leveraging Databricks Lakeflow Declarative Pipelines, achieving approximately $500K in cost savings while accelerating computation speeds by 200–500%. By adopting a streaming-driven architecture, we built a system where data flows continuously across processing layers, enabling real-time updates with minimal latency. Lakeflow Declarative Pipelines' serverless simplicity replaced complex dependency management, reducing maintenance overhead and improving pipeline reliability. Lakeflow Declarative Pipelines Direct Publishing further enhanced data segmentation, concurrency, and governance, ensuring efficient and scalable data operations while simplifying workflows. This transformation empowers Unilever to manage data with greater efficiency, scalability, and reduced costs, creating a future-ready infrastructure that evolves with the needs of our retail partners and customers.
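
A minimal sketch of this declarative, streaming-first pattern, using the Python dlt module that backs Lakeflow Declarative Pipelines; paths, table, and column names are placeholders, not Unilever's implementation:

```python
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Bronze: raw retail events, ingested incrementally.")
def retail_events_raw():
    return (
        spark.readStream.format("cloudFiles")        # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/Volumes/retail/landing/events")      # placeholder path
    )

@dlt.table(comment="Silver: cleaned events, kept current by the stream.")
def retail_events_clean():
    return dlt.read_stream("retail_events_raw").where(col("store_id").isNotNull())
```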

Intuit's Privacy-Safe Lending Marketplace: Leveraging Databricks Clean Rooms

Intuit leverages Databricks Clean Rooms to create a secure, privacy-safe lending marketplace, enabling small business lending partners to perform analytics and deploy ML/AI workflows on sensitive data assets. This session explores the technical foundations of building isolated clean rooms across multiple partners and cloud providers, differentiating Databricks Clean Rooms from market alternatives. We'll demonstrate our automated approach to clean room lifecycle management using APIs, covering creation, collaborator onboarding, data asset sharing, workflow orchestration and activity auditing. The integration with Unity Catalog for managing clean room inputs and outputs will also be discussed. Attendees will gain insights into harnessing collaborative ML/AI potential, supporting various languages and workloads, and enabling complex computations without compromising sensitive information in Clean Rooms.
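
A hypothetical sketch of what API-driven lifecycle automation can look like; the endpoint path and payload fields below are assumptions for illustration, so consult the Databricks Clean Rooms API reference for the authoritative contract:

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Create a clean room (endpoint path and payload shape are assumptions).
create = requests.post(
    f"{host}/api/2.0/clean-rooms",
    headers=headers,
    json={"name": "lending_partner_room"},
)
create.raise_for_status()

# Enumerate rooms for lifecycle auditing.
rooms = requests.get(f"{host}/api/2.0/clean-rooms", headers=headers)
rooms.raise_for_status()
print(rooms.json())
```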

MLOps That Ships: Accelerating AI Deployment at Vizient

Deploying AI models efficiently and consistently is a challenge many organizations face. This session will explore how Vizient built a standardized MLOps stack using Databricks and Azure DevOps to streamline model development, deployment and monitoring. Attendees will gain insights into how Databricks Asset Bundles were leveraged to create reproducible, scalable pipelines and how Infrastructure-as-Code principles accelerated onboarding for new AI projects. The talk will cover:
- End-to-end MLOps stack setup, ensuring efficiency and governance
- CI/CD pipeline architecture, automating model versioning and deployment
- Standardizing AI model repositories, reducing development and deployment time
- Lessons learned, including challenges and best practices
By the end of this session, participants will have a roadmap for implementing a scalable, reusable MLOps framework that enhances operational efficiency across AI initiatives.
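
One building block of such a stack, sketched under assumptions (model and catalog names are placeholders, not Vizient's setup): train a model, register it in Unity Catalog via MLflow, and move an alias that downstream CI/CD deploys against.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_registry_uri("databricks-uc")  # register into Unity Catalog

# Train a toy model and log it as an MLflow run artifact.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
with mlflow.start_run():
    info = mlflow.sklearn.log_model(LogisticRegression().fit(X, y), "model")

# Register the logged model and point the "champion" alias at the new
# version, so deployment pipelines can promote by alias rather than version.
mv = mlflow.register_model(info.model_uri, "main.ml.risk_classifier")
client = mlflow.MlflowClient()
client.set_registered_model_alias("main.ml.risk_classifier", "champion", mv.version)
```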

Scaling Success: How Banks are Unlocking Growth With Data and AI

Growth in banking isn’t just about keeping pace—it’s about setting the pace. This session explores how leading banks leverage Databricks’ Data Intelligence Platform to uncover new revenue opportunities, deepen customer relationships, and expand market reach. Hear from industry leaders who have transformed their growth strategies by harnessing the power of advanced analytics and machine learning. Learn how personalized customer experiences, predictive insights and unified data platforms are driving innovation and helping banks scale faster than ever. Key takeaways:
- Proven strategies for identifying untapped growth opportunities using data-driven approaches
- Real-world examples of banks creating personalized customer journeys that boost retention and loyalty
- Tools and techniques to accelerate innovation while maintaining operational efficiency
Join us in discovering how data intelligence is redefining growth in banking and helping banks thrive through uncertainty.

Schiphol Group’s Transformation to Unity Catalog

Discover how Europe’s third-busiest airport, Schiphol Group, is elevating its data operations by transitioning from a standard Databricks setup to the advanced capabilities of Unity Catalog. In this session, we will share the motivations, obstacles and strategic decisions behind executing a seamless migration in a large-scale environment — one that spans hundreds of workspaces and demands continuous availability. Gain insights into planning and governance, learn how to safeguard data integrity and maintain operational flow, and understand the process of integrating Unity Catalog’s enhanced security and governance features. Attendees will leave with practical lessons from our hands-on experience, proven methods for similar migrations, and a clear perspective on the benefits this transition offers for complex, rapidly evolving organizations.

Sponsored by: Accenture & Avanade | How data strategy powers mission-critical work at the Gates Foundation

There’s never been a more critical time to ensure data and analytics foundations can deliver the value and efficiency needed to accelerate and scale AI. What are the most difficult challenges that organizations face with data transformation, and what technologies, processes and decisions overcome these barriers to success? Join this session featuring executives from the Gates Foundation, the nonprofit leading change in communities around the globe, and Avanade, the joint venture between Accenture and Microsoft, for a discussion about impactful data strategy. Learn about the Gates Foundation’s approach to its enterprise data platform and how it ensures trusted insights at the speed of today’s business. We’ll also share lessons learned from Avanade’s work helping organizations around the globe build with Databricks and seize the AI opportunity.

Sponsored by: Deloitte | Analyzing Geospatial Data at Scale in Databricks for Environment & Agriculture

Analyzing geospatial data has become a cornerstone of tackling many of today’s pressing challenges, from climate change to resource management. However, storing and processing such data can be complex and hard to scale using common GIS packages. This talk explores how Deloitte and Databricks enable horizontally scalable geospatial analysis using Delta Lake, H3 integration and support for geospatial vector and raster data. We demonstrate how we have leveraged these capabilities for real-world applications in environmental monitoring and agriculture. In doing so, we cover end-to-end processing, from ingestion, transformation and analysis to the production of geospatial data products accessible to scientists and decision makers through standard GIS tools.
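
A minimal sketch of the H3 indexing step, using Databricks' built-in H3 SQL functions; the table and columns are placeholders for a field-sensor dataset:

```python
# Sketch: aggregate point observations into H3 cells (resolution 7 hexagons
# cover roughly 5 km^2 each) so downstream tools work on uniform cells.
df = spark.sql("""
    SELECT h3_longlatash3(longitude, latitude, 7) AS h3_cell,
           AVG(soil_moisture)                     AS avg_moisture,
           COUNT(*)                               AS n_obs
    FROM   env.agri.sensor_readings
    GROUP  BY h3_cell
    ORDER  BY n_obs DESC
""")
df.show()
```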

Sponsored by: KPMG | Enhancing Regulatory Compliance through Data Quality and Traceability

In highly regulated industries like financial services, maintaining data quality is an ongoing challenge. Reactive measures often fail to prevent regulatory penalties, causing inaccuracies in reporting and inefficiencies due to poor data visibility. Regulators closely examine the origins and accuracy of reporting calculations to ensure compliance, so a robust system for data quality and lineage is crucial. Organizations are utilizing Databricks to proactively improve data quality through rules-based and AI/ML-driven methods. This fosters complete visibility across IT, data management, and business operations, facilitating rapid issue resolution and continuous data quality enhancement. The outcome is quicker, more accurate, and more transparent financial reporting. We will detail a framework for data observability and offer practical examples of implementing quality checks throughout the data lifecycle, focusing specifically on creating data pipelines for regulatory reporting.
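
Rules-based checks like these can be written declaratively; a minimal sketch using dlt expectations, with table, column, and rule names as illustrative placeholders rather than KPMG's framework:

```python
import dlt

# Sketch: declarative quality gates on a regulatory feed. Failed rows are
# dropped (and counted in pipeline metrics) or halt the pipeline outright.
@dlt.table(comment="Trades admitted into regulatory reporting.")
@dlt.expect_or_drop("has_trade_id", "trade_id IS NOT NULL")    # drop bad rows
@dlt.expect_or_fail("non_negative_notional", "notional >= 0")  # halt on breach
def trades_validated():
    return spark.readStream.table("finance.raw.trades")        # placeholder
```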

Sponsored by: LTIMindtree | 4 Strategies to Maximize SAP Data Value with Databricks and AI

As enterprises strive to become more data-driven, SAP continues to be central to their operational backbone. However, traditional SAP ecosystems often limit the potential of AI and advanced analytics due to fragmented architectures and legacy tools. In this session, we explore four strategic options for unlocking greater value from SAP data by integrating with Databricks and cloud-native platforms. Whether you're on ECC or S/4HANA, or transitioning from BW, learn how to modernize your data landscape, enable real-time insights, and power AI/ML at scale. Discover how SAP Business Data Cloud and SAP Databricks can help you build a unified, future-ready data and analytics ecosystem—without compromising on scalability, flexibility, or cost-efficiency.

Stop Guessing, Spend Where It Counts: Data-Driven Decisions for High-Impact Investments on Databricks

Struggling with runaway cloud costs as your organization grows? Join us for an inside look at how Databricks’ own Data Platform team tackled escalating spend in some of the world’s largest workspaces — saving millions of dollars without sacrificing performance or user experience. We’ll share how we harnessed powerful features like System Tables, Workflows, Unity Catalog, and Photon to monitor and optimize resource usage, all while using data-driven decisions to improve efficiency and ensure we invest in the areas that truly drive business impact. You’ll hear about the real-world challenges we faced balancing governance with velocity and discover the custom tooling and best practices we developed to keep costs in check. By the end of this session, you’ll walk away with a proven roadmap for leveraging Databricks to control cloud spend at scale.
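
A starting point for this kind of monitoring is querying the billing system tables directly; a minimal sketch (system.billing.usage and system.billing.list_prices are documented system tables, but the join below is a simplified cost estimate, not Databricks' internal methodology):

```python
# Sketch: estimate last-30-day spend per workspace and SKU from system tables.
spend = spark.sql("""
    SELECT u.workspace_id,
           u.sku_name,
           SUM(u.usage_quantity * p.pricing.default) AS est_cost
    FROM   system.billing.usage       AS u
    JOIN   system.billing.list_prices AS p
      ON   u.sku_name = p.sku_name
     AND   u.usage_start_time >= p.price_start_time
     AND  (p.price_end_time IS NULL OR u.usage_start_time < p.price_end_time)
    WHERE  u.usage_date >= date_sub(current_date(), 30)
    GROUP  BY u.workspace_id, u.sku_name
    ORDER  BY est_cost DESC
""")
spend.show()
```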