talk-data.com

Topic

Databricks

big_data analytics spark

1286 tagged

Activity Trend

515 peak/qtr (2020-Q1 to 2026-Q1)

Activities

1286 activities · Newest first

How the Texas Rangers Use a Unified Data Platform to Drive World Class Baseball Analytics

Don't miss this session where we demonstrate how the Texas Rangers baseball team is staying one step ahead of the competition by going back to the basics. After the Rangers implemented a modern data strategy with Databricks and won the 2023 World Series, the rest of the league quickly followed suit. Now more than ever, data and AI are a central pillar of every baseball team's strategy, driving profound insights into player performance and game dynamics. With a 'fundamentals win games' back-to-basics focus, join us as we explain our commitment to world-class data quality, engineering, and MLOps by taking full advantage of the Databricks Data Intelligence Platform. From system tables to federated querying, find out how the Rangers use every tool at their disposal to stay one step ahead in the hyper-competitive world of baseball.

HP's Data Platform Migration Journey: Redshift to Lakehouse

HP Print's data platform team took on a migration from a monolithic, shared AWS Redshift resource to a modular and scalable data ecosystem on the Databricks lakehouse. The result was 30–40% cost savings, scalable and isolated resources for different data consumers and ETL workloads, and performance optimization for a variety of query types. The migration surfaced technical challenges and learnings relating to ETL migration with dbt, new Databricks features such as Liquid Clustering, Predictive Optimization, Photon and serverless SQL warehouses, and managing multiple teams on Unity Catalog. This presentation dives into both the business and technical sides of this migration. Come along as we share our key takeaways from this journey.
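To make one of those features concrete, here is a minimal, hedged sketch of enabling Liquid Clustering on an existing Delta table; the table and clustering columns are hypothetical, and `spark` is the ambient SparkSession of a Databricks notebook.

```python
# Hedged sketch: adopting Liquid Clustering on an existing Delta table.
# Table and column names below are hypothetical placeholders.
spark.sql("""
    ALTER TABLE print_analytics.telemetry.page_events
    CLUSTER BY (device_id, event_date)
""")

# OPTIMIZE incrementally reclusters data according to the new layout.
spark.sql("OPTIMIZE print_analytics.telemetry.page_events")
```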

Retail data is expanding at an unprecedented rate, demanding a scalable, cost-efficient, and near real-time architecture. At Unilever, we transformed our data management approach by leveraging Databricks Lakeflow Declarative Pipelines, achieving approximately $500K in cost savings while accelerating computation speeds by 200–500%. By adopting a streaming-driven architecture, we built a system where data flows continuously across processing layers, enabling real-time updates with minimal latency. The serverless simplicity of Lakeflow Declarative Pipelines replaced complex dependency management, reducing maintenance overhead and improving pipeline reliability. Direct Publishing further enhanced data segmentation, concurrency, and governance, ensuring efficient and scalable data operations while simplifying workflows. This transformation empowers Unilever to manage data with greater efficiency, scalability, and reduced costs, creating a future-ready infrastructure that evolves with the needs of our retail partners and customers.
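As a hedged sketch of what such a streaming-first declarative pipeline can look like (not Unilever's actual code), the example below uses the classic `dlt` Python API for Lakeflow Declarative Pipelines; the table names and landing path are hypothetical.

```python
# Hedged sketch: a declarative streaming pipeline with the dlt API.
# Table names and the landing path are hypothetical placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw retail events, ingested incrementally with Auto Loader")
def retail_events_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/retail/landing/events")  # hypothetical path
    )

@dlt.table(comment="Cleaned events, updated continuously as new files arrive")
def retail_events_clean():
    return (
        dlt.read_stream("retail_events_raw")
        .where(F.col("store_id").isNotNull())
        .withColumn("ingested_at", F.current_timestamp())
    )
```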

Intuit's Privacy-Safe Lending Marketplace: Leveraging Databricks Clean Rooms

Intuit leverages Databricks Clean Rooms to create a secure, privacy-safe lending marketplace, enabling small business lending partners to perform analytics and deploy ML/AI workflows on sensitive data assets. This session explores the technical foundations of building isolated clean rooms across multiple partners and cloud providers, differentiating Databricks Clean Rooms from market alternatives. We'll demonstrate our automated approach to clean room lifecycle management using APIs, covering creation, collaborator onboarding, data asset sharing, workflow orchestration and activity auditing. The integration with Unity Catalog for managing clean room inputs and outputs will also be discussed. Attendees will gain insights into harnessing collaborative ML/AI potential, supporting various languages and workloads, and enabling complex computations without compromising sensitive information in Clean Rooms.
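As a rough, hedged illustration of API-driven lifecycle automation (not Intuit's actual tooling), the sketch below creates a clean room over the REST API with plain HTTP; the endpoint path and payload shape are assumptions to verify against the current Clean Rooms API reference, and all names are hypothetical.

```python
# Hedged sketch: creating a clean room programmatically over REST.
# The endpoint path and payload shape are assumptions; check them
# against the current Databricks Clean Rooms API reference.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{host}/api/2.0/clean-rooms",      # assumed endpoint
    headers={"Authorization": f"Bearer {token}"},
    json={                              # assumed payload shape
        "name": "lending_partner_room", # hypothetical clean room name
        "remote_detailed_info": {
            "cloud_vendor": "aws",
            "region": "us-west-2",
            "collaborators": [
                {"global_metastore_id": "aws:us-west-2:<partner-metastore-id>"}
            ],
        },
    },
)
resp.raise_for_status()
print(resp.json())
```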

MLOps That Ships: Accelerating AI Deployment at Vizient

Deploying AI models efficiently and consistently is a challenge many organizations face. This session will explore how Vizient built a standardized MLOps stack using Databricks and Azure DevOps to streamline model development, deployment and monitoring. Attendees will gain insights into how Databricks Asset Bundles were leveraged to create reproducible, scalable pipelines and how Infrastructure-as-Code principles accelerated onboarding for new AI projects. The talk will cover:

- End-to-end MLOps stack setup, ensuring efficiency and governance
- CI/CD pipeline architecture, automating model versioning and deployment (see the sketch after this abstract)
- Standardizing AI model repositories, reducing development and deployment time
- Lessons learned, including challenges and best practices

By the end of this session, participants will have a roadmap for implementing a scalable, reusable MLOps framework that enhances operational efficiency across AI initiatives.
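The model-versioning step in such a CI/CD pipeline might look like the hedged sketch below, which registers a training run's model into Unity Catalog with MLflow; the run ID, model name and alias are hypothetical, and this is an illustration rather than Vizient's actual code.

```python
# Hedged sketch: the automated model-versioning step of a CI/CD pipeline.
# Run ID, model name and alias are hypothetical placeholders.
import mlflow

mlflow.set_registry_uri("databricks-uc")  # register into Unity Catalog

run_id = "abc123"  # hypothetical: the training run promoted by CI
version = mlflow.register_model(
    model_uri=f"runs:/{run_id}/model",
    name="ml.prod.readmission_risk",      # hypothetical UC model name
)

# Alias the new version so downstream deployment jobs can resolve it.
client = mlflow.MlflowClient()
client.set_registered_model_alias(
    "ml.prod.readmission_risk", "champion", version.version
)
print(f"Registered version {version.version} and set the 'champion' alias")
```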

Scaling Success: How Banks are Unlocking Growth With Data and AI

Growth in banking isn’t just about keeping pace—it’s about setting the pace. This session explores how leading banks leverage Databricks’ Data Intelligence Platform to uncover new revenue opportunities, deepen customer relationships, and expand market reach. Hear from industry leaders who have transformed their growth strategies by harnessing the power of advanced analytics and machine learning. Learn how personalized customer experiences, predictive insights and unified data platforms are driving innovation and helping banks scale faster than ever. Key takeaways:

- Proven strategies for identifying untapped growth opportunities using data-driven approaches
- Real-world examples of banks creating personalized customer journeys that boost retention and loyalty
- Tools and techniques to accelerate innovation while maintaining operational efficiency

Join us in discovering how data intelligence is redefining growth in banking and helping banks thrive through uncertainty.

Schiphol Group’s Transformation to Unity Catalog

Discover how Europe’s third-busiest airport, Schiphol Group, is elevating its data operations by transitioning from a standard Databricks setup to the advanced capabilities of Unity Catalog. In this session, we will share the motivations, obstacles and strategic decisions behind executing a seamless migration in a large-scale environment — one that spans hundreds of workspaces and demands continuous availability. Gain insights into planning and governance, learn how to safeguard data integrity and maintain operational flow, and understand the process of integrating Unity Catalog’s enhanced security and governance features. Attendees will leave with practical lessons from our hands-on experience, proven methods for similar migrations, and a clear perspective on the benefits this transition offers for complex, rapidly evolving organizations.

Sponsored by: Accenture & Avanade | How data strategy powers mission-critical work at the Gates Foundation

There’s never been a more critical time to ensure data and analytics foundations can deliver the value and efficiency needed to accelerate and scale AI. What are the most difficult challenges that organizations face with data transformation, and what technologies, processes and decisions overcome these barriers to success? Join this session featuring executives from the Gates Foundation, the nonprofit leading change in communities around the globe, and Avanade, the joint venture between Accenture and Microsoft, in a discussion about impactful data strategy. Learn about the Gates Foundation’s approach to its enterprise data platform to ensure trusted insights at the speed of today’s business. We’ll also share lessons learned from Avanade’s work helping organizations around the globe build with Databricks and seize the AI opportunity.

Sponsored by: Deloitte | Analyzing Geospatial Data at Scale in Databricks for Environment & Agriculture

Analyzing geospatial data has become a cornerstone of tackling many of today’s pressing challenges, from climate change to resource management. However, storing and processing such data can be complex and hard to scale using common GIS packages. This talk explores how Deloitte and Databricks enable horizontally scalable geospatial analysis using Delta Lake, H3 integration and support for geospatial vector and raster data. We demonstrate how we have leveraged these capabilities for real-world applications in environmental monitoring and agriculture. In doing so, we cover end-to-end processing from ingestion, transformation and analysis to the production of geospatial data products accessible to scientists and decision makers through standard GIS tools.
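To illustrate the H3 integration mentioned above, here is a hedged sketch of hexagon-level aggregation using Databricks' built-in h3_longlatash3 function; the catalog, table and column names are hypothetical, and `spark` is the ambient SparkSession.

```python
# Hedged sketch: aggregating point readings into H3 hexagons with the
# built-in h3_longlatash3 function. Table and columns are hypothetical.
df = spark.sql("""
    SELECT
        h3_longlatash3(longitude, latitude, 7) AS h3_cell,   -- resolution-7 cells
        AVG(soil_moisture)                     AS avg_moisture,
        COUNT(*)                               AS n_readings
    FROM agri.sensors.field_readings           -- hypothetical UC table
    GROUP BY 1
""")
df.show(5, truncate=False)
```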

Sponsored by: KPMG | Enhancing Regulatory Compliance through Data Quality and Traceability

In highly regulated industries like financial services, maintaining data quality is an ongoing challenge. Reactive measures often fail to prevent regulatory penalties, causing inaccuracies in reporting and inefficiencies due to poor data visibility. Regulators closely examine the origins and accuracy of reporting calculations to ensure compliance. A robust system for data quality and lineage is crucial. Organizations are utilizing Databricks to proactively improve data quality through rules-based and AI/ML-driven methods. This fosters complete visibility across IT, data management, and business operations, facilitating rapid issue resolution and continuous data quality enhancement. The outcome is quicker, more accurate, transparent financial reporting. We will detail a framework for data observability and offer practical examples of implementing quality checks throughout the data lifecycle, specifically focusing on creating data pipelines for regulatory reporting.
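One concrete, hedged example of such rules-based checks is pipeline expectations in a declarative pipeline; the sketch below drops, logs or fails on rule violations before records reach a regulatory report, with all table names and rules hypothetical.

```python
# Hedged sketch: rules-based quality gates via pipeline expectations.
# Table names and rules are hypothetical placeholders.
import dlt

@dlt.table(comment="Trade records validated before regulatory reporting")
@dlt.expect_or_drop("valid_trade_id", "trade_id IS NOT NULL")   # drop bad rows
@dlt.expect("notional_positive", "notional > 0")                # log, keep rows
@dlt.expect_or_fail("known_currency", "currency IN ('USD','EUR','GBP')")  # halt
def validated_trades():
    return dlt.read_stream("raw_trades")
```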

Sponsored by: LTIMindtree | 4 Strategies to Maximize SAP Data Value with Databricks and AI

As enterprises strive to become more data-driven, SAP continues to be central to their operational backbone. However, traditional SAP ecosystems often limit the potential of AI and advanced analytics due to fragmented architectures and legacy tools. In this session, we explore four strategic options for unlocking greater value from SAP data by integrating with Databricks and cloud-native platforms. Whether you're on ECC, S/4HANA, or transitioning from BW, learn how to modernize your data landscape, enable real-time insights, and power AI/ML at scale. Discover how SAP Business Data Cloud and SAP Databricks can help you build a unified, future-ready data and analytics ecosystem—without compromising on scalability, flexibility, or cost-efficiency.

Stop Guessing, Spend Where It Counts: Data-Driven Decisions for High-Impact Investments on Databricks

Struggling with runaway cloud costs as your organization grows? Join us for an inside look at how Databricks’ own Data Platform team tackled escalating spend in some of the world’s largest workspaces — saving millions of dollars without sacrificing performance or user experience. We’ll share how we harnessed powerful features like System Tables, Workflows, Unity Catalog, and Photon to monitor and optimize resource usage, all while using data-driven decisions to improve efficiency and ensure we invest in the areas that truly drive business impact. You’ll hear about the real-world challenges we faced balancing governance with velocity and discover the custom tooling and best practices we developed to keep costs in check. By the end of this session, you’ll walk away with a proven roadmap for leveraging Databricks to control cloud spend at scale.
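As a hedged taste of the System Tables approach described above, the query below aggregates recent DBU consumption from the documented system.billing.usage table; verify the schema against your workspace before relying on it.

```python
# Hedged sketch: daily DBU consumption by SKU over the last 30 days,
# from the system.billing.usage table. Verify columns in your workspace.
daily_dbus = spark.sql("""
    SELECT
        usage_date,
        sku_name,
        SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY usage_date, sku_name
    ORDER BY usage_date DESC, dbus DESC
""")
daily_dbus.show(10, truncate=False)
```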

The Full Stack of Innovation: Building Data and AI Products With Databricks Apps

In this deep-dive technical session, Ivan Trusov (Sr. SSA @ Databricks) and Giran Moodley (SA @ Databricks) will explore the full-stack development of Databricks Apps, covering everything from frameworks to deployment. We’ll walk through essential topics, including:

- Frameworks & tooling — Pythonic (Dash, Streamlit, Gradio) vs. JS + Python stack (see the minimal Streamlit sketch after this abstract)
- Development lifecycle — debugging, issue resolution and best practices
- Testing — unit, integration and load testing strategies
- CI/CD & deployment — automating with Databricks Asset Bundles
- Monitoring & observability — OpenTelemetry, metrics collection and analysis

Expect a highly practical session with several live demos, showcasing the development loop, testing workflows and CI/CD automation. Whether you’re building internal tools or AI-powered products, this talk will equip you with the knowledge to ship robust, scalable Databricks Apps.
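For a feel of the Pythonic end of that stack, here is a hedged, minimal Streamlit app of the sort that can run as a Databricks App; the table name and connection environment variables are placeholders, and this is not the presenters' demo code.

```python
# app.py — hedged sketch of a minimal Streamlit app querying a SQL
# warehouse. The table and env var values are placeholders.
import os

import pandas as pd
import streamlit as st
from databricks import sql

st.title("Order volume explorer")

conn = sql.connect(
    server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],   # /sql/1.0/warehouses/<id>
    access_token=os.environ["DATABRICKS_TOKEN"],
)
with conn.cursor() as cur:
    cur.execute(
        "SELECT order_date, COUNT(*) AS n FROM sales.orders GROUP BY order_date"
    )
    df = pd.DataFrame(cur.fetchall(), columns=["order_date", "n"])

st.bar_chart(df.set_index("order_date"))
```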

Use External Models in Databricks: Connecting to Azure, AWS, Google Cloud, Anthropic and More

In this session, you will learn how to leverage a wide set of GenAI models in Databricks, including external connections to cloud vendors and other model providers. We will cover establishing connections to externally served models via Mosaic AI Gateway, showcasing connections to Azure, AWS and Google Cloud models as well as to model vendors like Anthropic, Cohere, AI21 Labs and more. You will also discover best practices for model comparison, governance and cost control on those model deployments.
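Once an external-model endpoint is configured, querying it can be as simple as the hedged sketch below, which uses the MLflow deployments client; the endpoint name is hypothetical and must already route to a provider such as Anthropic, and the response shape assumes the OpenAI-compatible chat format.

```python
# Hedged sketch: querying an external model through a serving endpoint
# with the MLflow deployments client. Endpoint name is hypothetical.
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")
response = client.predict(
    endpoint="claude-chat",   # hypothetical external-model endpoint
    inputs={
        "messages": [
            {"role": "user", "content": "Summarize Unity Catalog in one sentence."}
        ],
        "max_tokens": 128,
    },
)
# Assumes the OpenAI-compatible chat response shape.
print(response["choices"][0]["message"]["content"])
```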

Building AI Models of the Human Cell: Tahoe Therapeutics on Databricks

Discover how Tahoe Therapeutics (formerly Vevo) is generating gigascale single-cell data that map how drugs interact with cells from cancer patients. Tahoe uses these data to find better therapeutics and to build AI models on Databricks that predict drug-patient interactions. Their technology enabled the landmark Tahoe-100M atlas, the world’s largest dataset of drug responses, profiling 100 million cells across 60,000 conditions. Learn how Tahoe uses Databricks to process this massive dataset, enabling AI models that predict drug efficacy and resistance at the cellular level. Recognized as the Grand Prize Winner of the Databricks Generative AI Startup Challenge, Tahoe sets a new standard for scalable, data-driven drug discovery.

How Serverless Empowered Nationwide to Build Cost-Efficient and World-Class BI

Databricks’ Serverless compute streamlines infrastructure setup and management, delivering unparalleled performance and cost optimization for Data and BI workflows. In this presentation, we will explore how Nationwide is leveraging Databricks’ serverless technology and unified governance through Unity Catalog to build scalable, world-class BI solutions. Key features like AI/BI Dashboards, Genie, Materialized Views, Lakehouse Federation and Lakehouse Apps, all powered by serverless, have empowered business teams to deliver faster, scalable and smarter insights. We will show how Databricks’ serverless technology is enabling Nationwide to unlock new levels of efficiency and business impact, and how other organizations can adopt serverless technology to realize similar benefits.
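As a small, hedged illustration of one serverless-powered feature mentioned above, the sketch below creates a materialized view that precomputes a BI aggregate; all names are hypothetical, and the statement is typically run against a serverless SQL warehouse.

```python
# Hedged sketch: a materialized view precomputing a BI aggregate.
# Catalog, schema and table names are hypothetical placeholders.
spark.sql("""
    CREATE MATERIALIZED VIEW bi.reporting.daily_claims AS
    SELECT claim_date,
           region,
           COUNT(*)    AS n_claims,
           SUM(amount) AS total_amount
    FROM insurance.silver.claims
    GROUP BY claim_date, region
""")
```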

Race to Real-Time: Low-Latency Streaming ETL Meets Next-Gen Databricks OLTP-DB

In today’s digital economy, real-time insights and rapid responsiveness are paramount to delivering exceptional user experiences and lowering TCO. In this session, discover a pioneering approach that leverages a low-latency streaming ETL pipeline built with Spark Structured Streaming and Databricks’ new OLTP-DB—a serverless, managed Postgres offering designed for transactional workloads. Validated in a live customer scenario, this architecture achieves sub-two-second end-to-end latency by seamlessly ingesting streaming data from Kinesis and merging it into OLTP-DB. This breakthrough not only enhances performance and scalability but also provides a replicable blueprint for transforming data pipelines across various verticals. Join us as we delve into the advanced optimization techniques and best practices that underpin this innovation, demonstrating how Databricks’ next-generation solutions can revolutionize real-time data processing and unlock a myriad of new use cases across the data landscape.
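The overall pipeline shape reads roughly like the hedged sketch below: a Kinesis source feeding micro-batches into Postgres. Stream name, schema, connection details and the trigger interval are hypothetical, and a plain JDBC append stands in for the presenters' merge logic.

```python
# Hedged sketch: Kinesis -> Structured Streaming -> Postgres via JDBC.
# Stream name, schema, and connection details are placeholders.
from pyspark.sql import functions as F

events = (
    spark.readStream.format("kinesis")
    .option("streamName", "orders-stream")   # hypothetical stream
    .option("region", "us-west-2")
    .load()
    .select(
        F.from_json(
            F.col("data").cast("string"),
            "order_id STRING, amount DOUBLE, ts TIMESTAMP",
        ).alias("o")
    )
    .select("o.*")
)

def write_to_postgres(batch_df, batch_id):
    # Plain JDBC append shown here; a true upsert would stage and MERGE.
    (batch_df.write.format("jdbc")
        .option("url", "jdbc:postgresql://<host>:5432/app")  # placeholder
        .option("dbtable", "public.orders")
        .option("user", "etl")
        .option("password", "<secret>")      # use a secret scope in practice
        .mode("append")
        .save())

(events.writeStream
    .foreachBatch(write_to_postgres)
    .option("checkpointLocation", "/Volumes/etl/chk/orders")  # placeholder
    .trigger(processingTime="1 second")
    .start())
```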

Sponsored by: Cognizant | How Cognizant Helped RJR Transform Market Intelligence with GenAI

Cognizant developed a GenAI-driven market intelligence chatbot for RJR using a Dash UI. This chatbot leverages Databricks Vector Search for vector embeddings and semantic search, along with the DBRX-Instruct LLM, to provide accurate and contextually relevant responses to user queries. The implementation involved loading prepared metadata into a Databricks vector database using the GTE model to create vector embeddings, indexing these embeddings for efficient semantic search, and integrating DBRX-Instruct into the chat system with prompts that guide it in understanding and responding to user queries. The chatbot also generated responses containing URL links to dashboards with the requested numerical values, enhancing user experience and productivity by reducing report navigation and discovery time by 30%. This project stands out due to its innovative AI application, advanced reasoning techniques, user-friendly interface, and seamless integration with MicroStrategy.
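The retrieval step in such a system can look like the hedged sketch below, which runs a semantic search against a Databricks Vector Search index; the endpoint and index names are hypothetical, and this is an illustration rather than Cognizant's implementation.

```python
# Hedged sketch: semantic retrieval over a Vector Search index.
# Endpoint and index names are hypothetical placeholders.
from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()
index = vsc.get_index(
    endpoint_name="market-intel-endpoint",
    index_name="main.market_intel.report_metadata_index",
)
results = index.similarity_search(
    query_text="How did premium brand share trend last quarter?",
    columns=["report_name", "dashboard_url", "summary"],
    num_results=5,
)
for row in results["result"]["data_array"]:
    print(row)   # candidate context passed to the LLM prompt
```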