
Topic: Databricks
Tags: big_data, analytics, spark
1286 tagged activities

Activity trend: 515 peak activities/quarter, 2020-Q1 to 2026-Q1

Activities (1286 · newest first)

Building Intelligent AI Agents With Claude Models and Databricks Mosaic AI Framework

This session is repeated. Explore how Anthropic's frontier models power AI agents in the Databricks Mosaic AI Agent Framework. Learn to leverage Claude's state-of-the-art capabilities for complex agentic workflows while benefiting from Databricks' unified governance, credential management and evaluation tools. We'll demonstrate how Anthropic's models integrate seamlessly to create production-ready applications that combine Claude's reasoning with Databricks' data intelligence capabilities.
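
As a hedged illustration of the integration described above, the sketch below queries a Claude model served on Databricks through the MLflow Deployments client; the endpoint name and request payload are assumptions, not the session's actual code.

```python
# Hypothetical sketch: calling a Claude serving endpoint on Databricks via the
# MLflow Deployments client. Endpoint name and payload schema are assumptions.
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

response = client.predict(
    endpoint="claude-agent-endpoint",  # hypothetical endpoint name
    inputs={
        "messages": [
            {"role": "user", "content": "Summarize open support tickets by priority."}
        ],
        "max_tokens": 512,
    },
)
print(response)
```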

Comprehensive Guide to MLOps on Databricks

This in-depth session explores advanced MLOps practices for implementing production-grade machine learning workflows on Databricks. We'll examine the complete MLOps journey from foundational principles to sophisticated implementation patterns, covering essential tools including MLflow, Unity Catalog, Feature Stores and version control with Git. Dive into Databricks' latest MLOps capabilities including MLflow 3.0, which enhances the entire ML lifecycle from development to deployment with particular focus on generative AI applications. Key session takeaways include:
• Advanced MLflow 3.0 features for LLM management and deployment
• Enterprise-grade governance with Unity Catalog integration
• Robust promotion patterns across development, staging and production
• CI/CD pipeline automation for continuous deployment
• GenAI application evaluation and streamlined deployment
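
As a minimal sketch of the registration step in the promotion pattern above, the snippet below logs a scikit-learn model with MLflow and registers it in Unity Catalog; the three-level model name is a placeholder, not the session's code.

```python
# Minimal sketch: log a model with MLflow and register it in Unity Catalog.
# The catalog.schema.model name below is a placeholder.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=42)
model = RandomForestClassifier(random_state=42).fit(X, y)

mlflow.set_registry_uri("databricks-uc")  # use Unity Catalog as the model registry

with mlflow.start_run():
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="dev.ml_models.churn_classifier",
    )
```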

Databricks on Databricks: Transforming the Sales Experience using GenAI Agents at Scale

Databricks is transforming its sales experience with a GenAI agent — built and deployed entirely on Databricks — to automate tasks, streamline data retrieval, summarize content, and enable conversational AI for over 4,000 sellers. This agent leverages the AgentEval framework, AI Bricks, and Model Serving to process both structured and unstructured data within Databricks, unlocking deep sales insights. The agent integrates seamlessly and securely across multiple data sources, including Salesforce, Google Drive, and Glean, via OAuth. This session includes a live demonstration and explores the business impact and architecture, as well as agent development and evaluation strategies, providing a blueprint for deploying secure, scalable GenAI agents in large enterprises.

Defending Revenue With GenAI

Defending revenue is critical to any business strategy, and predicting customer churn is difficult. Until now. In this session, Blueprint will share how their clients use GenAI on Databricks to reduce customer churn, grow average revenue per user, and create overall revenue growth. This presentation will demonstrate how they helped a customer take a GenAI-powered personalization engine from proof of concept to production to reduce churn propensity and improve personalized retention and customer satisfaction. Learn how to turn your lakehouse from a cost center into a profit center.

Morgan Stanley, a highly regulated financial institution, needs to meet stringent security and regulatory requirements around data storage and processing. Traditionally, this has necessitated maintaining control over data and compute within their own accounts, with the associated management overhead. In this session, we will cover how Morgan Stanley has partnered with Databricks on a fully managed compute and storage solution that allows them to meet their regulatory obligations with significantly reduced effort. This innovative approach enables rapid onboarding of new projects onto the platform, improving operational efficiency while maintaining the highest levels of security and compliance.

Franchise IP and Data Governance at Krafton: Driving Cost Efficiency and Scalability

Join us as we explore how KRAFTON optimized data governance for the PUBG IP, enhancing cost efficiency and scalability. KRAFTON operates a massive data ecosystem, processing tens of terabytes daily. As real-time analytics demands increased, traditional batch-based processing faced scalability challenges. To address this, we redesigned data pipelines and governance models, improving performance while reducing costs:
• Transitioned to real-time pipelines (batch to streaming)
• Optimized workload management (reducing all-purpose clusters, increasing Jobs usage)
• Cut costs by tens of thousands of dollars monthly (up to 75%)
• Enhanced data storage efficiency (lower S3 costs, Delta Tables)
• Improved pipeline stability (Medallion Architecture)
Gain insights into how KRAFTON scaled data operations, leveraging real-time analytics and cost optimization for high-traffic games. Learn more: https://www.databricks.com/customers/krafton
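
To make the batch-to-streaming transition concrete, here is an illustrative Structured Streaming job (not KRAFTON's actual pipeline; all table names and paths are invented) that incrementally moves bronze events into a silver Delta table:

```python
# Illustrative only: an incremental bronze-to-silver hop using Structured
# Streaming. Assumes a Databricks notebook where `spark` is predefined.
from pyspark.sql import functions as F

(
    spark.readStream.table("bronze.game_events")        # placeholder source
    .withColumn("event_date", F.to_date("event_ts"))
    .writeStream.format("delta")
    .option("checkpointLocation", "/chk/silver_game_events")
    .trigger(availableNow=True)  # process available data, then stop (job-friendly)
    .toTable("silver.game_events")                      # placeholder target
)
```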

From Prediction to Prevention: Transforming Risk Management in Insurance

Protecting insurers against emerging threats is critical. This session reveals how leading companies use Databricks’ Data Intelligence Platform to transform risk management, enhance fraud detection, and ensure compliance. Learn how advanced analytics, AI, and machine learning process vast data in real time to identify risks and mitigate threats. Industry leaders will share strategies for building resilient operations that protect against financial losses and reputational harm. Key takeaways:
• AI-powered fraud prevention using anomaly detection and predictive analytics
• Real-time risk assessment models integrating IoT, behavioral, and external data
• Strategies for robust compliance and governance with operational efficiency
Discover how data intelligence is revolutionizing insurance risk management and safeguarding the industry’s future.

This introductory workshop caters to data engineers seeking hands-on experience and data architects looking to deepen their knowledge. The workshop is structured to provide a solid understanding of the following data engineering and streaming concepts:
• Introduction to Lakeflow and the Data Intelligence Platform
• Getting started with Lakeflow Declarative Pipelines for declarative data pipelines in SQL using Streaming Tables and Materialized Views
• Mastering Databricks Workflows with advanced control flow and triggers
• Understanding serverless compute
• Data governance and lineage with Unity Catalog
• Generative AI for Data Engineers: Genie and Databricks Assistant
We believe you can only become an expert if you work on real problems and gain hands-on experience. Therefore, we will equip you with your own lab environment in this workshop and guide you through practical exercises like using GitHub, ingesting data from various sources, creating batch and streaming data pipelines, and more.
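
For a taste of what the declarative-pipeline exercises look like, here is a hedged sketch using the Python pipeline API (the workshop itself uses SQL; the source path and table names are placeholders):

```python
# Hedged sketch of a declarative pipeline in Python. This code only runs
# inside a Lakeflow/DLT pipeline; path and table names are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/raw_orders")
    )

@dlt.table(comment="Cleaned orders")
def orders_silver():
    return dlt.read_stream("orders_bronze").where(F.col("order_id").isNotNull())
```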

Want to learn how to build your own custom data intelligence applications directly in Databricks? In this workshop, we’ll guide you through a hands-on tutorial for building a Streamlit web app that leverages many of the key products at Databricks as building blocks. You’ll integrate a live DB SQL warehouse, use Genie to ask questions in natural language, and embed AI/BI dashboards for interactive visualizations. In addition, we’ll discuss key concepts and best practices for building production-ready apps, including logging and observability, scalability, different authorization models, and deployment. By the end, you'll have a working AI app—and the skills to build more.
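
As a compact sketch of this pattern (connection details and the query are placeholders, not the workshop's code), a Streamlit app can query a Databricks SQL warehouse like this:

```python
# Hedged sketch: a Streamlit app querying a Databricks SQL warehouse.
# Hostname, HTTP path, token, and the sales table are placeholders.
import os

import streamlit as st
from databricks import sql  # pip install databricks-sql-connector

st.title("Sales Explorer")

with sql.connect(
    server_hostname=os.environ["DATABRICKS_HOST"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],
    access_token=os.environ["DATABRICKS_TOKEN"],
) as conn:
    with conn.cursor() as cursor:
        cursor.execute(
            "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
        )
        df = cursor.fetchall_arrow().to_pandas()

st.dataframe(df)
st.bar_chart(df.set_index("region"))
```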

How Skyscanner Runs Real-Time AI at Scale with Databricks

Deploying AI in production is getting more complex — with different model types, tighter timelines, and growing infrastructure demands. In this session, we’ll walk through how Mosaic AI Model Serving helps teams deploy and scale both traditional ML and generative AI models efficiently, with built-in monitoring and governance. We’ll also hear from Skyscanner on how they’ve integrated AI into their products, scaled to 100+ production endpoints, and built the processes and team structures to support AI at scale. Key takeaways:
• How Skyscanner ships and operates AI in real-world products
• How to deploy and scale a variety of models with low latency and minimal overhead
• Building compound AI systems using models, feature stores, and vector search
• Monitoring, debugging, and governing production workloads
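
For context on what calling a served model looks like from the client side, here is a hedged sketch of invoking a Model Serving endpoint over REST; the endpoint name and input schema are assumptions, not Skyscanner's setup:

```python
# Hedged sketch: invoking a Mosaic AI Model Serving endpoint over REST.
# Workspace URL, endpoint name, and input schema are assumptions.
import os

import requests

workspace_url = os.environ["DATABRICKS_HOST"]  # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{workspace_url}/serving-endpoints/price-ranker/invocations",  # hypothetical name
    headers={"Authorization": f"Bearer {token}"},
    json={"dataframe_records": [{"origin": "LHR", "destination": "JFK"}]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```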

Modernizing Critical Infrastructure: AI and Data-Driven Solutions in Nuclear and Utility Operations

This session showcases how both Westinghouse Electric and Alabama Power Company are leveraging cloud-based tools, advanced analytics, and machine learning to transform operational resilience across the energy sector. In the first segment, we'll explore AI's crucial role in enhancing safety, efficiency, and compliance in nuclear operations through technologies like HiVE and Bertha, focusing on the unique reliability and credibility requirements of the regulated nuclear industry. We’ll then highlight how Alabama Power Company has modernized its grid management and storm preparedness by using Databricks to develop SPEAR and RAMP—applications that combine real-time data and predictive analytics to improve reliability, efficiency, and customer service.

Retail Genie: No-Code AI Apps for Empowering BI Users to be Self-Sufficient

Explore how Databricks AI/BI Genie revolutionizes retail analytics, empowering business users to become self-reliant data explorers. This session highlights no-code AI apps that create a conversational interface for retail data analysis. Genie spaces harness NLP and generative AI to convert business questions into actionable insights, bypassing complex SQL queries. We'll showcase retail teams effortlessly analyzing sales trends, inventory and customer behavior through Genie's intuitive interface. Witness real-world examples of AI/BI Genie's adaptive learning, enhancing accuracy and relevance over time. Learn how this technology democratizes data access while maintaining governance via Unity Catalog integration. Discover Retail Genie's impact on decision-making, accelerating insights and cultivating a data-driven retail culture. Join us to see the future of accessible, intelligent retail analytics in action.

Revolutionizing Banking Data, Analytics and AI: Building an Enterprise Data Hub With Databricks

Explore the transformative journey of a regional bank as it modernizes its enterprise data infrastructure amidst the challenges of legacy systems and past mergers and acquisitions. The bank is creating an Enterprise Data Hub using Deloitte's industry experience and the Databricks Data Intelligence Platform to drive growth, efficiency and Large Financial Institution readiness. This session will showcase how the new data hub will be a one-stop shop for LOB and enterprise needs, while unlocking advanced analytics and GenAI possibilities. Discover how this initiative is going to empower the ambitions of a regional bank to realize their “big bank muscle, small bank hustle.”

Scaling Real-Time Fraud Detection With Databricks: Lessons From DraftKings

At DraftKings, ensuring secure, fair gaming requires detecting fraud in real time with both speed and precision. In this talk, we’ll share how Databricks powers our fraud detection pipeline, integrating real-time streaming, machine learning and rule-based detection within a PySpark framework. Our system enables rapid model training, real-time inference and seamless feature transformation across historical and live data. We use shadow mode to test models and rules in live environments before deployment. Collaborating with Databricks, we are pushing the boundaries of online feature store performance and enhancing real-time PySpark capabilities. We'll cover PySpark-based feature transformations, real-time inference, scaling challenges and our migration from a homegrown system to Databricks. This session is for data engineers and ML practitioners optimizing real-time AI workloads, featuring a deep dive, code snippets and lessons from building and scaling fraud detection.
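
As a purely illustrative sketch of the kind of PySpark feature transformation described here (not DraftKings' code; every name is invented), a streaming aggregation with a rule-based flag computed ahead of model scoring might look like:

```python
# Illustrative only: windowed streaming features plus a rule-based flag.
# Assumes a Databricks notebook; table and column names are invented.
from pyspark.sql import functions as F

events = spark.readStream.table("bronze.bet_events")  # placeholder source

features = (
    events.withWatermark("event_ts", "10 minutes")
    .groupBy("user_id", F.window("event_ts", "5 minutes"))
    .agg(
        F.count("*").alias("bets_5m"),
        F.sum("stake").alias("stake_5m"),
    )
    .withColumn("velocity_flag", F.col("bets_5m") > 50)  # simple rule alongside ML
)

(
    features.writeStream.format("delta")
    .option("checkpointLocation", "/chk/fraud_features")
    .toTable("silver.fraud_features")
)
```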

Selectively Overwrite Data With Delta Lake’s Dynamic Insert Overwrite

Dynamic Insert Overwrite is an important Delta Lake feature that allows fine-grained updates by selectively overwriting specific rows, eliminating the need for full-table rewrites. For example, this capability is essential for:
• dbt-Databricks incremental models/workloads, enabling efficient data transformations by processing only new or updated records
• ETL for Slowly Changing Dimension (SCD) Type 2
In this lightning talk, we will:
• Introduce Dynamic Insert Overwrite: understand its functionality and how it works
• Explore key use cases: learn how it optimizes performance and reduces costs
• Share best practices: discover practical tips for leveraging this feature on Databricks, including on the cutting-edge Serverless SQL Warehouses
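
A minimal sketch of the idea (table and partition column are placeholders): with dynamic partition overwrite, only the partitions present in the incoming DataFrame are replaced, while every other partition is left untouched.

```python
# Minimal sketch: dynamic partition overwrite on a Delta table. Only partitions
# present in `updates` are rewritten; the rest of the table is untouched.
updates = spark.table("staging.daily_sales")  # rows for a handful of dates

(
    updates.write.format("delta")
    .mode("overwrite")
    .option("partitionOverwriteMode", "dynamic")
    .saveAsTable("prod.daily_sales")  # assumed partitioned by sale_date
)
```

The same behavior can also be enabled session-wide by setting spark.sql.sources.partitionOverwriteMode to dynamic, so individual writes don't need the per-write option.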

Self-Service Assortment and Space Analytics at Walmart Scale

Assortment and space analytics optimizes product selection and shelf allocation to boost sales, improve inventory management and enhance customer experience. However, challenges like evolving demand, data accuracy and operational alignment hinder success. Older approaches struggled due to siloed tools, slow performance and poor governance. Databricks' unified platform resolved these issues, enabling seamless data integration, high-performance analytics and governed sharing. The innovative AI/BI Genie interface empowered self-service analytics, driving adoption among non-technical users. This solution helped Walmart cut time to value by 90% and save $5.6M annually in FTE hours, increasing productivity. Looking ahead, AI agents will let store managers and merchants execute decisions via conversational interfaces, streamlining operations and enhancing accessibility. This transformation positions retailers to thrive in a competitive, customer-centric market.

Sponsored by: AWS | Buy With AWS Marketplace

AWS Marketplace is revolutionizing how enterprises worldwide discover, procure, and manage their software solutions. With access to more than 5,000 verified sellers offering software, data, and professional services, including industry leaders like Databricks, organizations can streamline procurement through flexible pricing models and simplified terms. The platform seamlessly integrates with AWS services while providing consolidated billing, centralized governance, and streamlined vendor management. Through innovations like Buy with AWS, customers can purchase directly from Partner websites, making software acquisition more efficient than ever. Join us to learn how AWS Marketplace is driving value for both customers and Partners, helping organizations accelerate their digital transformation while maintaining security and compliance.

Sponsored by: AWS | Ripple: Well-Architected Data & AI Platforms - AWS and Databricks in Harmony

Join us as we explore the well-architected framework for modern data lakehouse architecture, where AWS's comprehensive data, AI, and infrastructure capabilities align with Databricks' unified platform approach. Building upon core principles of Operational Excellence, Security, Reliability, Performance, and Cost Optimization, we'll demonstrate how Data and AI Governance alongside Interoperability and Usability enable organizations to build robust, scalable platforms. Learn how Ripple modernized its data infrastructure by migrating from a legacy Hadoop system to a scalable, real-time analytics platform using Databricks on AWS. This session covers the challenges of high operational costs, latency, and peak-time bottlenecks—and how Ripple achieved 80% cost savings and 55% performance improvements with Photon, Graviton, Delta Lake, and Structured Streaming.

Discover how SAP Business Data Cloud and Databricks can transform your business by unifying SAP and non-SAP data for advanced analytics and AI. In this session, we’ll highlight Optimizing Cash Flow with AI, where integrated diverse data sources and AI algorithms enable accurate cash flow forecasting to help businesses identify trends, prevent bottlenecks, and improve liquidity. You’ll also learn about the importance of high-quality, well-governed data as the foundation for reliable AI models and actionable reporting. Key takeaways:
• How to integrate and leverage SAP and external data in Databricks
• Using AI for predictive analytics and better decision-making
• Building a trusted data foundation to drive business performance
Leave this session with actionable strategies to optimize your data, enhance efficiency, and unlock new growth opportunities.

Sponsored by: Capital One Software | How Capital One Uses Tokenization to Protect Data

Modern companies are managing more data than ever before, and the need to derive value from that data is becoming more urgent with AI. But AI adoption is often limited due to data security challenges, and adding to this complexity is the need to remain compliant with evolving regulation. At Capital One, we’ve deployed tokenization to further secure our data without compromising performance. In this talk, we’ll discuss lessons learned from our tokenization journey and show how companies can tokenize the data in their Databricks environment.
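
To illustrate the general idea of vault-based tokenization (purely a sketch; this is not Capital One's implementation), a sensitive value can be swapped for a random token whose mapping lives only in a secured vault:

```python
# Purely illustrative: vault-style tokenization. Not Capital One's design.
import secrets

_vault: dict[str, str] = {}  # stand-in for a secured token vault


def tokenize(value: str) -> str:
    token = f"tok_{secrets.token_hex(8)}"
    _vault[token] = value  # only the vault can reverse the mapping
    return token


def detokenize(token: str) -> str:
    return _vault[token]


card = "4111-1111-1111-1111"
t = tokenize(card)
print(t)                      # downstream analytics sees only the token
assert detokenize(t) == card  # authorized systems recover the original
```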