talk-data.com

Topic: Analytics
Tags: data_analysis, insights, metrics
4552 tagged activities

Activity Trend: peak of 398 activities per quarter (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

Supercharge Your Enterprise BI: A Practitioner’s Guide for Migrating to AI/BI

Are you striving to build a data-driven culture while managing costs and reducing reporting latency? Are your BI operations bogged down by complex data movements rather than delivering insights? Databricks IT faced these challenges in 2024 and embarked on an ambitious journey to make Databricks AI/BI our enterprise-wide reporting platform. In just two quarters, we migrated 2,000 dashboards from a traditional BI tool — without disrupting business operations. We’ll share how we executed this large-scale transition cost-effectively, ensuring seamless change management and empowering non-technical users to leverage AI/BI. You’ll gain insights into key migration strategies that minimized disruption and optimized efficiency, best practices for user adoption and training to drive self-service analytics, and ways to measure success with clear adoption metrics and business impact. Join us to learn how your organization can achieve the same transformation with AI-powered enterprise reporting.

Unifying GTM Analytics: The Strategic Shift to Native Analytics and AI/BI Dashboards at Databricks

The GTM team at Databricks recently launched the GTM Analytics Hub—a native AI/BI platform designed to centralize reporting, streamline insights, and deliver personalized dashboards based on user roles and business needs. Databricks Apps also played a crucial role in this integration by embedding AI/BI Dashboards directly into internal tools and applications, streamlining access to insights without disrupting workflows. This seamless embedding capability allows users to interact with dashboards within their existing platforms, enhancing productivity and collaboration. Furthermore, AI/BI Dashboards leverage Databricks' unified data and governance framework. Join us to learn how we’re using Databricks to build for Databricks—transforming GTM analytics with AI/BI Dashboards, and what it takes to drive scalable, user-centric analytics adoption across the business.
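
As a rough illustration of the embedding pattern (not the GTM Analytics Hub's actual code), the sketch below shows a small internal Flask tool that renders a published AI/BI dashboard inside an iframe. The embed URL, route and port are placeholders supplied via configuration.

```python
# Toy internal tool that embeds a published AI/BI dashboard in an iframe.
# DASHBOARD_EMBED_URL is a placeholder for the dashboard's published URL.
import os
from flask import Flask

app = Flask(__name__)
DASHBOARD_EMBED_URL = os.environ["DASHBOARD_EMBED_URL"]

@app.route("/")
def home():
    # The dashboard renders inside the internal tool, so users never leave their workflow.
    return f"""
    <h1>Account 360</h1>
    <iframe src="{DASHBOARD_EMBED_URL}" width="100%" height="800" frameborder="0"></iframe>
    """

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```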

Unlock the Potential of Your Enterprise Data With Zero-Copy Data Sharing, featuring SAP and Salesforce

Tired of data silos and the constant need to move copies of your data across different systems? Imagine a world where all your enterprise data is readily available in Databricks without the cost and complexity of duplication and ingestion. Our vision is to break down these silos by enabling seamless, zero-copy data sharing across platforms, clouds, and regions. This unlocks the true potential of your data for analytics and AI, empowering you to make faster, more informed decisions leveraging your most important enterprise data sets. In this session you will hear from Databricks, SAP, and Salesforce product leaders on how zero-copy data sharing can unlock the value of enterprise data, and explore how Delta Sharing makes this vision a reality by providing secure, zero-copy data access for enterprises. SAP Business Data Cloud: see Delta Sharing in action to unlock operational reporting, supply chain optimization, and financial planning. Salesforce Data Cloud: enable customer analytics, churn prediction, and personalized marketing.
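
For a concrete sense of what zero-copy consumption can look like, here is a minimal sketch using the open-source delta-sharing Python client. The profile file and the share, schema and table names are hypothetical placeholders, not the actual SAP or Salesforce shares.

```python
# Minimal sketch: a recipient reads a shared table without copying or ingesting it.
import delta_sharing

# Profile file issued by the data provider; it holds the sharing endpoint and a token.
profile = "config.share"

# Discover what has been shared with this recipient.
client = delta_sharing.SharingClient(profile)
for table in client.list_all_tables():
    print(table)

# Load one shared table into pandas (use load_as_spark on a cluster for large tables).
url = f"{profile}#sap_share.operational_reporting.sales_orders"  # hypothetical names
df = delta_sharing.load_as_pandas(url)
print(df.head())
```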

Sponsored by: Prophecy | Ready for GenAI? Survey Says Governed Self-Service Is the New Playbook for Data Teams

Are data teams ready for AI? Prophecy’s exclusive survey, “The Impact of GenAI on Data Teams,” gives the clearest picture yet of GenAI’s potential in data management and what’s standing in the way. The top two obstacles? Poor governance and slow access to high-quality data. The message is clear: modernizing your data platform with Databricks is essential, but it’s only the beginning. To unlock the power of AI and analytics, organizations must deliver governed, self-service access to clean, trusted data. Traditional data prep tools introduce risks around security, quality, and cost. It’s no wonder data leaders cited data transformation as the area where GenAI will make the biggest impact. To deliver what’s needed, teams must shift to governed self-service, where data analysts and scientists move fast while staying within IT’s guardrails. Join us to hear more findings from the survey and learn how leading organizations are staying ahead of the curve, using GenAI to reshape how data gets done.

Join us to see how the powerful combination of ThoughtSpot's agentic analytics platform and the Databricks Data Intelligence Platform is changing the game for data-driven organizations. We'll demonstrate how DataSpot breaks down technical barriers to insight. You'll learn how to get trusted, real-time answers thanks to the seamless integration between ThoughtSpot's semantic layer and Databricks Unity Catalog. This session is for anyone looking to leverage data more effectively, whether you're a business leader seeking AI-driven insights, a data scientist building models in Python, or a product owner creating intelligent applications.

Adobe’s Security Lakehouse: OCSF, Data Efficiency and Threat Detection at Scale

This session will explore how Adobe uses a sophisticated data security architecture built on the Databricks Data Intelligence Platform, along with the Open Cybersecurity Schema Framework (OCSF), to enable scalable, real-time threat detection across more than 10 PB of security data. We’ll compare different approaches to OCSF implementation and demonstrate how Adobe processes massive security datasets efficiently — reducing query times by 18%, maintaining 99.4% SLA compliance, and supporting 286 security users across 17 teams with over 4,500 daily queries. By using the platform’s serverless compute, scalable architecture, and LLM-powered recommendations, Adobe has significantly improved processing speed and efficiency, resulting in substantial cost savings. We’ll also highlight how OCSF enables advanced cross-tool analytics and automation, streamlining investigations. Finally, we’ll introduce Databricks’ new open-source OCSF toolkit for scalable security data normalization and invite the community to contribute.
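
As an illustration of the kind of normalization OCSF implies (not Adobe's actual pipeline), the PySpark sketch below maps a hypothetical raw firewall-log table into a few OCSF-style columns and appends it to a governed Delta table. Source column names, table names and the class_uid value are assumptions for the example.

```python
# Illustrative OCSF-style normalization of raw firewall events with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.table("security_raw.firewall_events")  # hypothetical source table

ocsf = raw.select(
    F.lit(4001).alias("class_uid"),                       # placeholder event class
    F.col("event_time").cast("timestamp").alias("time"),
    F.col("src_ip").alias("src_endpoint_ip"),
    F.col("dst_ip").alias("dst_endpoint_ip"),
    F.col("action").alias("activity_name"),
    F.to_json(F.struct(*raw.columns)).alias("raw_data"),  # keep the original record
)

# Partition by day so investigations scan only the relevant slices.
(ocsf.withColumn("event_day", F.to_date("time"))
     .write.mode("append")
     .partitionBy("event_day")
     .saveAsTable("security_ocsf.network_activity"))
```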

Build AI-Powered Applications Natively on Databricks

Discover how to build and deploy AI-powered applications natively on the Databricks Data Intelligence Platform. This session introduces best practices and a standard reference architecture for developing production-ready apps using popular frameworks like Dash, Shiny, Gradio, Streamlit and Flask. Learn how to leverage agents for orchestration and explore primary use cases supported by Databricks Apps, including data visualization, AI applications, self-service analytics and data quality monitoring. With serverless deployment and built-in governance through Unity Catalog, Databricks Apps enables seamless integration with your data and AI models, allowing you to focus on delivering impactful solutions without the complexities of infrastructure management. Whether you're a data engineer or an app developer, this session will equip you with the knowledge to create secure, scalable and efficient applications within a Databricks environment.
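
To make the pattern concrete, here is a minimal Streamlit sketch of the kind of app Databricks Apps can host, pulling a governed table through the Databricks SQL connector. The environment variable names, warehouse HTTP path and table name are placeholders to adapt to your workspace.

```python
# Minimal Streamlit app: query a Unity Catalog table and chart the result.
import os
import pandas as pd
import streamlit as st
from databricks import sql  # databricks-sql-connector

st.title("Daily orders")

@st.cache_data(ttl=600)
def load_orders() -> pd.DataFrame:
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_HOST"],
        http_path=os.environ["WAREHOUSE_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT order_date, count(*) AS orders "
                "FROM main.sales.orders GROUP BY order_date ORDER BY order_date"
            )
            rows = cur.fetchall()
    return pd.DataFrame(rows, columns=["order_date", "orders"])

df = load_orders()
st.line_chart(df.set_index("order_date"))
```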

This hands-on lab guides participants through the complete customer data analytics journey on Databricks, leveraging leading partner solutions: Fivetran, dbt Cloud, and Sigma. Attendees will learn how to seamlessly connect to Fivetran, dbt Cloud, and Sigma using Databricks Partner Connect; ingest data using Fivetran, transform and model data with dbt Cloud, and create interactive dashboards in Sigma, all on top of the Databricks Data Intelligence Platform; and empower teams to make faster, data-driven decisions by streamlining the entire analytics workflow using an integrated, scalable, and user-friendly platform.

Most organizations run complex cloud data architectures that silo applications, users and data. Join this interactive hands-on workshop to learn how Databricks SQL allows you to operate a multi-cloud lakehouse architecture that delivers data warehouse performance at data lake economics — with up to 12x better price/performance than traditional cloud data warehouses. Here’s what we’ll cover: how Databricks SQL fits in the Data Intelligence Platform, enabling you to operate a multicloud lakehouse architecture that delivers data warehouse performance at data lake economics; how to manage and monitor compute resources, data access and users across your lakehouse infrastructure; how to query directly on your data lake using your tools of choice or the built-in SQL editor and visualizations; and how to use AI to increase productivity when querying, completing code or building dashboards. Ask your questions during this hands-on lab, and the Databricks experts will guide you.
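
As a small, hedged example of querying directly on the data lake, the sketch below reads raw Parquet files in place with Spark SQL and saves the aggregate as a table that dashboards and the SQL editor can reuse. The cloud path and table names are invented for illustration.

```python
# Query files in place -- no ingestion step -- then persist the result as a table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

daily_events = spark.sql("""
    SELECT date(event_time) AS event_day, count(*) AS events
    FROM parquet.`s3://my-landing-bucket/clickstream/`
    GROUP BY date(event_time)
""")

daily_events.write.mode("overwrite").saveAsTable("analytics.web.daily_events")
```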

Data is the backbone of modern decision-making, but centralizing it is only the tip of the iceberg. Entitlements, secure sharing and just-in-time availability are critical challenges to any large-scale platform. Join Goldman Sachs as we reveal how our Legend Lakehouse, coupled with Databricks, overcomes these hurdles to deliver high-quality, governed data at scale. By leveraging an open table format (Apache Iceberg) and open catalog format (Unity Catalog), we ensure platform interoperability and vendor neutrality. Databricks Unity Catalog then provides a robust entitlement system that aligns with our data contracts, ensuring consistent access control across producer and consumer workspaces. Finally, Legend functions, integrating with Databricks User Defined Functions (UDF), offer real-time data enrichment and secure transformations without exposing raw datasets. Discover how these components unite to streamline analytics, bolster governance and power innovation.
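
A hedged sketch of the two building blocks mentioned above: an entitlement expressed as Unity Catalog grants, and a SQL UDF that exposes a derived value rather than the raw column. The catalog, group and function names are illustrative only, not Goldman Sachs' actual objects.

```python
# Entitlements and enrichment, sketched with Unity Catalog SQL run from PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Entitlement: a consumer group may read one curated schema and nothing else.
spark.sql("GRANT USE CATALOG ON CATALOG trading TO `risk-analysts`")
spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA trading.curated TO `risk-analysts`")

# Enrichment: consumers call a function instead of reading raw notional values.
spark.sql("""
    CREATE OR REPLACE FUNCTION trading.curated.exposure_band(notional DOUBLE)
    RETURNS STRING
    RETURN CASE
        WHEN notional >= 1e8 THEN 'HIGH'
        WHEN notional >= 1e6 THEN 'MEDIUM'
        ELSE 'LOW'
    END
""")
```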

Lessons Learned: Building a Scalable Game Analytics Platform at Netflix

Over the past three years, Netflix has built a catalog of 100+ mobile and cloud games across TV, mobile and web platforms. With both internal and external studios contributing to this diverse ecosystem, building a robust game analytics platform became crucial for gaining insights into player behavior, optimizing game performance and driving member engagement. In this talk, we’ll share our journey of building Netflix’s Game Analytics platform from the ground up. We’ll highlight key decisions around data strategy, such as whether to develop an in-house solution or adopt an external service. We’ll discuss the challenges of balancing developer autonomy with data integrity and the complexities of managing data contracts for custom game telemetry, with an emphasis on self-service analytics. Attendees will learn how the Games Data team navigated these challenges, the lessons learned and the trade-offs involved in building a multi-tenant data ecosystem that supports diverse stakeholders.

Payer Digital Transformation: The Impact of Data + AI

Payer organizations are rapidly embracing digital transformation, leveraging data and AI to drive operational efficiency, improve member experiences and enhance decision-making. This session explores how advanced analytics, robust data governance and AI-powered insights are enabling payers to streamline claims processing, personalize member engagement, manage pharmacy operations, and optimize care management. Thought leaders will share real-world examples of data-driven innovation, discuss strategies for overcoming interoperability and privacy challenges, and highlight the future potential of AI in reshaping the payer landscape.

Revolutionizing PepsiCo BI Capabilities: From Traditional BI to Next-Gen Analytics Powerhouse

This session will provide an in-depth overview of how PepsiCo, a global leader in food and beverage, transformed its outdated data platform into a modern, unified and centralized data and AI-enabled platform using the Databricks SQL serverless environment. Through three distinct implementations delivered at PepsiCo in 2024, we will demonstrate how the PepsiCo Data Analytics & AI Group unlocked pivotal capabilities on the new platform, delivering diverse data-driven insights to the business, reducing operational expenses and enhancing overall performance.

Take it to the Limit: Art of the Possible in AI/BI

Think you know everything AI/BI can do? Think again. This session explores the art of the possible with Databricks AI/BI Dashboards and Genie, going beyond traditional analytics to unleash the full power of the lakehouse. From incorporating AI into dashboards to handling large-scale data with ease to delivering insights seamlessly to end users — we’ll showcase creative approaches that unlock insights and real business outcomes. Perfect for adventurous data professionals looking to push limits and think outside the box.

Transforming Bio-Pharma Manufacturing: Eli Lilly's Data-Driven Journey With Databricks

Eli Lilly and Company, a leading bio-pharma company, is revolutionizing manufacturing with next-gen fully digital sites. Lilly and Tredence have partnered to establish a Databricks-powered Global Manufacturing Data Fabric (GMDF), laying the groundwork for transformative data products used by various personas at sites and globally. By integrating data from various manufacturing systems into a unified data model, GMDF has delivered actionable insights across several use cases such as batch release by exception, predictive maintenance, anomaly detection, process optimization and more. Our serverless architecture leverages Databricks Auto Loader for real-time data streaming, PySpark for automation and Unity Catalog for governance, ensuring seamless data processing and optimization. This platform is the foundation for data-driven processes, self-service analytics, AI and more. This session will provide details on the data architecture and strategy and share a few of the use cases delivered.
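
For readers unfamiliar with the ingestion pattern named above, here is a minimal Auto Loader sketch in the spirit of that pipeline (not Lilly's actual code). The volume paths, schema location, checkpoint location and target table are hypothetical.

```python
# Incrementally ingest new telemetry files with Auto Loader and land them in a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/mfg/gmdf/_schemas/equipment")
    .load("/Volumes/mfg/gmdf/landing/equipment_telemetry/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/Volumes/mfg/gmdf/_checkpoints/equipment")
    .trigger(availableNow=True)  # incremental batch; use a processingTime trigger for continuous runs
    .toTable("mfg.silver.equipment_telemetry")
)
```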

Unity Catalog Deep Dive: Practitioner's Guide to Best Practices and Patterns

Join this deep dive session for practitioners on Unity Catalog, Databricks’ unified data governance solution, to explore its capabilities for managing data and AI assets across workflows. Unity Catalog provides fine-grained access control, automated lineage tracking, quality monitoring, policy enforcement and observability at scale. Whether your focus is data pipelines, analytics or machine learning and generative AI workflows, this session offers actionable insights on leveraging Unity Catalog’s open interoperability across tools and platforms to boost productivity and drive innovation. Learn governance best practices, including catalog configurations, access strategies for collaboration and controls for securing sensitive data. Additionally, discover how to design effective multi-cloud and multi-region deployments to ensure global compliance.
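
As a small, non-authoritative sketch of the catalog configuration and sensitive-data controls this kind of session covers, the snippet below creates an environment catalog and applies a column mask. All object, location and group names are placeholders.

```python
# Catalog layout plus a column mask for sensitive data, via Unity Catalog SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One catalog per environment keeps prod and non-prod data cleanly separated.
spark.sql("CREATE CATALOG IF NOT EXISTS prod MANAGED LOCATION 's3://corp-uc-prod/'")
spark.sql("CREATE SCHEMA IF NOT EXISTS prod.finance")

# Mask a sensitive column for everyone outside a privileged group.
spark.sql("""
    CREATE OR REPLACE FUNCTION prod.finance.mask_ssn(ssn STRING)
    RETURNS STRING
    RETURN CASE WHEN is_account_group_member('payroll_admins') THEN ssn ELSE '***-**-****' END
""")
spark.sql("ALTER TABLE prod.finance.employees ALTER COLUMN ssn SET MASK prod.finance.mask_ssn")
```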

AI for BI without the BS

Stuck on a treadmill of endless report building requests? Wondering how you can ship reliable AI products to internal users and even customers? Omni is a BI and embedded analytics platform on Databricks that lets users answer their own data questions – sometimes with a little AI help. No magic, no miracles – just smart tooling that cuts through the noise and leverages well-known concepts (semantic layer, anyone?) to improve accuracy and delight users. This talk is your blueprint for getting reliable AI use cases into production and reaching the promised land of contagious self-service.

Sponsored by: Deloitte | Transforming Nestlé USA’s (NUSA) data platform to unlock new analytics and GenAI capabilities

Nestlé USA, a division of the world’s largest food and beverage company, Nestlé S.A., has embarked on a transformative journey to unlock GenAI capabilities on its data platform. Deloitte, Databricks, and Nestlé have collaborated on a data platform modernization program to address gaps in Nestlé’s existing data platform. This joint effort introduces new possibilities and capabilities, ranging from developing advanced machine learning models to implementing Unity Catalog and adopting Lakehouse Federation, all while adhering to confidentiality protocols. With help from Deloitte and Databricks, Nestlé USA is now able to meet its advanced enterprise analytics and AI needs with the Databricks Data Intelligence Platform.

Sponsored by: Sigma | Trading Spreadsheets for Speed: TradeStation’s Self-Service Revolution

To meet the growing internal demand for accessible, reliable data, TradeStation migrated from fragmented, spreadsheet-driven workflows to a scalable, self-service analytics framework powered by Sigma on Databricks. This transition enabled business and technical users alike to interact with governed data models directly on the lakehouse, eliminating data silos and manual reporting overhead. In brokerage trading operations, the integration supports robust risk management, automates key operational workflows, and centralizes collaboration across teams. By leveraging Sigma’s intuitive interface on top of Databricks’ scalable compute and unified data architecture, TradeStation has accelerated time-to-insight, improved reporting consistency, and empowered teams to operationalize data-driven decisions at scale.

Unity Catalog Implementation & Evolution at Edward Jones

This presentation outlines the evolution of Databricks and its integration with cloud analytics at Edward Jones. It focuses on the transition from Cloud V1.x to Cloud V2.0, highlighting the challenges faced with the initial setup, the Unity Catalog implementation and the improvements planned for the future, particularly around data cataloging, architecture and disaster recovery. Highlights include the cloud analytics journey; the current setup (Cloud V1.x), which uses a Medallion architecture customized to Edward Jones' needs, along with the challenges and limitations identified around integration, limited catalogs and disaster recovery; Cloud V2.0 enhancements, covering modifications to storage and compute in the Medallion layers, next-level integration with enterprise suites and disaster recovery readiness; and the future outlook.