talk-data.com

Topic: Cloud Computing

Tags: infrastructure, saas, iaas

4055 tagged

Activity Trend: peak of 471 activities/quarter (2020-Q1 to 2026-Q1)

Activities

4055 activities · Newest first

Sponsored by: LTIMindtree | 4 Strategies to Maximize SAP Data Value with Databricks and AI

As enterprises strive to become more data-driven, SAP continues to be central to their operational backbone. However, traditional SAP ecosystems often limit the potential of AI and advanced analytics due to fragmented architectures and legacy tools. In this session, we explore four strategic options for unlocking greater value from SAP data by integrating with Databricks and cloud-native platforms. Whether you're on ECC, S/4HANA, or transitioning from BW, learn how to modernize your data landscape, enable real-time insights, and power AI/ML at scale. Discover how SAP Business Data Cloud and SAP Databricks can help you build a unified, future-ready data and analytics ecosystem without compromising on scalability, flexibility, or cost-efficiency.

Stop Guessing, Spend Where It Counts: Data-Driven Decisions for High-Impact Investments on Databricks

Struggling with runaway cloud costs as your organization grows? Join us for an inside look at how Databricks’ own Data Platform team tackled escalating spend in some of the world’s largest workspaces — saving millions of dollars without sacrificing performance or user experience. We’ll share how we harnessed powerful features like System Tables, Workflows, Unity Catalog, and Photon to monitor and optimize resource usage, all while using data-driven decisions to improve efficiency and ensure we invest in the areas that truly drive business impact. You’ll hear about the real-world challenges we faced balancing governance with velocity and discover the custom tooling and best practices we developed to keep costs in check. By the end of this session, you’ll walk away with a proven roadmap for leveraging Databricks to control cloud spend at scale.
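As a rough illustration of the kind of spend analysis this session describes, the sketch below aggregates DBU consumption per workspace from rows shaped like Databricks' system.billing.usage table. The column names mirror that system table, but the data and workspace names here are invented for illustration.

```python
from collections import defaultdict

# Toy rows shaped like Databricks' system.billing.usage table
# (workspace_id, sku_name, usage_quantity in DBUs). Values are invented.
usage_rows = [
    {"workspace_id": "ws-analytics", "sku_name": "PREMIUM_JOBS_COMPUTE", "usage_quantity": 120.5},
    {"workspace_id": "ws-analytics", "sku_name": "PREMIUM_SQL_COMPUTE", "usage_quantity": 300.0},
    {"workspace_id": "ws-ml", "sku_name": "PREMIUM_JOBS_COMPUTE", "usage_quantity": 75.25},
]

def dbus_by_workspace(rows):
    """Sum DBU usage per workspace, highest consumers first."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["workspace_id"]] += row["usage_quantity"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(dbus_by_workspace(usage_rows))
# → [('ws-analytics', 420.5), ('ws-ml', 75.25)]
```

In practice the same aggregation would run as SQL over system.billing.usage in a scheduled Workflow, feeding a dashboard or alert.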

Use External Models in Databricks: Connecting to Azure, AWS, Google Cloud, Anthropic and More

In this session you will learn how to leverage a wide set of GenAI models in Databricks, including external connections to cloud vendors and other model providers. We will cover establishing connections to externally served models via the Mosaic AI Gateway, showcasing connections to Azure, AWS, and Google Cloud models, as well as model vendors like Anthropic, Cohere, AI21 Labs, and more. You will also discover best practices for model comparison, governance, and cost control on those model deployments.
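As a hedged sketch of what such an external-model connection looks like, the helper below builds the served-entity configuration that the MLflow deployments client accepts when registering an external chat model as a Databricks serving endpoint. The endpoint name and secret reference are placeholders; actually creating the endpoint requires a Databricks workspace with authentication configured.

```python
# Sketch of the config for a Databricks external-model endpoint.
# Secret reference and model names are placeholder assumptions.
def external_model_config(provider, model_name, secret_ref):
    """Build a served-entity config for an externally hosted chat model."""
    return {
        "served_entities": [{
            "external_model": {
                "provider": provider,        # e.g. "anthropic", "openai", "cohere"
                "name": model_name,          # provider-side model identifier
                "task": "llm/v1/chat",
                f"{provider}_config": {
                    f"{provider}_api_key": secret_ref,  # Databricks secret reference
                },
            }
        }]
    }

config = external_model_config(
    "anthropic", "claude-3-5-sonnet", "{{secrets/genai/anthropic_key}}"
)
print(config["served_entities"][0]["external_model"]["provider"])  # → anthropic

# In a workspace you would then (assumption: mlflow installed, Databricks auth set up):
# import mlflow.deployments
# client = mlflow.deployments.get_deploy_client("databricks")
# client.create_endpoint(name="claude-chat", config=config)
```

Routing all providers through one gateway like this is what makes cross-provider comparison and cost tracking tractable.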

Unlock the Potential of Your Enterprise Data With Zero-Copy Data Sharing, featuring SAP and Salesforce

Tired of data silos and the constant need to move copies of your data across different systems? Imagine a world where all your enterprise data is readily available in Databricks without the cost and complexity of duplication and ingestion. Our vision is to break down these silos by enabling seamless, zero-copy data sharing across platforms, clouds, and regions. This unlocks the true potential of your data for analytics and AI, empowering you to make faster, more informed decisions leveraging your most important enterprise data sets. In this session you will hear from Databricks, SAP, and Salesforce product leaders on how zero-copy data sharing can unlock the value of enterprise data, and explore how Delta Sharing makes this vision a reality, providing secure, zero-copy data access for enterprises.

- SAP Business Data Cloud: See Delta Sharing in action to unlock operational reporting, supply chain optimization, and financial planning.
- Salesforce Data Cloud: Enable customer analytics, churn prediction, and personalized marketing.
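For a sense of what zero-copy consumption looks like on the recipient side, the sketch below shows how the open-source delta-sharing Python client addresses a shared table: a profile file plus a three-part share.schema.table path. The profile filename, share, schema, and table names here are made-up examples.

```python
# How a Delta Sharing consumer addresses a shared table:
# "<profile-file>#<share>.<schema>.<table>".
# Profile path and object names below are illustrative placeholders.
def sharing_url(profile_path, share, schema, table):
    return f"{profile_path}#{share}.{schema}.{table}"

url = sharing_url("config.share", "sap_bdc", "finance", "gl_postings")
print(url)  # → config.share#sap_bdc.finance.gl_postings

# With the open-source client (pip install delta-sharing) you would then:
# import delta_sharing
# df = delta_sharing.load_as_pandas(url)  # reads the table without copying it first
```

The provider never ships a physical copy; the recipient reads the current table state directly through the sharing protocol.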

Streamlining AI Application Development With Databricks Apps

Think Databricks is just for data and models? Think again. In this session, you’ll see how to build and scale a full-stack AI app capable of handling thousands of queries per second entirely on Databricks. No extra cloud platforms, no patchwork infrastructure. Just one unified platform with native hosting, LLM integration, secure access, and built-in CI/CD. Learn how Databricks Apps, along with services like Model Serving, Jobs, and Gateways, streamline your architecture, eliminate boilerplate, and accelerate development, from prototype to production.

SAP and Databricks: Building Your Lakehouse Reference Architecture

SAP is the world's third-largest publicly traded software company by revenue, and recently launched the joint SAP Databricks "Business Data Cloud". See how it all works from a practitioner's perspective, including the reference architecture, a demo, and example customers, and see firsthand how the powerful suite of SAP applications benefits from a joint Databricks solution, with data more easily governed, discovered, shared, and used for AI/ML.

Deploying Unity Catalog OSS on Kubernetes: Simplifying Infrastructure Management

In modern data infrastructure, efficient and scalable data governance is essential for ensuring security, compliance, and accessibility. This session explores how to deploy Unity Catalog OSS on Kubernetes, leveraging its cloud-agnostic nature and efficient resource management. Helm simplifies Unity Catalog deployment by providing a streamlined installation process and easy configuration and credential management. The session will cover why Kubernetes is the ideal platform, provide a technical breakdown of Unity Catalog on Kubernetes, and include a live showcase of its seamless deployment process. By the end, participants will confidently configure and deploy Unity Catalog OSS in their preferred Kubernetes environment and integrate it into their existing infrastructure.
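Once the server is running in a cluster, clients typically talk to it over its REST API. The sketch below assembles the catalogs endpoint against an in-cluster service address; the service name, namespace, and port are assumptions about a particular Helm release, while the /api/2.1/unity-catalog prefix follows the open-source server's documented routes.

```python
# Assumed in-cluster address for a Unity Catalog OSS Helm release;
# adjust service name, namespace, and port to match your deployment.
BASE = "http://unity-catalog.uc.svc.cluster.local:8080/api/2.1/unity-catalog"

def catalogs_endpoint(base=BASE):
    """Return the list-catalogs URL for a Unity Catalog OSS server."""
    return f"{base}/catalogs"

print(catalogs_endpoint())

# From a pod inside the cluster (assumption: `requests` installed):
# import requests
# resp = requests.get(catalogs_endpoint(), timeout=10)
# for cat in resp.json().get("catalogs", []):
#     print(cat["name"])
```

Exposing the same endpoint through an Ingress or port-forward lets you verify a deployment from outside the cluster.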

This hands-on lab guides participants through the complete customer data analytics journey on Databricks, leveraging leading partner solutions: Fivetran, dbt Cloud, and Sigma. Attendees will learn how to:

- Seamlessly connect to Fivetran, dbt Cloud, and Sigma using Databricks Partner Connect
- Ingest data using Fivetran, transform and model data with dbt Cloud, and create interactive dashboards in Sigma, all on top of the Databricks Data Intelligence Platform
- Empower teams to make faster, data-driven decisions by streamlining the entire analytics workflow using an integrated, scalable, and user-friendly platform

Most organizations run complex cloud data architectures that silo applications, users, and data. Join this interactive hands-on workshop to learn how Databricks SQL allows you to operate a multi-cloud lakehouse architecture that delivers data warehouse performance at data lake economics, with up to 12x better price/performance than traditional cloud data warehouses.

Here’s what we’ll cover:

- How Databricks SQL fits in the Data Intelligence Platform, enabling you to operate a multicloud lakehouse architecture that delivers data warehouse performance at data lake economics
- How to manage and monitor compute resources, data access, and users across your lakehouse infrastructure
- How to query directly on your data lake using your tools of choice or the built-in SQL editor and visualizations
- How to use AI to increase productivity when querying, completing code, or building dashboards

Ask your questions during this hands-on lab, and the Databricks experts will guide you.

Lessons Learned: Building a Scalable Game Analytics Platform at Netflix

Over the past three years, Netflix has built a catalog of 100+ mobile and cloud games across TV, mobile and web platforms. With both internal and external studios contributing to this diverse ecosystem, building a robust game analytics platform became crucial for gaining insights into player behavior, optimizing game performance and driving member engagement.

In this talk, we’ll share our journey of building Netflix’s Game Analytics platform from the ground up. We’ll highlight key decisions around data strategy, such as whether to develop an in-house solution or adopt an external service. We’ll discuss the challenges of balancing developer autonomy with data integrity and the complexities of managing data contracts for custom game telemetry, with an emphasis on self-service analytics. Attendees will learn how the Games Data team navigated these challenges, the lessons learned and the trade-offs involved in building a multi-tenant data ecosystem that supports diverse stakeholders.

Sponsored by: Google Cloud | Unlock price-performance and efficiency on Google Cloud: Databricks & Axion in Action

Maximize the performance of your Databricks Platform with innovations on Google Cloud. Discover how Google's Arm-based Axion C4A virtual machines (VMs) deliver breakthrough price-performance and efficiency for Databricks, supercharging the Databricks Photon engine. Gain actionable strategies to optimize your Databricks deployments on Google Cloud.

Unity Catalog Deep Dive: Practitioner's Guide to Best Practices and Patterns

Join this deep dive session for practitioners on Unity Catalog, Databricks’ unified data governance solution, to explore its capabilities for managing data and AI assets across workflows. Unity Catalog provides fine-grained access control, automated lineage tracking, quality monitoring, policy enforcement, and observability at scale. Whether your focus is data pipelines, analytics, or machine learning and generative AI workflows, this session offers actionable insights on leveraging Unity Catalog’s open interoperability across tools and platforms to boost productivity and drive innovation. Learn governance best practices, including catalog configurations, access strategies for collaboration, and controls for securing sensitive data. Additionally, discover how to design effective multi-cloud and multi-region deployments to ensure global compliance.

Sponsored by: Domo, Inc | Enabling AI-Powered Business Solutions w/Databricks & Domo

Domo's Databricks integration seamlessly connects business users to both Delta Lake data and AI/ML models, eliminating technical barriers while maximizing performance. Domo's Cloud Amplifier optimizes data processing through pushdown SQL, while the Domo AI Services layer enables anyone to leverage both traditional ML and large language models directly from Domo. During this session, we’ll explore an AI solution around fraud detection to demonstrate the power of leveraging Domo on Databricks.

Unity Catalog Implementation & Evolution at Edward Jones

This presentation outlines the evolution of Databricks and its integration with cloud analytics at Edward Jones. It focuses on the transition from Cloud V1.x to Cloud V2.0, highlighting the challenges faced with the initial setup, the Unity Catalog implementation, and the improvements planned for the future, particularly in terms of data cataloging, architecture, and disaster recovery.

Highlights:

- Cloud Analytics Journey
- Current Setup (Cloud V1.x): utilizes a Medallion architecture customized to Edward Jones' needs; challenges and limitations identified with integration, limited catalogs, disaster recovery, etc.
- Cloud V2.0 Enhancements: modifications to storage and compute in the Medallion layers; next-level integration with enterprise suites; disaster recovery readiness
- Future outlook

Breaking Silos: Using SAP Business Data Cloud and Delta Sharing for Seamless Access to SAP Data in Databricks

We’re excited to share with you how SAP Business Data Cloud supports Delta Sharing to share SAP data securely and seamlessly with Databricks, with no complex ETL or data duplication required. This enables organizations to securely share SAP data for analytics and AI in Databricks while also supporting bidirectional data sharing back to SAP.

In this session, we’ll demonstrate the integration in action, followed by a discussion of how the global beauty group, Natura, will leverage this solution. Whether you’re looking to bring SAP data into Databricks for advanced analytics or build AI models on top of trusted SAP datasets, this session will show you how to get started, securely and efficiently.

Cross-Cloud Data Mesh with Delta Sharing and UniForm in Mercedes-Benz

In this presentation, we'll show how we achieved a unified development experience for teams working on Mercedes-Benz Data Platforms in AWS and Azure. We will demonstrate how we implemented Azure to AWS and AWS to Azure data product sharing (using Delta Sharing and Cloud Tokens), integration with AWS Glue Iceberg tables through UniForm and automation to drive everything using Azure DevOps Pipelines and DABs. We will also show how to monitor and track cloud egress costs and how we present a consolidated view of all the data products and relevant cost information. The end goal is to show how customers can offer the same user experience to their engineers and not have to worry about which cloud or region the Data Product lives in. Instead, they can enroll in the data product through self-service and have it available to them in minutes, regardless of where it originates.
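A key building block of this cross-cloud pattern is UniForm, which makes a Delta table readable as Iceberg (here, by AWS Glue). As a hedged sketch, the helper below generates the ALTER TABLE statement that turns on Iceberg metadata generation; the table name is a placeholder, and the property names follow the public UniForm documentation.

```python
# Sketch of enabling UniForm (Iceberg metadata generation) on a Delta table.
# Table name is a placeholder; property names per the UniForm docs.
def enable_uniform_sql(table):
    """Return the SQL that enables Iceberg reads on an existing Delta table."""
    return (
        f"ALTER TABLE {table} SET TBLPROPERTIES ("
        "'delta.enableIcebergCompatV2' = 'true', "
        "'delta.universalFormat.enabledFormats' = 'iceberg')"
    )

stmt = enable_uniform_sql("catalog.sales.orders")
print(stmt)

# On a Databricks cluster you would run: spark.sql(stmt)
```

Once enabled, engines on the other cloud can register the table's Iceberg metadata and read it in place, which is what removes the copy step from the mesh.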

Smashing Silos, Shaping the Future: Data for All in the Next-Gen Ecosystem

A successful data strategy requires the right platform and the ability to empower the broader user community by creating simple, scalable and secure patterns that lower the barrier to entry while ensuring robust data practices. Guided by the belief that everyone is a data person, we focus on breaking down silos, democratizing access and enabling distributed teams to contribute through a federated "data-as-a-product" model. We’ll share the impact and lessons learned in creating a single source of truth on Unity Catalog, consolidated from diverse sources and cloud platforms. We’ll discuss how we streamlined governance with Databricks Apps, Workflows and native capabilities, ensuring compliance without hindering innovation. We’ll also cover how we maximize the value of that catalog by leveraging semantics to enable trustworthy, AI-driven self-service in AI/BI dashboards and downstream apps. Come learn how we built a next-gen data ecosystem that empowers everyone to be a data person.

Sponsored by: Genpact | Powering Change at GE Vernova: Inside One of the World’s Largest Databricks Migrations

How do you transform legacy data into a launchpad for next-gen innovation? GE Vernova is tackling it by rapidly migrating from outdated platforms to Databricks, building one of the world’s largest cloud data implementations. This overhaul wasn’t optional. Scaling AI, cutting technical debt, and slashing license costs demanded a bold, accelerated approach. Led by strategic decisions from the CDO and powered by Genpact’s AI Gigafactory, the migration is tackling 35+ business domains and sub-domains, 60,000+ data objects, 15,000+ jobs, and 3,000+ reports from 120+ diverse data sources to deliver a multi-tenant platform with unified governance. The anticipated results? Faster insights, seamless data sharing, and a standardized platform built for AI at scale. This session explores how Genpact and Databricks are fueling GE Vernova’s mission to deliver The Energy to Change the World, and what it takes to get there when speed, scale, and complexity are non-negotiable.