talk-data.com

Topic: Databricks
Tags: big_data, analytics, spark
Tagged activities: 509

Activity Trend: 515 peak/qtr (2020-Q1 to 2026-Q1)

Activities

Showing filtered results

Filtering by: Data + AI Summit 2025

How We Turned 200+ Business Users Into Analysts With AI/BI Genie

AI/BI Genie has transformed self-service analytics for the Databricks Marketing team. This user-friendly conversational AI tool empowers marketers to perform advanced data analysis using natural language — no SQL required. By reducing reliance on data teams, Genie increases productivity and enables faster, data-driven decisions across the organization. But realizing Genie’s full potential takes more than just turning it on. In this session, we’ll share the end-to-end journey of implementing Genie for over 200 marketing users, including lessons learned, best practices and the real business impact of this Databricks-on-Databricks solution. Learn how Genie democratizes data access, enhances insight generation and streamlines decision-making at scale.

Intelligent Document Processing: Building AI, BI, and Analytics Systems on Unstructured Data

Most enterprise data is trapped in unstructured formats — documents, PDFs, scanned images and tables — making it difficult to access, analyze and use. This session shows how to unlock that hidden value by building intelligent document processing workflows on the Databricks Data Intelligence Platform. You’ll learn how to ingest unstructured content using Lakeflow Connect, extract structured data with AI Parse — even from complex tables and scanned documents — and apply analytics or AI to this newly structured data. What you’ll learn:
· How to build scalable pipelines that transform unstructured documents into structured tables
· Techniques for automating document workflows with Databricks tools
· Strategies for maintaining quality and governance with Unity Catalog
· Real-world examples of AI applications built with intelligent document processing
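
A minimal sketch of such a pipeline, assuming the ai_parse_document AI function is enabled in your workspace and using a hypothetical Unity Catalog volume and table; this is illustrative, not the exact workflow shown in the session:

    # Read raw PDFs from a Unity Catalog volume (path is hypothetical) and
    # extract structured content with ai_parse_document. `spark` is the
    # session predefined in Databricks notebooks.
    from pyspark.sql import functions as F

    raw_docs = (
        spark.read.format("binaryFile")
        .load("/Volumes/main/docs/contracts/")        # hypothetical landing volume
    )

    parsed = raw_docs.select(
        F.col("path"),
        F.expr("ai_parse_document(content)").alias("parsed")   # assumes the AI function is available
    )

    # Persist the structured output as a governed table for analytics or AI.
    parsed.write.mode("overwrite").saveAsTable("main.docs.contracts_parsed")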

Introducing Lakeflow: The Future of Data Engineering on Databricks

Join us to explore Lakeflow, Databricks' end-to-end solution for simplifying and unifying the most complex data engineering workflows. This session builds on keynote announcements, offering an accessible introduction for newcomers while emphasizing the transformative value Lakeflow delivers. We’ll cover:
· What is Lakeflow? – A cohesive overview of its components: Lakeflow Connect, Lakeflow Declarative Pipelines, and Lakeflow Jobs.
· Core Capabilities in Action – Live demos showcasing no-code data ingestion, code-optional declarative pipelines, and unified, end-to-end orchestration.
· Vision for the Future – Unveiling the roadmap, introducing no-code and open-source initiatives.
Discover how Lakeflow equips data teams with a seamless experience for ingestion, transformation, and orchestration, reducing complexity and driving productivity. By unifying these capabilities, Lakeflow lays the groundwork for scalable, reliable, efficient data pipelines in a governed and high-performing environment.
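
For orientation, here is a small sketch of a declarative pipeline written with the dlt Python API (the programming interface behind Lakeflow Declarative Pipelines); the source path, column names and expectations are hypothetical:

    # Declarative pipeline sketch: ingest with Auto Loader, then clean with an expectation.
    # This code runs inside a pipeline, where `spark` and the dlt module are provided.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw events ingested incrementally with Auto Loader")
    def raw_events():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/ingest/events/")      # hypothetical landing path
        )

    @dlt.table(comment="Cleaned events")
    @dlt.expect_or_drop("valid_timestamp", "event_ts IS NOT NULL")   # drop rows failing the check
    def clean_events():
        return dlt.read_stream("raw_events").withColumn("ingested_at", F.current_timestamp())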

Managing Databricks at Scale

T-Mobile’s leadership in 5G innovation and its rapid growth in the fixed wireless business have led to an exponential increase in data, reaching 100s of terabytes daily. This session explores how T-Mobile uses Databricks to manage this data efficiently, focusing on scalable architecture with Delta Lake, auto-scaling clusters, performance optimization through data partitioning and caching, and comprehensive data governance with Unity Catalog. Additionally, it covers cost management, collaborative tools and AI-driven productivity features, highlighting how these strategies empower T-Mobile to innovate, streamline operations and maximize data impact across network optimization, community support, energy management and more.
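
As a small illustration of the partitioning and file-layout techniques mentioned above (table, path and column names are hypothetical, not T-Mobile's actual schema):

    # Partition a large Delta table by date, then compact and co-locate its files.
    events_df = spark.table("main.network.events_raw")       # hypothetical source table

    (
        events_df.write
        .partitionBy("event_date")      # partition by a low-cardinality date column
        .mode("overwrite")
        .saveAsTable("main.network.events")
    )

    # Compact small files and cluster rows by a frequently filtered column.
    spark.sql("OPTIMIZE main.network.events ZORDER BY (site_id)")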

Multi-Format, Multi-Table, Multi-Statement Transactions on Unity Catalog

Get a first look at multi-statement transactions in Databricks. In this session, we will dive into their capabilities, exploring how multi-statement transactions enable atomic updates across multiple tables in your data pipelines, ensuring data consistency and integrity for complex operations. We will also share how we are enabling unified transactions across Delta Lake and Iceberg with Unity Catalog — powering our vision for an open and interoperable lakehouse.

Scaling Generative AI: Batch Inference Strategies for Foundation Models

Curious how to apply resource-intensive generative AI models across massive datasets without breaking the bank? This session reveals efficient batch inference strategies for foundation models on Databricks. Learn how to architect scalable pipelines that process large volumes of data through LLMs, text-to-image models and other generative AI systems while optimizing for throughput, cost and quality. Key takeaways:
· Implementing efficient batch processing patterns for foundation models using AI functions
· Optimizing token usage and prompt engineering for high-volume inference
· Balancing compute resources between CPU preprocessing and GPU inference
· Techniques for parallel processing and chunking large datasets through generative models
· Managing model weights and memory requirements across distributed inference tasks
You'll discover how to process any scale of data through your generative AI models efficiently.
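
A minimal batch-inference sketch using the ai_query AI function; the table, column and serving-endpoint names are illustrative and would need to match your workspace:

    # Score a large table of support tickets with an LLM served on Databricks.
    from pyspark.sql import functions as F

    tickets = spark.table("main.support.tickets")             # hypothetical input table

    summarized = tickets.withColumn(
        "summary",
        F.expr(
            "ai_query('databricks-meta-llama-3-1-8b-instruct', "   # hypothetical endpoint name
            "concat('Summarize this ticket in one sentence: ', body))"
        ),
    )

    summarized.write.mode("overwrite").saveAsTable("main.support.ticket_summaries")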

Serverless as the New "Easy Button": How HP Inc. Used Serverless to Turbocharge Their Data Pipeline

How do you wrangle over 8TB of granular “hit-level” website analytics data with hundreds of columns, all while eliminating the overhead of cluster management, decreasing runtime and saving money? In this session, we’ll dive into how we helped HP Inc. use Databricks serverless compute and Lakeflow Declarative Pipelines to streamline Adobe Analytics data ingestion while making it faster, cheaper and easier to operate. We’ll walk you through our full migration story — from managing unwieldy custom-defined AWS-based Apache Spark™ clusters to spinning up Databricks serverless pipelines and workflows with on-demand scalability and near-zero overhead. If you want to simplify infrastructure, optimize performance and get more out of your Databricks workloads, this session is for you.

Smashing Silos, Shaping the Future: Data for All in the Next-Gen Ecosystem

A successful data strategy requires the right platform and the ability to empower the broader user community by creating simple, scalable and secure patterns that lower the barrier to entry while ensuring robust data practices. Guided by the belief that everyone is a data person, we focus on breaking down silos, democratizing access and enabling distributed teams to contribute through a federated "data-as-a-product" model. We’ll share the impact and lessons learned in creating a single source of truth on Unity Catalog, consolidated from diverse sources and cloud platforms. We’ll discuss how we streamlined governance with Databricks Apps, Workflows and native capabilities, ensuring compliance without hindering innovation. We’ll also cover how we maximize the value of that catalog by leveraging semantics to enable trustworthy, AI-driven self-service in AI/BI dashboards and downstream apps. Come learn how we built a next-gen data ecosystem that empowers everyone to be a data person.

Sponsored by: DataNimbus | Building an AI Platform in 30 Days and Shaping the Future with Databricks

Join us as we dive into how Turnpoint Services, in collaboration with DataNimbus, built an Intelligence Platform on Databricks in just 30 days. We'll explore features like MLflow, LLMs, MLOps, Model Registry, Unity Catalog & Dashboard Alerts that powered AI applications such as Demand Forecasting, Customer 360 & Review Automation. Turnpoint’s transformation enabled data-driven decisions, ops efficiency & a better customer experience. Building a modern data foundation on Databricks optimizes resource allocation & drives engagement. We’ll also introduce AI Blocks, an innovation in DataNimbus Designer: modular, prompt-driven smart transformers for text data, built visually & deployed directly within Databricks. These capabilities push the boundaries of what's possible on the Databricks platform. Attendees will gain practical insights, whether you're beginning your AI journey or looking to accelerate it.

Sponsored by: Genpact | Powering Change at GE Vernova: Inside One of the World’s Largest Databricks Migrations

How do you transform legacy data into a launchpad for next-gen innovation? GE Vernova is tackling it by rapidly migrating from outdated platforms to Databricks, building one of the world’s largest cloud data implementations. This overhaul wasn’t optional. Scaling AI, cutting technical debt and slashing license costs demanded a bold, accelerated approach. Led by strategic decisions from the CDO and powered by Genpact’s AI Gigafactory, the migration spans 35+ business domains and sub-domains, 60,000+ data objects, 15,000+ jobs and 3,000+ reports from 120+ diverse data sources, delivering a multi-tenant platform with unified governance. The anticipated results? Faster insights, seamless data sharing, and a standardized platform built for AI at scale. This session explores how Genpact and Databricks are fueling GE Vernova’s mission to deliver The Energy to Change the World—and what it takes to get there when speed, scale, and complexity are non-negotiable.

Sponsored by: Google Cloud | Building Powerful Agentic Ecosystems with Google Cloud's A2A

This session unveils Google Cloud's Agent2Agent (A2A) protocol, ushering in a new era of AI interoperability where diverse agents collaborate seamlessly to solve complex enterprise challenges. Join our panel of experts to discover how A2A empowers you to deeply integrate these collaborative AI systems with your existing enterprise data, custom APIs, and critical workflows. Ultimately, learn to build more powerful, versatile, and securely managed agentic ecosystems by combining specialized Google-built agents with your own custom solutions (Vertex AI or no-code). Extend this ecosystem further by serving these agents with Databricks Model Serving and governing them with Unity Catalog for consistent security and management across your enterprise.

Sponsored by: Informatica | Modernize analytics and empower AI in Databricks with trusted data using Informatica

As enterprises continue their journey to the cloud, data warehouse and data management modernization is essential to optimize analytics and drive business outcomes. Minimizing modernization timelines is important for reducing risk and shortening time to value – and ensuring enterprise data is clean, curated and governed is imperative to enable analytics and AI initiatives. In this session, learn how Informatica's Intelligent Data Management Cloud (IDMC) empowers analytics and AI on Databricks by helping data teams:
· Develop no-code/low-code data pipelines that ingest, transform and clean data at enterprise scale
· Improve data quality and extend enterprise governance with Informatica Cloud Data Governance and Catalog (CDGC) and Unity Catalog
· Accelerate pilot-to-production with Mosaic AI

Tech Industry Session: Optimizing Costs and Controls to Democratize Data and AI

Join us for this session focused on how leading tech companies are enabling data intelligence across their organizations while maintaining cost efficiency and governance. Hear the successes and the challenges when Databricks empowers thousands of users—from engineers to business teams—by providing scalable tools for AI, BI and analytics. Topics include:
· Combining AI/BI and Lakehouse Apps to streamline workflows and accelerate insights
· Implementing system tables, tagging and governance frameworks for granular control
· Democratizing data access while optimizing costs for large-scale analytical workloads
Hear from customers and Databricks experts, followed by a customer panel featuring industry leaders. Gain insights into how Databricks helps tech innovators scale their platforms while maintaining operational excellence.
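
For example, a hedged sketch of cost attribution by tag from the billing system table (the columns follow the documented system.billing.usage schema; the 'team' tag key is hypothetical):

    # Aggregate daily consumption by a custom 'team' tag using system tables.
    usage = spark.table("system.billing.usage")

    by_team = (
        usage
        .selectExpr("usage_date", "usage_quantity", "custom_tags['team'] AS team")
        .groupBy("usage_date", "team")
        .sum("usage_quantity")
    )

    by_team.orderBy("usage_date").show()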

Telco Reimagined: Real-World Journeys in Data and AI for Customer Experience Transformation

How are today’s leading telecom operators transforming customer experience at scale with data and AI? Join us for an inspiring fireside chat with senior leaders from Optus, Plume and AT&T as they share their transformation stories — from the first steps to major milestones and the tangible business impact achieved with Databricks’ Data Intelligence Platform. You’ll hear firsthand how these forward-thinking CSPs are driving measurable outcomes through unified data, machine learning and AI. Discover the high-impact use cases they’re prioritizing — like proactive care and hyper-personalization — and gain insight into their bold vision for the future of customer experience in telecom. Whether you're just beginning your AI journey or scaling to new heights, this session offers an authentic look at what’s working, what’s next and how data and AI are helping telecoms lead in a competitive landscape.

Unity Catalog Lakeguard: Secure and Efficient Compute for Your Enterprise

Modern data workloads span multiple sources — data lakes, databases, apps like Salesforce and services like cloud functions. But as teams scale, secure data access and governance across shared compute become critical. In this session, learn how to confidently integrate external data and services into your workloads using Spark and Unity Catalog on Databricks. We'll explore compute options like serverless, clusters, workflows and SQL warehouses, and show how Unity Catalog’s Lakeguard enforces fine-grained governance — even when compute is shared concurrently by multiple users. Walk away ready to choose the right compute model for your team’s needs — without sacrificing security or efficiency.
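
As a small, hypothetical illustration of the governance side: permissions are defined once in Unity Catalog and apply no matter which compute runs the query, with Lakeguard enforcing them even on compute shared by many users (group and table names are made up):

    # Grant read access on a governed table to an analyst group.
    spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `data-analysts`")
    spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `data-analysts`")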

What’s New with Databricks Assistant: From Exploration to Production

Databricks Assistant helps you get from initial exploration all the way to production faster and easier than ever. In this session, we'll show you how Assistant simplifies and accelerates common workflows, boosting your productivity across notebooks and the SQL editor. You'll get practical tips, see end-to-end examples in action, and hear about the latest capabilities we're excited about. We'll also discuss how we're continually improving Assistant to make your development experience faster, more contextual and more customizable. Join us to discover how to get the most out of Databricks Assistant and empower your team to build better and faster.

Accelerating Data Transformation: Best Practices for Governance, Agility and Innovation

In this session, we will share NCS’s approach to implementing a Databricks Lakehouse architecture, focusing on key lessons learned and best practices from our recent implementations. By integrating Databricks SQL Warehouse, the DBT Transform framework and our innovative test automation framework, we’ve optimized performance and scalability, while ensuring data quality. We’ll dive into how Unity Catalog enabled robust data governance, empowering business units with self-serve analytical workspaces to create insights while maintaining control. Through the use of solution accelerators, rapid environment deployment and pattern-driven ELT frameworks, we’ve fast-tracked time-to-value and fostered a culture of innovation. Attendees will gain valuable insights into accelerating data transformation, governance and scaling analytics with Databricks.

Inscape Smart TV Data: Unlocking Consumption and Competitive Intelligence

With VIZIO's Inscape viewership data now available in the Databricks Marketplace, our expansive dataset has never been easier to access. With real-time availability, flexible integrations, and secure, governed sharing, it's built for action. Join our team as we explore the full depth of this comprehensive data across both linear and streaming TV, showcasing real-world use cases like measuring the incremental reach of streaming or matching to 1st/3rd party data for ROI analyses. We will also review our competitive intelligence through a share-of-voice analysis and outline the steps to success. This session will show you how to turn Inscape data into a strategic advantage.

Reducing Transaction Conflicts in Databricks—Fundamentals and Applications at Asana

When using ACID-guaranteed transactions on Databricks concurrently, we can run into transaction conflicts. This talk discusses the basics of concurrent transaction functionality in Databricks—what happens when various combinations of INSERT, UPDATE and MERGE INTO happen concurrently. We discuss how table isolation level, partitioning and deletion vectors affect this. We also mention how Asana used an intermediate blind append stage to support several hundred concurrent transaction updates into the same table.
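
A minimal sketch of the blind-append pattern mentioned above (table and column names are illustrative, not Asana's actual schema): concurrent writers only append to a staging table, which avoids read-write conflicts, and a single consolidation job periodically merges the staged rows into the main table.

    from delta.tables import DeltaTable

    # Each concurrent writer performs a blind append: it never reads the target,
    # so appends from many writers do not conflict under WriteSerializable isolation.
    updates_df = spark.table("main.app.events_incoming")      # hypothetical batch of new rows
    updates_df.write.mode("append").saveAsTable("main.app.events_staging")

    # A single scheduled job drains the staging table into the main table with MERGE.
    staged = spark.table("main.app.events_staging")
    target = DeltaTable.forName(spark, "main.app.events")
    (
        target.alias("t")
        .merge(staged.alias("s"), "t.event_id = s.event_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )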