talk-data.com

Topic: Databricks

Tags: big_data, analytics, spark — 1286 tagged

Activity Trend: 515 peak/qtr (2020-Q1 to 2026-Q1)

Activities: 1286 · Newest first

Discover how to launch serverless workspaces in seconds, power Data & AI apps with Lakebase, and analyze unstructured data without coding using Agent Bricks. With AI/BI Genie + Copilot Studio, ask natural language questions directly in Microsoft Teams. Join this session to see how Azure Databricks makes AI simple, actionable, and enterprise-ready.

Discover how to supercharge analytics and AI workflows using Azure Databricks and Microsoft Fabric. This hands-on lab explores native AI/BI features in Azure Databricks, including ML-powered insights and real-time analytics. Learn multiple ways to serve data to Power BI, with a deep dive into Direct Lake mode with Fabric. Ideal for developers, data scientists, data analysts, and engineers modernizing BI with lakehouse architecture in the AI era.

Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.

Unleashing SAP Databricks on Azure: Modernize, analyze, and innovate

SAP Databricks on Azure integrates Databricks Data Intelligence Platform with SAP Business Data Cloud, unifying SAP and external data for advanced analytics, AI, and ML. It enables building intelligent apps and actionable insights using trusted SAP and third-party business data. Available natively on Azure within SAP Business Data Cloud, it offers seamless access without data duplication via Delta Sharing. This session highlights automated forecasting, exploratory analysis, and BI use cases.
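The "no data duplication" claim above rests on Delta Sharing, where a recipient is given a small JSON profile file and reads shared tables in place. A minimal sketch of that profile (the endpoint, token, and share/table names are placeholders, not SAP's actual values):

```python
import json

# Hypothetical Delta Sharing profile: the recipient saves this JSON and points
# a Delta Sharing client at it. Endpoint and bearerToken are placeholders.
profile = {
    "shareCredentialsVersion": 1,  # profile format version
    "endpoint": "https://sharing.example.com/delta-sharing",  # provider's sharing server
    "bearerToken": "<recipient-token>",  # credential issued by the data provider
}

with open("config.share", "w") as f:
    json.dump(profile, f, indent=2)

# A client then addresses a shared table as "<profile>#<share>.<schema>.<table>",
# reading it in place -- no copy of the underlying data is made.
table_url = "config.share#sap_share.finance.sales_orders"
print(table_url)
```

The profile carries credentials only; the data itself stays on the provider's side and is streamed on read.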

Building Agents with Agent Bricks and MCP

Want to create AI agents that can do more than just generate text? Join us to explore how combining Databricks' Agent Bricks with the Model Context Protocol (MCP) unlocks powerful tool-calling capabilities. We'll show you how MCP provides a standardized way for AI agents to interact with external tools, data and APIs, solving the headache of fragmented integration approaches. Learn to build agents that can retrieve both structured and unstructured data, execute custom code and tackle real enterprise challenges.
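The core pattern MCP standardizes is simple: a server advertises tools with JSON schemas, and an agent invokes them by name with JSON arguments. A stdlib-only sketch of that dispatch loop (the tool name, schema, and handler here are invented for illustration — this is not the Agent Bricks or MCP SDK API):

```python
import json

# Illustrative sketch of the tool-calling pattern MCP standardizes: tools are
# advertised with JSON schemas and invoked by name with JSON arguments.
TOOLS = {
    "lookup_order": {  # hypothetical tool
        "description": "Fetch an order record by id",
        "input_schema": {"type": "object", "properties": {"order_id": {"type": "string"}}},
        "handler": lambda args: {"order_id": args["order_id"], "status": "shipped"},
    },
}

def list_tools():
    """What a server returns for a tools/list-style request: names and schemas, no handlers."""
    return [{"name": n, **{k: v for k, v in t.items() if k != "handler"}}
            for n, t in TOOLS.items()]

def call_tool(request_json: str) -> str:
    """Dispatch a tools/call-style request: {'name': ..., 'arguments': {...}}."""
    req = json.loads(request_json)
    tool = TOOLS[req["name"]]
    return json.dumps(tool["handler"](req["arguments"]))

result = call_tool(json.dumps({"name": "lookup_order", "arguments": {"order_id": "A-17"}}))
print(result)
```

Because every tool is described the same way, an agent can discover and call tools from any conforming server without bespoke integration code — the fragmentation problem the abstract mentions.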

In federated data architectures, balancing team autonomy with accountability is a critical challenge. This presentation introduces Lakewatch, our governance model that transforms raw Databricks System Tables into actionable scorecards for cost efficiency and best practices. Learn how this approach drives centralized action, empowers domain teams, and eliminates governance bottlenecks.
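The abstract does not detail Lakewatch's scoring logic, but the general pattern — aggregating system-table usage rows into a per-team scorecard — might look like the following sketch (row shape loosely modeled on Databricks' system.billing.usage table; the "team" tag and the metrics are invented):

```python
from collections import defaultdict

# Hedged sketch: roll billing rows up into a per-team cost scorecard.
# Row shape is loosely modeled on system.billing.usage; tags and metrics invented.
usage_rows = [
    {"custom_tags": {"team": "payments"}, "usage_quantity": 120.0, "sku_name": "JOBS_COMPUTE"},
    {"custom_tags": {"team": "payments"}, "usage_quantity": 40.0,  "sku_name": "ALL_PURPOSE_COMPUTE"},
    {"custom_tags": {"team": "risk"},     "usage_quantity": 10.0,  "sku_name": "JOBS_COMPUTE"},
]

def scorecard(rows):
    """Per-team DBU totals plus the share running on cheaper jobs compute."""
    totals, jobs = defaultdict(float), defaultdict(float)
    for r in rows:
        team = r["custom_tags"].get("team", "untagged")  # untagged spend is itself a finding
        totals[team] += r["usage_quantity"]
        if r["sku_name"] == "JOBS_COMPUTE":
            jobs[team] += r["usage_quantity"]
    return {t: {"dbus": totals[t], "jobs_compute_share": round(jobs[t] / totals[t], 2)}
            for t in totals}

print(scorecard(usage_rows))
```

Publishing a metric like `jobs_compute_share` back to each domain team is what turns raw usage data into the accountability loop described above.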

The Egis group designs and operates complex infrastructure worldwide: highways, airports, rail, buildings, mobility services, energy, urban development, and the environment. The diversity and volume of the data generated pose major challenges in governance, industrialization, and scalability.

To address this, Egis deployed a Data Mesh infrastructure on Azure, built and operated by a dedicated team. This team designs, governs, and provisions the architecture for all Business Lines. The infrastructure relies on:

• Distributed storage with ADLS Gen2,

• ETL and big data processing with Azure Data Factory and Databricks,

• Visualization and secure sharing via Power BI Service and Delta Sharing,

• Advanced governance mechanisms to guarantee interoperability and reliability.

This session will cover:

• The architecture choices and technical patterns for building a distributed Data Mesh at the scale of a large international group

• The role and organization of the dedicated team in provisioning the platform and supporting business projects

• Practical lessons learned from concrete use cases already in production

A deep dive into a real-world Data Mesh implementation, designed to turn data into an asset that is accessible, reliable, and usable at scale by business and technical teams.

In this 45-minute webinar, we’ll reveal the 5 biggest opportunities Databricks customers can act on today to deliver measurable business value, fast. This practical, business-focused session will cover:

• The 5 biggest opportunities Databricks customers overlook, and how to capture them

• Quick wins you can deliver in weeks, not months, from governance to AI foundations

• Real-world examples of how organisations are unlocking value today

• How to align with your business stakeholders so your Databricks programme is seen as a driver of growth, not cost

This talk explores how data science helps balance energy systems in the face of demand volatility, generation volatility, and the push for sustainability. We’ll dive into two technical case studies: churn prediction using survival models, and the design of a high-availability real-time trading system on Databricks. These examples illustrate how data can support operational resilience and sustainability efforts in the energy sector.
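Survival models treat churn as time-to-event data: each customer contributes a tenure and a flag for whether churn was observed or the record is censored. The talk's actual models are not specified; a minimal Kaplan-Meier estimator in plain Python only illustrates the idea:

```python
# Minimal Kaplan-Meier estimator to illustrate survival-based churn analysis.
# Each customer contributes (tenure_months, churned); churned=False means the
# observation is censored (the customer was still active when last observed).
def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each observed churn time."""
    times = sorted({t for t, churned in observations if churned})
    surv, curve = 1.0, []
    for t in times:
        at_risk = sum(1 for d, _ in observations if d >= t)       # still subscribed at t
        events = sum(1 for d, c in observations if d == t and c)  # churned exactly at t
        surv *= 1 - events / at_risk
        curve.append((t, round(surv, 3)))
    return curve

data = [(2, True), (3, False), (5, True), (5, True), (8, False), (12, False)]
print(kaplan_meier(data))
```

The censoring handling is the point: customers who simply haven't churned yet still inform the at-risk counts, which a naive churn-rate calculation would get wrong.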

Face To Face
by Roberto Flores (Magnum Ice Cream Company (a division of Unilever))
API

In this session, we will explore the world of small language models, focusing on their unique advantages and practical applications. We will cover the basics of language models, the benefits of using smaller models, and provide hands-on examples to help beginners get started. By the end of the session, attendees will have a solid understanding of how to leverage small language models in their projects. The session will highlight the efficiency, customization, and adaptability of small models, making them ideal for edge devices and real-time applications.

We will introduce attendees to two widely used small language models: Qwen3 and SmolLM3. Specifically, we will cover:

1. Accessing Models: How to navigate HuggingFace to explore and select available models, and how to view model documentation to determine a model's usefulness for specific tasks

2. Deployment: How to get started using

(a) Inference Provider - using HuggingFace inference API or Google CLI

(b) On-Tenant - using Databricks Model Serving

(c) Running the Model Locally - Using Ollama and LMstudio

3. Tradeoffs: We also examine the tradeoffs of each route
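For route (c), Ollama serves pulled models over a local HTTP API on its default port. A sketch of building a request against it (the model tag is an assumption, and we only construct the request here — actually sending it requires a running Ollama server):

```python
import json
import urllib.request

# Sketch of option (c): calling a locally served model via Ollama's HTTP API.
# Assumes Ollama is running locally and the model has been pulled
# (e.g. `ollama pull qwen3`); "qwen3" is an assumed model tag.
payload = {
    "model": "qwen3",
    "prompt": "Summarize: small models run well on edge devices.",
    "stream": False,  # single JSON response instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

def run(request):
    """Send the request and return the model's text (needs Ollama running)."""
    with urllib.request.urlopen(request) as resp:
        return json.loads(resp.read())["response"]

print(req.full_url, payload["model"])
```

The same payload shape works for any locally pulled model, which is what makes the local route attractive for the edge and real-time scenarios mentioned above.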

Join Sami Hero and Tammie Coles, as they share how Ellie is reinventing data modeling with AI-native tools that empower both technical and non-technical users. With CData Embedded Cloud, Ellie brings live metadata and data models from systems like Snowflake, Databricks, and Oracle Financials into a unified modeling workspace. Their platform translates legacy structures into human-readable insights, letting users interact with a copilot-style assistant to discover, refine, and maintain data models faster—with less reliance on analysts.

You’ll see how Ellie uses generative AI to recommend new entities, reconcile differences between models and live systems, and continuously document evolving data environments. Learn how corporations are using Ellie and CData together to scale high-quality data modeling across teams, reducing rework, accelerating delivery of analytics-ready models, and making enterprise architecture accessible to the business.

Want to get your GenAI idea noticed? Databricks engineers share their hands-on experiences building interactive demos that actually made business leaders sit up and take notice.

We’ll walk through the journey from a single idea to a working prototype in under a month. Hear how we did it, what worked, what didn’t, and the unexpected hurdles that tripped us up, with a practical look at how to:

  • Translate technical impact into business value
  • Make your voice heard in large dev teams
  • Avoid common pitfalls, from permissions to procurement

If you’re a data scientist, engineer, or AI leader who wants to move fast and make your work impossible to ignore, join us to explore how you could create the Minimum Viable Product that makes you the Most Valuable Player.