talk-data.com

Topic: Microsoft Fabric

databricks data_platform microsoft azure data_warehouse analytics data_analysis


Activity Trend: peak of 67 activities per quarter, 2020-Q1 to 2026-Q1

Activities

323 activities · Newest first

In today’s data-led environment, fostering a robust data culture is no longer a nice-to-have; it is a strategic necessity. This session will explore how forward-thinking organisations are embedding data into the very fabric of their culture, not only within technical teams but across marketing, operations, and senior leadership.

We’ll delve into how companies are aligning data strategy with customer experience, and how inclusive, high-performing teams are being built with diversity and empowerment at their core.

Drawing on examples from leading UK and global brands such as Unilever, Sainsbury’s, and Haleon, we’ll examine how scaling data literacy and trust across functions can unlock innovation and drive competitive advantage. Join us to discover what a strong data culture looks like in 2025, how to foster cross-functional collaboration, and what lies ahead for data-driven transformation.

Powered by: Women in Data®

The growth of connected data has made graph databases essential, yet organisations often face a dilemma: choosing between an operational graph for real-time queries or an analytical engine for large-scale processing. This division leads to data silos and complex ETL pipelines, hindering the seamless integration of real-time insights with deep analytics and the ability to ground AI models in factual, enterprise-specific knowledge. Google Cloud aims to solve this with a unified "Graph Fabric," introducing Spanner Graph, which extends Spanner with native support for the ISO standard Graph Query Language (GQL). This session will cover how Google Cloud has developed a Unified Graph Solution with BigQuery and Spanner graphs to serve a full spectrum of graph needs from operational to analytical.
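To give a flavour of the ISO GQL syntax that Spanner Graph supports, here is a minimal query sketch; the graph name, labels, and properties below are hypothetical illustrations, not taken from the session, so consult the Spanner Graph documentation for the exact syntax:

```sql
-- Hypothetical property graph FinGraph: Person nodes linked to
-- Account nodes by Owns edges. GQL pattern matching finds owners
-- of active accounts.
GRAPH FinGraph
MATCH (p:Person)-[:Owns]->(a:Account)
WHERE a.is_blocked = false
RETURN p.name AS owner, a.id AS account_id
```

The same GQL pattern syntax is what the "Graph Fabric" positioning relies on: one query language across the operational (Spanner) and analytical (BigQuery) sides.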

Data platform migrations and modernisations typically take 18 months; 80% fail or run late. Teams waste 35% of their time rediscovering tribal knowledge. 'Vibe coding' causes 2,000% compute spikes. Generic AI tools generate code without understanding your business logic, creating production disasters.

Fortune 500 companies are escaping this $280B crisis. One retail giant cut their SAP-to-Fabric migration from 18 to 6 months. Another achieved 85% error reduction in procurement analytics.

This session reveals how global enterprises capture organisational knowledge permanently, automate the entire data product lifecycle, and deliver production-ready analytics 3x faster. Learn the AI-first approach that transforms months of manual work into weeks of automated delivery.

Ready to move beyond passive data cataloging and unlock true AI-driven value? Join us for an in-depth session on data.world, now fully integrated with ServiceNow’s Workflow Data Fabric. We’ll show how you can unify, govern, and activate your enterprise data—across cloud, hybrid, and on-prem environments—to fuel agentic AI and intelligent automation. See a live demo of data.world’s knowledge graph in action: discover how to connect and contextualize data from any source, automate governance and compliance, and deliver trusted, explainable insights at scale. We’ll walk through real-world use cases, from rapid data discovery to automated policy enforcement and lineage tracking, and show how organizations are accelerating time-to-value and reducing risk. Whether you’re a data leader, architect, or practitioner, you’ll leave with practical strategies and a clear vision for making your data estate truly AI-ready. 

The future of healthcare depends not only on breakthroughs in science, but also on how we harness the power of data, technology, and AI. To realise this future, we must challenge long-held assumptions about how data products are delivered. What once took months of complex engineering now happens in days—or even hours—by re-imagining the way we work. At AstraZeneca, we shifted from a traditional IT-centric model to one where business teams take ownership, rapid prototyping drives innovation, and automation ensures quality, compliance, and trust.

 This change is more than a process improvement; it is a cultural transformation. By aligning every step to business value, embracing bold goals, and learning from failure, we have built a system that empowers people to innovate at speed and at scale. Data products are no longer the end goal but the enablers of something greater: a knowledge fabric ready for AI, where enterprise context unlocks smarter decisions and accelerates the delivery of life-changing medicines.

Our journey proves that when ambition meets courage, and technology meets purpose, we can transform the way data serves science—and, ultimately, transform the lives of patients around the world.

In today’s fragmented data landscape, organisations are under pressure to unify their data estates while maintaining agility, governance, and performance. This session explores how Microsoft Fabric, OneLake, and Azure Databricks come together to deliver a powerful, open, and integrated platform for centralised data orchestration—without compromise. From ingestion to insight, this session will showcase how “no excuses” becomes a reality when your data is truly unified, with a real-time demonstration highlighting the platform’s capabilities in action.

In this session, we will explore how organisations can leverage ArcGIS to analyse spatial data within their data platforms, such as Databricks and Microsoft Fabric. We will discuss the importance of spatial data and its impact on decision-making processes. The session will cover various aspects, including the ingestion of streaming data using ArcGIS Velocity, the processing and management of large volumes of spatial data with ArcGIS GeoAnalytics for Microsoft Fabric, and the use of ArcGIS for visualisation and advanced analytics with GeoAI. Join us to discover how these tools can provide actionable insights and enhance operational efficiency.

Materialized Lake Views (MLVs) in Microsoft Fabric Lakehouses offer a declarative approach to building data loading and data materialization pipelines. This session introduces how MLVs simplify data loading pipelines, automate refreshes, and visualize data loading lineage across the process. Attendees will learn how MLVs support integrated data quality rules, enabling trend analysis and alerting for violations, all without manual pipeline orchestration. We'll also cover current limitations and the roadmap for new feature support.
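A minimal sketch of the declarative style described above, assuming a bronze-to-silver flow; the schema, table, and constraint names are illustrative, not from the session, and the exact MLV syntax should be checked against the current Fabric documentation:

```sql
-- Hypothetical example: declare a materialized lake view over a raw
-- bronze table, with an integrated data quality rule that drops rows
-- violating the constraint (violations are surfaced for alerting).
CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.customers
(
  CONSTRAINT valid_email CHECK (email IS NOT NULL) ON MISMATCH DROP
)
AS
SELECT customer_id, name, email
FROM bronze.customers_raw;
```

Because the view is declared rather than orchestrated, Fabric can derive the refresh schedule and the lineage graph from the definition itself, which is what removes the manual pipeline orchestration the session refers to.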

About Andy: I am an Azure Data Platform professional working predominantly with Azure Synapse Analytics (SQL Pools), Data Factory, SQL Server and Power BI. I hold an MSc in Business Intelligence and Data Mining. I am a current Microsoft Data Platform MVP.

It’s now over six years since the publication of Zhamak Dehghani’s paper “How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh”, which had a major impact on the data and analytics industry.

It highlighted major data architecture failures and called for a rethink of both architecture and data provisioning: creating a data supply chain and democratising data engineering so that business domains can build reusable data products and make them available as self-governing services.

Since then, we have seen many companies adopt Data Mesh strategies, the repositioning of some software products, and the emergence of new ones that emphasize democratisation. But has what has happened since fully addressed the problems that Data Mesh was intended to solve? And what new problems are arising as organizations try to make data safely available to AI projects at machine scale?

In this unmissable session, Big Data LDN Chair Mike Ferguson sits down with Zhamak Dehghani to talk about what has happened since Data Mesh emerged. The conversation will look at:

● The drivers behind Data Mesh

● Revisiting Data Mesh to clarify what a data product is and what problems Data Mesh is intended to solve

● Did data architecture really change, or are companies still using existing architectures to implement this?

● What about technology to support this: is Data Fabric the answer, or best-of-breed tools?

● How critical is organisation to successful Data Mesh implementation?

● Roadblocks in the way of success, e.g., a lack of metadata standards

● How does Data Mesh impact AI?

● What’s next on the horizon?

Blast off into the future of analytics with this interstellar session on real-time data processing in Microsoft Fabric! From setting up KQL databases to capturing data streams from the International Space Station (yes, really!), this session is packed with hands-on tips and tricks. You'll explore how to harness the power of Event Streams and Logic Apps to create a seamless data pipeline that fuels real-time Power BI reports. Discover the basics of KQL—spoiler alert, it’s not SQL! Learn to query data like a tabular astronaut, watching your dashboards come to life with up-to-the-minute updates. With step-by-step guidance and plenty of cosmic creativity, you'll master real-time analytics faster than a rocket launch. Whether you're a data explorer, a real-time analytics enthusiast, or just curious about real-time tracking of space missions, this session is your ticket to the stars. Prepare for liftoff as we explore how Microsoft Fabric can transform your streaming data into celestial insights.
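As a taste of the KQL the session describes, here is a small query sketch; the table and column names are hypothetical, assuming a KQL database table populated by an Event Stream with ISS position telemetry:

```kusto
// Hypothetical table IssTelemetry(Timestamp, Latitude, Longitude),
// fed by an Event Stream. Average the ISS position per minute over
// the last 15 minutes, newest first.
IssTelemetry
| where Timestamp > ago(15m)
| summarize AvgLat = avg(Latitude), AvgLon = avg(Longitude)
    by bin(Timestamp, 1m)
| order by Timestamp desc
```

The pipe-based flow (filter, then aggregate into time bins, then sort) is the core pattern behind the real-time Power BI dashboards the session builds.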

Many organisations invest heavily in modern data platforms, yet see adoption lag behind. In this session you will learn how our 7-step model for data-driven working (step 2, 'Make a plan', and step 6, 'Share the knowledge') leads to platform choices that are supported by the business. Including tips for landing Azure, Databricks, or Fabric not only technically, but also organisationally.