talk-data.com

Topic: Data Streaming

Tags: realtime, event_processing, data_flow

10 tagged activities

Activity Trend: peak of 70 per quarter (2020-Q1 to 2026-Q1)

Activities

Filtering by: Big Data LDN 2025

75% of GenAI projects fail to scale—not because the models lack sophistication, but because they’re built on fragmented data. If your systems don’t know who they’re talking about, how can your AI deliver reliable insights?

This talk unveils how real-time Entity Resolution (ER) is becoming the silent engine behind trusted, AI-ready data architecture. We will discuss how organizations across financial services, public safety, and digital platforms are embedding ER into modern data stacks—delivering identity clarity, regulatory confidence, and faster outcomes without the drag of legacy MDM.

You’ll learn:

  • Why ER is foundational for AI trust, governance, and analytics
  • Patterns for embedding ER into streaming and event-driven architectures
  • How ecosystem partners and data platforms are amplifying ER value
  • How to build trust at the entity level—without slowing down innovation
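To make the streaming-ER pattern above concrete, here is a minimal sketch (not the speaker's system; all names are invented) of the core step: each incoming event is matched against a store of known entities via a normalized blocking key, so duplicate representations of the same person collapse to one stable identity. Production ER adds probabilistic matching, scoring models, and persistence.

```python
# Illustrative sketch only: a minimal real-time entity-resolution step for an
# event stream. EntityStore and its fields are hypothetical names; real ER
# systems add blocking strategies, fuzzy scoring, and durable state.
from dataclasses import dataclass, field

def normalize(name: str, email: str) -> tuple:
    # Crude blocking key so "Jane Doe"/" jane doe " and differing email case collide.
    return (name.strip().lower(), email.strip().lower())

@dataclass
class EntityStore:
    entities: dict = field(default_factory=dict)  # blocking key -> entity id
    next_id: int = 1

    def resolve(self, event: dict) -> int:
        """Return a stable entity id for an incoming event, minting a new id
        only when no existing entity matches."""
        key = normalize(event["name"], event["email"])
        if key not in self.entities:
            self.entities[key] = self.next_id
            self.next_id += 1
        return self.entities[key]

store = EntityStore()
a = store.resolve({"name": "Jane Doe", "email": "JANE@example.com"})
b = store.resolve({"name": " jane doe ", "email": "jane@example.com"})
c = store.resolve({"name": "John Smith", "email": "john@example.com"})
print(a == b, a == c)  # True False: one person, one id; a new person gets a new id
```

Because `resolve` is a pure per-event lookup, it drops naturally into a stream processor's map step, which is what makes ER viable in event-driven architectures.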

Whether you’re modernizing architecture, launching AI programs, or tightening compliance, this session will equip you to embed trust from the ground up.

Data leaders today face a familiar challenge: complex pipelines, duplicated systems, and spiraling infrastructure costs. Standardizing around Kafka for real-time and Iceberg for large-scale analytics has gone some way towards addressing this, but it still requires separate stacks, leaving teams to stitch them together at high expense and risk.

This talk will explore how Kafka and Iceberg together form a new foundation for data infrastructure: one that unifies streaming and analytics into a single, cost-efficient layer. By standardizing on these open technologies, organizations can reduce data duplication, simplify governance, and unlock both instant insights and long-term value from the same platform.

You will come away with a clear understanding of why this convergence is reshaping the industry, how it lowers operational risk, and the advantages it offers for building durable, future-proof data capabilities.
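The "single layer" idea can be sketched in a few lines of stdlib Python. This is a toy stand-in, not Kafka or Iceberg: each event is ingested once, then serves both a live aggregate (the streaming side) and immutable committed batches (the table side), which is the duplication-avoiding pattern the abstract describes.

```python
# Toy stand-in for the Kafka + Iceberg pattern, stdlib only. Class and field
# names are illustrative assumptions, not a real API.
class UnifiedLog:
    def __init__(self, batch_size: int = 3):
        self.batch_size = batch_size
        self.buffer = []        # uncommitted events (hot, streaming side)
        self.snapshots = []     # committed, immutable batches (table side)
        self.running_total = 0  # instantly queryable streaming aggregate

    def append(self, event: dict) -> None:
        self.buffer.append(event)
        self.running_total += event["amount"]    # real-time insight, no ETL hop
        if len(self.buffer) >= self.batch_size:  # periodic commit to the "table"
            self.snapshots.append(tuple(self.buffer))
            self.buffer = []

    def table_scan(self):
        # Analytical read over committed snapshots only, like a table query.
        return [e for snap in self.snapshots for e in snap]

log = UnifiedLog(batch_size=2)
for amt in (10, 20, 30):
    log.append({"amount": amt})
print(log.running_total)      # 60: the streaming view has seen every event
print(len(log.table_scan()))  # 2:  the table view sees only committed batches
```

The design point is that both reads are served from one write path, so there is nothing to stitch together and nothing to keep in sync.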

We are entering the Era of Experience, where AI agents will transform customer journeys by learning directly from interactions. But most customer-facing agents today are “senseless,” lacking the real-time context needed to deliver relevant, empathetic, and valuable experiences. This session will explore how real-time streaming architectures and proprietary customer data can power the next generation of intelligent, perceptive agents.

Join Snowplow’s Jon Su as he unpacks:

  • Why brands risk commoditization if they rely on third-party agents
  • How real-time context enables smarter, more personalized customer interactions
  • The key ingredients for building agents that perceive, adapt, and self-optimize
  • How Snowplow Signals provides the real-time customer intelligence foundation for agentic applications

Discover how to shift from static personalization to adaptive, agent-driven experiences that improve customer satisfaction, loyalty, and business outcomes.
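As a rough illustration of "real-time context" (a generic sketch, not the Snowplow Signals API), an agent can be fed a bounded window of each customer's most recent first-party events, so its responses reflect what just happened rather than a stale profile:

```python
# Hedged sketch: a per-customer rolling context window for a customer-facing
# agent. ContextStore and its methods are invented names for illustration.
from collections import defaultdict, deque

class ContextStore:
    def __init__(self, window: int = 5):
        self.window = window
        self.events = defaultdict(lambda: deque(maxlen=self.window))

    def record(self, customer_id: str, event: str) -> None:
        self.events[customer_id].append(event)  # old events age out automatically

    def context_for(self, customer_id: str) -> list:
        # What the agent sees: only the latest `window` events, newest last.
        return list(self.events[customer_id])

ctx = ContextStore(window=3)
for e in ["viewed:boots", "viewed:coat", "cart:coat", "checkout:failed"]:
    ctx.record("cust-1", e)
print(ctx.context_for("cust-1"))  # oldest event has aged out of the window
```

A bounded, always-fresh window like this is what lets an agent notice the failed checkout and act on it in the same session.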

The most sought-after products don’t just appear on shelves—they arrive at the perfect moment, in perfect condition, thanks to data that works as fast as the business moves.

From premium meats to peak-season produce, Morrisons, one of the UK’s largest retailers, is building a future where shelves are stocked with exactly what customers want, when they want it.

In this session, Peter Laflin, Chief Data Officer at Morrisons, joins Striim to share how real-time data streaming into Google Cloud enables smarter, faster, and more autonomous retail operations. He’ll unpack how Morrisons is moving beyond predictive models to build AI-native, agentic systems that can sense, decide, and act at scale. Topics include:

  • Live store operations that respond instantly to real-world signals
  • AI architectures that move from “data-informed” to “data-delegated” decisions
  • Practical lessons from embedding real-time thinking across teams and tech stacks

This is a session for retail and data leaders who are ready to move beyond dashboards and start building intelligent systems that deliver both customer delight and operational agility.
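The "data-delegated" idea above can be sketched as a sense-decide-act loop: the system consumes a live signal and takes a guardrailed action rather than surfacing it on a dashboard. This is a generic illustration with invented thresholds, not Morrisons' or Striim's implementation.

```python
# Illustrative sense -> decide -> act loop for data-delegated operations.
# Reorder points, order caps, and SKUs are invented for the sketch.
def decide(stock_level: int, reorder_point: int = 20, max_order: int = 100) -> int:
    """Return units to reorder; 0 means no action needed."""
    if stock_level >= reorder_point:
        return 0
    # Guardrail: the autonomous loop may never order more than max_order at once.
    return min(max_order, reorder_point * 2 - stock_level)

actions = []
for signal in [{"sku": "A", "stock": 50}, {"sku": "B", "stock": 5}]:  # sense
    qty = decide(signal["stock"])                                     # decide
    if qty:
        actions.append((signal["sku"], qty))                          # act
print(actions)  # only the low-stock SKU triggers a replenishment action
```

The guardrail inside `decide` is the key design choice: delegation becomes safe when every autonomous action is bounded by an explicit policy.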

In this session, Paul Wilkinson, Principal Solutions Architect at Redpanda, will demonstrate Redpanda's native Iceberg capability: a game-changing addition that bridges the gap between real-time streaming and analytical workloads, eliminating the complexity of traditional data lake architectures while maintaining the performance and simplicity that Redpanda is known for.

In a follow-along demo, Paul will show how this new capability enables organizations to seamlessly transition streaming data into analytical formats without complex ETL pipelines or additional infrastructure overhead, allowing you to build your own streaming lakehouse and show it to your team.

Moving data between operational systems and analytics platforms is often a painful process. Traditional pipelines that transfer data in and out of warehouses tend to become complex, brittle, and expensive to maintain over time.

Much of this complexity, however, is avoidable. Data in motion and data at rest—Kafka Topics and Iceberg Tables—can be treated as two sides of the same coin. By establishing an equivalence between Topics and Tables, it’s possible to transparently map between them and rethink how pipelines are built.

This talk introduces a declarative approach to bridging streaming and table-based systems. By shifting complexity into the data layer, we can decompose complex, imperative pipelines into simpler, more reliable workflows.

We’ll explore the design principles behind this approach, including schema mapping and evolution between Kafka and Iceberg, and how to build a system that can continuously materialize and optimize hundreds of thousands of topics as Iceberg tables.
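A minimal sketch of the Topic-to-Table mapping described above (my illustration, not the speakers' system): derive a table schema declaratively from a topic's record schema, then evolve the table additively when the topic gains a field. The type names are simplified; real Kafka-to-Iceberg mappings cover nested types, promotions, and deletions.

```python
# Sketch of a declarative Topic <-> Table schema mapping with additive
# evolution. TYPE_MAP and the schema shapes are invented simplifications.
TYPE_MAP = {"string": "string", "long": "bigint", "double": "double",
            "boolean": "boolean"}

def table_schema_for(topic_schema: dict) -> dict:
    # One declarative rule: every topic field becomes a table column.
    return {name: TYPE_MAP[t] for name, t in topic_schema["fields"].items()}

def evolve(table_schema: dict, new_topic_schema: dict) -> dict:
    # Additive evolution only: new topic fields become new columns;
    # existing columns are never dropped or retyped here.
    evolved = dict(table_schema)
    for name, t in new_topic_schema["fields"].items():
        evolved.setdefault(name, TYPE_MAP[t])
    return evolved

v1 = {"fields": {"user_id": "long", "event": "string"}}
v2 = {"fields": {"user_id": "long", "event": "string", "amount": "double"}}
tbl = table_schema_for(v1)
tbl = evolve(tbl, v2)
print(tbl)  # {'user_id': 'bigint', 'event': 'string', 'amount': 'double'}
```

Because the mapping is a pure function of the topic schema, it can be re-applied automatically across hundreds of thousands of topics, which is what makes the continuous-materialization goal tractable.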

Whether you're building new pipelines or modernizing legacy systems, this session will provide practical patterns and strategies for creating resilient, scalable, and future-proof data architectures.

In this session, we will explore how organisations can leverage ArcGIS to analyse spatial data within their data platforms, such as Databricks and Microsoft Fabric. We will discuss the importance of spatial data and its impact on decision-making processes. The session will cover various aspects, including the ingestion of streaming data using ArcGIS Velocity, the processing and management of large volumes of spatial data with ArcGIS GeoAnalytics for Microsoft Fabric, and the use of ArcGIS for visualisation and advanced analytics with GeoAI. Join us to discover how these tools can provide actionable insights and enhance operational efficiency.
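As a generic illustration of the kind of spatial test a streaming spatial engine applies at ingest (this is plain Python, not the ArcGIS Velocity API; coordinates and radius are invented), here is a geofence filter over a stream of location events:

```python
# Generic streaming-geofence sketch: keep only events within a radius of a
# point of interest. DEPOT, the radius, and the event shape are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

DEPOT = (53.4808, -2.2426)  # hypothetical point of interest (Manchester)

def in_geofence(event: dict, radius_km: float = 10.0) -> bool:
    return haversine_km(event["lat"], event["lon"], *DEPOT) <= radius_km

stream = [{"id": 1, "lat": 53.48, "lon": -2.24},   # near the depot
          {"id": 2, "lat": 51.50, "lon": -0.12}]   # London, far away
nearby = [e["id"] for e in stream if in_geofence(e)]
print(nearby)  # only the nearby event passes the spatial filter
```

Evaluating a cheap per-event predicate like this at ingest, before data lands in the platform, is what turns raw location streams into actionable spatial signals.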