talk-data.com

Topic: AI/ML (Artificial Intelligence / Machine Learning)

Tags: data_science, algorithms, predictive_analytics

9014 activities tagged

Activity Trend

Peak of 1532 activities per quarter (2020-Q1 to 2026-Q1)

Activities

9014 activities · Newest first

SAP Business Data Cloud is a fully managed solution that unifies and governs all SAP data while seamlessly integrating with third-party sources. With SAP Business Data Cloud, organisations can accelerate decision-making by empowering business users to make more impactful choices. It also provides a trusted foundation for AI, ensuring that data across applications and operations is reliable, responsible, and relevant—enabling organisations to harness the full potential of generative AI.

Edmund Optics stands at the forefront of advanced manufacturing, distributing more than 34,000 products and customised solutions in optics, photonics and imaging to a range of industries across the globe. Just a year ago, Edmund Optics began an ambitious journey to transform its data science capabilities, aiming to use Machine Learning (ML) and AI to deliver real value to its business and customers.

Join us for an engaging panel discussion featuring Daniel Adams, Global Analytics Manager at Edmund Optics, as he shares the company's remarkable transformation from having no formal data science capabilities to deploying multiple ML and AI models in production—all within just 12 months. Daniel will highlight how Edmund Optics cultivated internal enthusiasm for data solutions, built trust, and created momentum to push the boundaries of what’s possible with data. 

In this session, Daniel will reveal three key lessons learned on the journey from “data zero” to “data hero.” If you’re navigating a similar path, don’t miss this opportunity to discover actionable insights and strategies that can empower your own internal data initiatives.

AI can now write, code, analyse, and design; in many cases, it can do the work we once saw as uniquely human. So where does that leave us?

In this session, Jason Foster, CEO of Cynozure, explores what it means to lead and contribute when the tools around us are evolving faster than our roles. This isn’t just a technology shift — it’s a people, leadership, and organisational shift.

Jason will explore how the rise of AI is reshaping roles across every level of an organisation, from task execution to strategic thinking, and what it takes to operate, lead and deliver value when the work itself is changing. He’ll share how individuals and teams can rethink their role, build human-AI collaboration, and stay ahead by focusing on what only humans can uniquely do.

Whether you’re a senior leader or hands-on practitioner, you’ll leave with a clearer view of how to evolve your role, build human-AI teams, and move from just doing work to shaping the future of it.

In today’s landscape, data truly is the new currency. But unlocking its full value requires overcoming silos, ensuring trust and quality, and then applying the right AI and analytics capabilities to create real business impact. In this session, we’ll explore how Oakbrook Finance is tackling these challenges head-on — and the role that Fivetran and Databricks play in enabling that journey.

Oakbrook Finance is a UK-based consumer lender transforming how people access credit. By combining advanced data science with a customer-first approach, Oakbrook delivers fair, transparent, and flexible credit solutions — proving that lending can be both innovative and human-centred.

Graph-based Retrieval-Augmented Generation (GraphRAG) enhances large language models (LLMs) by grounding their responses in structured knowledge graphs, offering more accurate, domain-specific, and explainable outputs. However, many of the graphs used in these pipelines are automatically generated or loosely assembled, and often lack the semantic structure, consistency, and clarity required for reliable grounding. The result is misleading retrieval, vague or incomplete answers, and hallucinations that are difficult to trace or fix.

This hands-on tutorial introduces a practical approach to evaluating and improving knowledge graph quality in GraphRAG applications. We’ll explore common failure patterns, walk through real-world examples, and share a reusable checklist of features that make a graph “AI-ready.” Participants will learn methods for identifying gaps, inconsistencies, and modeling issues that prevent knowledge graphs from effectively supporting LLMs, and apply simple fixes to improve grounding and retrieval performance in their own projects.
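To make the grounding step concrete, here is a minimal sketch of the GraphRAG pattern (the triples, entity names, and quality checks are hypothetical illustrations, not the tutorial’s actual code or dataset): retrieve facts from a curated graph, run a basic quality audit of the kind the checklist describes, and assemble the retrieved facts into the context an LLM sees.

```python
# GraphRAG-style grounding sketch. All data here is made up for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

# A tiny, hand-curated knowledge graph.
GRAPH = [
    Triple("Edmund Optics", "manufactures", "optical components"),
    Triple("optical components", "used_in", "imaging systems"),
    Triple("Edmund Optics", "headquartered_in", "New Jersey"),
]

def retrieve_facts(question: str, graph: list[Triple]) -> list[Triple]:
    """Return triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [t for t in graph if t.subject.lower() in q or t.obj.lower() in q]

def audit_graph(graph: list[Triple]) -> list[str]:
    """Flag basic issues that weaken grounding (one example 'AI-ready' check)."""
    issues, seen = [], set()
    for t in graph:
        if (t.subject, t.predicate) in seen:
            issues.append(f"ambiguous duplicate edge: {t.subject} -{t.predicate}->")
        seen.add((t.subject, t.predicate))
        if not t.obj.strip():
            issues.append(f"empty object on {t.subject} -{t.predicate}->")
    return issues

if __name__ == "__main__":
    facts = retrieve_facts("What does Edmund Optics manufacture?", GRAPH)
    context = "\n".join(f"{t.subject} {t.predicate} {t.obj}" for t in facts)
    print("Context passed to the LLM:\n" + context)
    print("Graph audit:", audit_graph(GRAPH) or "no issues found")
```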

So you’ve heard of Databricks, but you’re still not sure what the fuss is all about. Yes, you’ve heard it’s Spark, but then there’s this Delta thing that’s both a data lake and a data warehouse (isn’t that what Iceberg is?). And then there’s Unity Catalog, which isn’t just a catalog: it also handles access management, and even surprising things like optimising your data and giving you programmatic access to lineage and billing. But then serverless came out, and now you don’t even have to learn Spark? And of course there’s a bunch of AI stuff to use or build yourself. So why not spend 30 minutes learning the details of what Databricks does, and how it can turn you into a rockstar Data Engineer?
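As a taste of the Delta side of that story, here is a minimal PySpark sketch (the table and column names are invented; on a Databricks cluster the `spark` session and Delta support come preconfigured, while running locally assumes the delta-spark package):

```python
# Minimal Delta Lake sketch: lakehouse storage with warehouse-style
# ACID transactions on top of open files.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-demo")
    # These two configs enable Delta when running outside Databricks.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

spark.sql("CREATE DATABASE IF NOT EXISTS demo")

# Write a small DataFrame as a managed Delta table.
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.write.format("delta").mode("overwrite").saveAsTable("demo.users")

# Read it back; time travel (VERSION AS OF) and MERGE also work on Delta.
spark.table("demo.users").show()
```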

Discover how Dun & Bradstreet and other global enterprises use Data Observability to ensure quality and efficiency and to enforce compliance across on-prem and cloud environments. Learn proven strategies to operationalize governance, accelerate cloud migrations, and deliver trusted data for AI and analytics at scale. Join us to learn how Data Observability and Agentic Data Management empower leaders, engineers, and business teams to drive efficiency and savings at petabyte scale.

In today’s data-saturated world, the real challenge isn’t collecting more data; it’s transforming it into trusted, usable products that drive innovation, efficiency, and measurable business impact. In this session, Matt Webb, Head of Asset Information at UK Power Networks, and Franck Carassus, Co-founder and CSO at Opendatasoft, will share how UKPN is embracing a data product approach with the support of Opendatasoft to break down silos, accelerate collaboration across teams, and make data a real driver of business performance.

With Opendatasoft, UKPN has built a public-facing Data Product Marketplace that ensures every dataset is accessible, understandable, and actionable — not only for technical teams, but also for business users and external partners. Together, they are creating data products that combine high-quality metadata, intuitive interfaces, and built-in observability, making them both human-friendly and AI-ready.

This session will highlight the tangible benefits of this partnership: faster access to information, increased adoption of data across the organization, and a scalable foundation to prepare for the AI-driven future. If your organization wants to maximize the value of its data while delivering a seamless user experience, this session will provide practical inspiration.

AI-powered development tools are accelerating development speed across the board, and analytics event implementation is no exception. But without appropriate usage, these tools are very capable of creating organizational chaos. Same company, same prompt, completely different schemas: data teams can’t analyze what should be identical events across platforms.

The infrastructure assumptions that worked when developers shipped tracking changes in sprint cycles or quarters are breaking when they ship them multiple times per day. Schema inconsistency, cost surprises from experimental traffic, and trust erosion in AI-generated code are becoming the new normal.

Josh will demonstrate how Snowplow’s MCP (Model Context Protocol) server and data-structure toolchains enable teams to harness AI development speed while maintaining data quality and architectural consistency. Using Snowplow’s production approach of AI-powered design paired with deterministic implementation, teams get rapid iteration without the hallucination bugs that plague direct AI code generation.

Key Takeaways:

• How AI development acceleration is fragmenting analytics schemas within organizations

• Architectural patterns that separate AI creativity from production reliability

• Real-world implementation using MCP, Data Products, and deterministic code generation
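The pattern behind that last takeaway can be illustrated in a few lines of plain Python: an AI assistant may generate the call site, but every event is validated against one centrally owned schema, so drifted field names fail loudly instead of quietly fragmenting the warehouse. (The schema, event, and field names below are hypothetical; this is a sketch of the idea, not Snowplow’s MCP toolchain.)

```python
# Deterministic gate for AI-generated tracking calls: one canonical schema,
# enforced at emit time. Event and field names are made up for illustration.
import jsonschema

CHECKOUT_STARTED_SCHEMA = {
    "type": "object",
    "properties": {
        "cart_id": {"type": "string"},
        "value": {"type": "number", "minimum": 0},
        "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},
    },
    "required": ["cart_id", "value", "currency"],
    "additionalProperties": False,  # reject fields the schema doesn't define
}

def track(event: dict) -> None:
    """Gate every emitted event through the canonical schema."""
    jsonschema.validate(instance=event, schema=CHECKOUT_STARTED_SCHEMA)
    print("event accepted:", event)

track({"cart_id": "c-42", "value": 99.5, "currency": "GBP"})  # passes

try:
    # AI-drifted field names and types: rejected, not silently ingested.
    track({"cartId": "c-42", "amount": "99.5", "currency": "gbp"})
except jsonschema.ValidationError as err:
    print("event rejected:", err.message)
```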

In an era where data complexity and scale challenge every organization, manual intervention can no longer keep pace. Prizm by DQLabs redefines the paradigm—offering a no-touch, agentic data platform that seamlessly integrates Data Quality, Observability, and Semantic Intelligence into one self-learning, self-optimizing ecosystem.

Unlike legacy systems, Prizm is AI-native and Agentic by Design, built from the ground up around a network of intelligent, role-driven agents that observe, recommend, act, and learn in concert to deliver continuous, autonomous data trust.

Join us at Big Data London to discover how Prizm’s agent-driven anomaly detection, data quality enforcement, and deep semantic analysis set a new industry standard—shifting data and AI trust from an operational burden to a competitive advantage that powers actionable, insight-driven outcomes.

Large Language Models (LLMs) are transformative, but static knowledge and hallucinations limit their direct enterprise use. Retrieval-Augmented Generation (RAG) is the standard solution, yet moving from prototype to production is fraught with challenges in data quality, scalability, and evaluation.

This talk argues the future of intelligent retrieval lies not in better models, but in a unified, data-first platform. We'll demonstrate how the Databricks Data Intelligence Platform, built on a Lakehouse architecture with integrated tools like Mosaic AI Vector Search, provides the foundation for production-grade RAG.

Looking ahead, we'll explore the evolution beyond standard RAG to advanced architectures like GraphRAG, which enable deeper reasoning within Compound AI Systems. Finally, we'll show how the end-to-end Mosaic AI Agent Framework provides the tools to build, govern, and evaluate the intelligent agents of the future, capable of reasoning across the entire enterprise.
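To ground the terminology, here is a bare-bones sketch of the RAG loop itself (with toy term-frequency “embeddings” and no actual model call; in production the index would be something like Mosaic AI Vector Search, and the final prompt would go to an LLM):

```python
# Bare-bones RAG loop: embed the question, rank documents by similarity,
# and assemble a retrieval-augmented prompt. Documents are illustrative.
from collections import Counter
import math

DOCS = [
    "Delta Lake stores tables as versioned Parquet files with a transaction log.",
    "Unity Catalog governs access to tables, files, models, and lineage.",
    "Vector search retrieves passages by embedding similarity.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def answer(question: str, k: int = 1) -> str:
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    # In production, this retrieval-augmented prompt is sent to an LLM.
    return f"Context:\n{context}\n\nQuestion: {question}"

print(answer("How does Delta Lake store tables?"))
```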

When signal drops, so does customer sentiment. That’s why today’s telecoms providers can no longer rely on yesterday’s data—they need insights in the moment.

Industry leader VMO2 is raising the bar by reimagining how data moves through the business—from network diagnostics to personalised offer delivery—by building a real-time foundation with Striim.

In this session, Vinay Pai, Head of Data Architecture at VMO2, will share how the company has transitioned from fragmented, on-premises systems to an intelligent, real-time data platform that enables proactive customer experiences and greater operational agility. Topics include:

- How VMO2 detects and resolves network issues before customers even pick up the phone

- Delivering truly personalised offers and dynamic pricing across digital touchpoints

- Accelerating new product delivery with a modular, event-driven architecture

- Key lessons from reducing churn and improving retention in a fiercely competitive market

This session is ideal for data leaders aiming to modernise legacy infrastructure, embed AI into operations, and deliver real-time customer experiences that make a tangible impact.

Face To Face
by Aaron Baker (Multiverse), Jane Crowe (UK Ministry of Defence), Kash Nejad (Multiverse)

Multiverse is proud to host the Ministry of Defence (MOD) on stage at Big Data LDN to discuss their pioneering partnership focused on building data skills and capabilities across the defence sector. As organisations worldwide navigate the transformative potential of AI and advanced analytics, investing in staff development has become a strategic imperative. This partnership is already making tangible impact: over 250 MOD employees are currently enrolled in upskilling programmes designed to strengthen data literacy, enhance analytical capabilities, and embed a culture of continuous learning. The initiative equips personnel to leverage data effectively, driving smarter decision-making and supporting the MOD’s ongoing Strategic Defence Reform agenda.

Speakers will share insights into how targeted learning interventions and personalised development pathways can accelerate organisational capability while delivering measurable outcomes. Attendees will hear first-hand how the collaboration between Multiverse and the MOD has delivered early successes, fostered a growth mindset among staff, and positioned the MOD to scale these programmes far beyond their current reach. This session offers a unique opportunity for leaders and practitioners alike to explore the intersection of talent investment, AI adoption, and data-driven transformation, demonstrating how strategic upskilling can future-proof organisations in an increasingly complex data landscape.

Face To Face
by Siddharth Rajagopal (Data as the Fourth Pillar), Sujay Dutta (Data as the Fourth Pillar)

This session makes the case for data as the fourth pillar of every enterprise. Boards, CEOs, and CxOs must understand why they should treat data strategically: enterprise use cases such as AI demand data that is high on the quality, compliance, and speed dimensions. The session will:

- Present a framework for enterprises to understand their current data challenges.

- Outline key principles for the data pillar.

- Define the role of the Chief Data Officer (CDO): nurturing demand for data while taking steps to fulfil that demand through an agile data operating model (DOM), enabled by people, processes, and technologies.

- Show how to measure the impact delivered by the data pillar, introducing KPIs such as Total Addressable Value through data (TAV) and Expected Addressable Value through data (EAV).

- Offer a maturity framework for every enterprise to track and progress its data maturity journey.

In the age of agentic AI, competitive advantage lies not only in AI models, but in the quality of the data agents reason on and the agility of the tools that feed them. To fully realize the ROI of agentic AI, organizations need a platform that enables high-quality data pipelines and provides scalable, enterprise-grade tools. In this session, discover how a unified platform for integration, data management, MCP server management, API management, and agent orchestration can help you to bring cohesion and control to how data and agents are used across your organization.

In an era where AI is rapidly transforming industries, leveraging AI in a responsible, compliant and sustainable way is more crucial than ever. Join us for an insightful session on ISO 42001, the new standard for AI compliance. James Lupton, Cynozure's CTO, will demystify the complexities of AI governance, sharing practical steps and helping you decide whether ISO 42001 is right for your organisation. In this 30-minute session, James will dive into:

- What ISO 42001 covers and why it matters for your AI practices

- How you can tailor the standard to your needs

- Practical strategies for getting started with the standard in your organisation