When customers enjoy Gousto's recipe kits, they see the delicious result but not the careful steps it takes to get there. Data works the same way. In this session, Yanick will share how Gousto built a business case for analytics engineering, making an often-invisible discipline central to the company's strategy. He'll unpack how the team moved from ad-hoc outputs to a structured, mesh-ready approach, reducing complexity, proving ROI, and giving leadership confidence in data as a competitive advantage.
Most enterprise AI initiatives don’t fail because of bad models. They fail because of bad data. As organizations rush to integrate LLMs and advanced analytics into production, they often hit a roadblock: datasets that are messy, constantly evolving, and nearly impossible to manage at scale.
This session reveals why data is the Achilles’ heel of enterprise AI and how data version control can turn that weakness into a strength. You’ll learn how data version control transforms the way teams manage training datasets, track ML experiments, and ensure reproducibility across complex, distributed systems.
We’ll cover the fundamentals of data versioning, its role in modern enterprise AI architecture, and real-world examples of teams using it to build scalable, trustworthy AI systems.
Whether you’re an ML engineer, data architect, or AI leader, this talk will help you identify critical data challenges before they stall your roadmap, and provide you with a proven framework to overcome them.
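The core mechanic the session builds on can be sketched in a few lines: version a dataset by content hash so any silent change becomes detectable. The snippet below is an illustrative, hand-rolled sketch of that idea, not the API of any particular data version control tool; all names are hypothetical.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Content hash of a dataset file; identical bytes give an identical version id."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot(manifest: dict, tag: str, path: Path) -> dict:
    """Record the dataset's hash under a human-readable version tag."""
    manifest[tag] = {"file": str(path), "sha256": file_hash(path)}
    return manifest

def verify(manifest: dict, tag: str, path: Path) -> bool:
    """Check that the file on disk still matches the recorded version."""
    return manifest[tag]["sha256"] == file_hash(path)

# Example: snapshot a training set, then detect silent drift.
data = Path("train.csv")
data.write_text("id,label\n1,cat\n2,dog\n")
manifest = snapshot({}, "v1", data)

data.write_text("id,label\n1,cat\n2,dog\n3,fox\n")  # dataset changed underneath us
print(verify(manifest, "v1", data))  # False: the data no longer matches v1
```

Real data versioning tools add storage of the versioned files themselves, branching, and Git integration; the point of the sketch is only that a recorded hash turns "which data trained this model?" into an answerable question.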
Edmund Optics stands at the forefront of advanced manufacturing, distributing more than 34,000 products and customised solutions in optics, photonics and imaging to a range of industries across the globe. Just a year ago, Edmund Optics began an ambitious journey to transform its data science capabilities, aiming to use Machine Learning (ML) and AI to deliver real value to their business and customers.
Join us for an engaging panel discussion featuring Daniel Adams, Global Analytics Manager at Edmund Optics, as he shares the company's remarkable transformation from having no formal data science capabilities to deploying multiple ML and AI models in production—all within just 12 months. Daniel will highlight how Edmund Optics cultivated internal enthusiasm for data solutions, built trust, and created momentum to push the boundaries of what’s possible with data.
In this session, Daniel will reveal three key lessons learned on the journey from “data zero” to “data hero.” If you’re navigating a similar path, don’t miss this opportunity to discover actionable insights and strategies that can empower your own internal data initiatives.
In today’s landscape, data truly is the new currency. But unlocking its full value requires overcoming silos, ensuring trust and quality, and then applying the right AI and analytics capabilities to create real business impact. In this session, we’ll explore how Oakbrook Finance is tackling these challenges head-on — and the role that Fivetran and Databricks play in enabling that journey.
Oakbrook Finance is a UK-based consumer lender transforming how people access credit. By combining advanced data science with a customer-first approach, Oakbrook delivers fair, transparent, and flexible credit solutions — proving that lending can be both innovative and human-centred.
So you’ve heard of Databricks, but you’re still not sure what the fuss is all about. Yes, you’ve heard it’s Spark, but then there’s this Delta thing that’s both a data lake and a data warehouse (isn’t that what Iceberg is?). And then there’s Unity Catalog, which isn’t just a catalog: it also handles access management, and even surprising things like optimising your data and giving programmatic access to lineage and billing. But then serverless came out, and now you don’t even have to learn Spark? And of course there’s a bunch of AI features to use or build yourself. So why not spend 30 minutes learning what Databricks actually does, and how it can turn you into a rockstar Data Engineer?
Discover how Dun & Bradstreet and other global enterprises use Data Observability to ensure quality and efficiency and enforce compliance across on-prem and cloud environments. Learn proven strategies to operationalize governance, accelerate cloud migrations, and deliver trusted data for AI and analytics at scale. Join us to learn how Data Observability and Agentic Data Management empower leaders, engineers, and business teams to drive efficiency and savings at petabyte scale.
AI-powered development tools are accelerating development speed across the board, and analytics event implementation is no exception. But used without guardrails, they are very capable of creating organizational chaos: same company, same prompt, completely different schemas. Data teams can’t analyze what should be identical events across platforms.
The infrastructure assumptions that worked when developers shipped tracking changes in sprint cycles or quarters are breaking when they ship them multiple times per day. Schema inconsistency, cost surprises from experimental traffic, and trust erosion in AI-generated code are becoming the new normal.
Josh will demonstrate how Snowplow’s MCP (Model Context Protocol) server and data-structure toolchains enable teams to harness AI development speed while maintaining data quality and architectural consistency. Using Snowplow’s production approach of AI-powered design paired with deterministic implementation, teams get rapid iteration without the hallucination bugs that plague direct AI code generation.
Key Takeaways:
• How AI development acceleration is fragmenting analytics schemas within organizations
• Architectural patterns that separate AI creativity from production reliability
• Real-world implementation using MCP, Data Products, and deterministic code generation
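As a rough illustration of the "deterministic implementation" idea: events are checked against one fixed schema, so drifted AI-generated tracking code fails loudly instead of silently fragmenting analytics. This is a hand-rolled sketch, not Snowplow's actual data-structure format, and the field names are hypothetical.

```python
# Illustrative schema for a single tracked event; every platform must match it.
BUTTON_CLICK_SCHEMA = {
    "event": str,         # required event name
    "button_id": str,     # stable identifier, same on every platform
    "timestamp_ms": int,  # epoch milliseconds
}

def validate(event: dict, schema: dict) -> list[str]:
    """Return a list of violations; an empty list means the event conforms."""
    errors = [f"missing field: {k}" for k in schema if k not in event]
    errors += [
        f"wrong type for {k}: expected {t.__name__}"
        for k, t in schema.items()
        if k in event and not isinstance(event[k], t)
    ]
    errors += [f"unexpected field: {k}" for k in event if k not in schema]
    return errors

# Two teams (or two AI prompts) emitting "the same" event:
ios_event = {"event": "button_click", "button_id": "cta", "timestamp_ms": 1700000000000}
web_event = {"event": "button_click", "buttonId": "cta"}  # drifted naming, missing field

print(validate(ios_event, BUTTON_CLICK_SCHEMA))  # []
print(validate(web_event, BUTTON_CLICK_SCHEMA))  # names each divergence explicitly
```

The design point is the separation the talk describes: the AI is free to be creative about *which* events to design, but the implementation that ships is validated deterministically against an agreed schema.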
Multiverse is proud to host the Ministry of Defence (MOD) on stage at Big Data LDN to discuss their pioneering partnership focused on building data skills and capabilities across the defence sector. As organisations worldwide navigate the transformative potential of AI and advanced analytics, investing in staff development has become a strategic imperative. This partnership is already making a tangible impact: over 250 MOD employees are currently enrolled in upskilling programmes designed to strengthen data literacy, enhance analytical capabilities, and embed a culture of continuous learning. The initiative equips personnel to leverage data effectively, driving smarter decision-making and supporting the MOD’s ongoing Strategic Defence Reform agenda.
Speakers will share insights into how targeted learning interventions and personalised development pathways can accelerate organisational capability while delivering measurable outcomes. Attendees will hear first-hand how the collaboration between Multiverse and the MOD has delivered early successes, fostered a growth mindset among staff, and positioned the MOD to scale these programmes far beyond their current reach. This session offers a unique opportunity for leaders and practitioners alike to explore the intersection of talent investment, AI adoption, and data-driven transformation, demonstrating how strategic upskilling can future-proof organisations in an increasingly complex data landscape.
Moving data between operational systems and analytics platforms is often a painful process. Traditional pipelines that transfer data in and out of warehouses tend to become complex, brittle, and expensive to maintain over time.
Much of this complexity, however, is avoidable. Data in motion and data at rest—Kafka Topics and Iceberg Tables—can be treated as two sides of the same coin. By establishing an equivalence between Topics and Tables, it’s possible to transparently map between them and rethink how pipelines are built.
This talk introduces a declarative approach to bridging streaming and table-based systems. By shifting complexity into the data layer, we can decompose complex, imperative pipelines into simpler, more reliable workflows.
We’ll explore the design principles behind this approach, including schema mapping and evolution between Kafka and Iceberg, and how to build a system that can continuously materialize and optimize hundreds of thousands of topics as Iceberg tables.
Whether you're building new pipelines or modernizing legacy systems, this session will provide practical patterns and strategies for creating resilient, scalable, and future-proof data architectures.
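To make the topic/table equivalence concrete, here is a toy sketch of a declarative field mapping from a streaming record schema to a table schema, with additive schema evolution. It is illustrative only: the function names and type strings are hypothetical, not any actual Kafka or Iceberg API.

```python
def topic_to_table_schema(record_schema: dict) -> dict:
    """Derive a table schema from a topic's record schema (1:1 field mapping)."""
    return {field: dtype for field, dtype in record_schema.items()}

def evolve(table_schema: dict, new_record_schema: dict) -> dict:
    """Additive schema evolution: new fields become new columns; dropping or
    retyping an existing column is rejected so old table data stays readable."""
    for field, dtype in table_schema.items():
        if new_record_schema.get(field, dtype) != dtype:
            raise ValueError(f"incompatible change to {field}")
    merged = dict(table_schema)
    merged.update({f: t for f, t in new_record_schema.items() if f not in merged})
    return merged

# An orders topic materialized as a table, then evolved when producers add a field.
orders_v1 = {"order_id": "long", "amount": "double"}
table = topic_to_table_schema(orders_v1)

orders_v2 = {"order_id": "long", "amount": "double", "currency": "string"}
table = evolve(table, orders_v2)
print(table)  # currency appears as a new column; existing columns are untouched
```

The declarative framing is the point: given only the two schemas, the mapping and its allowed evolutions are computed rather than hand-coded per pipeline, which is what makes materializing very large numbers of topics tractable.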
In this session, we will explore how organisations can leverage ArcGIS to analyse spatial data within their data platforms, such as Databricks and Microsoft Fabric. We will discuss the importance of spatial data and its impact on decision-making processes. The session will cover various aspects, including the ingestion of streaming data using ArcGIS Velocity, the processing and management of large volumes of spatial data with ArcGIS GeoAnalytics for Microsoft Fabric, and the use of ArcGIS for visualisation and advanced analytics with GeoAI. Join us to discover how these tools can provide actionable insights and enhance operational efficiency.
In this short presentation, Big Data LDN Conference Chairman and Europe’s leading IT Industry Analyst in Data Management and Analytics, Mike Ferguson, will welcome everyone to Big Data LDN 2025. He will also summarise where companies are with data, analytics and AI in 2025, what the key challenges and trends are, how these trends are shaping the way companies build a data-driven enterprise, and where you can find out more about these topics at the show.
It’s now over six years since the emergence of the paper “How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh” by Zhamak Dehghani, which had a major impact on the data and analytics industry.
It highlighted major failures in data architecture and called for a rethink of both architecture and data provisioning: creating a data supply chain and democratising data engineering so that business domains can build reusable data products and make them available as self-governing services.
Since then, we have seen many companies adopt Data Mesh strategies, the repositioning of some software products, and the emergence of new ones that emphasise democratisation. But has what has happened since fully addressed the problems Data Mesh set out to solve? And what new problems are arising as organisations try to make data safely available to AI projects at machine scale?
In this unmissable session, Big Data LDN Chair Mike Ferguson sits down with Zhamak Dehghani to talk about what has happened since Data Mesh emerged. The conversation will cover:
● The drivers behind Data Mesh
● Revisiting Data Mesh to clarify what a data product is and what Data Mesh is intended to solve
● Did data architecture really change or are companies still using existing architecture to implement this?
● What about the technology to support this: is Data Fabric the answer, or best-of-breed tools?
● How critical is organisation to a successful Data Mesh implementation?
● Roadblocks in the way of success, e.g. a lack of metadata standards
● How does Data Mesh impact AI?
● What’s next on the horizon?
Data storytelling matters more than ever. The ability to make your analysis understood, and acted on, can make you more valuable than analysts with twice your experience. In this episode, Mike Cisneros walks us through his practical, tactical playbook for turning good analysis into powerful data stories that get results. ✨ Try Julius today at https://landadatajob.com/Julius-YT
Make your data storytelling sing. Check out Mike's co-authored book, Storytelling with Data: Before and After - Practical Makeovers for Powerful Data Stories.
Amazon link: https://amzn.to/41ViFmv
Website: storytellingwithdata.com/books
💌 Join 10k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter
🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com/interviewsimulator
⌚ TIMESTAMPS
00:00 Introduction
01:16 How To Become A Better Data Analyst and Storyteller
04:41 Storytelling with Data: Before and After
15:33 A Case Study: Analyzing Call Center Data
🔗 CONNECT WITH MIKE
🎥 YouTube Channel: https://www.youtube.com/c/storytellingwithdata
🤝 LinkedIn: https://www.linkedin.com/in/mikevizneros/ | https://www.linkedin.com/company/storytelling-with-data-llc/
📸 Instagram: https://www.instagram.com/mikevizneros/
💻 Website: https://www.storytellingwithdata.com/
🔗 CONNECT WITH AVERY
🎥 YouTube Channel: https://www.youtube.com/@averysmith
🤝 LinkedIn: https://www.linkedin.com/in/averyjsmith/
📸 Instagram: https://instagram.com/datacareerjumpstart
🎵 TikTok: https://www.tiktok.com/@verydata
💻 Website: https://www.datacareerjumpstart.com/
Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!
To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more
If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.
👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa
Comprehensive guide offering actionable strategies for enhancing human-centered AI, efficiency, and productivity in industrial and systems engineering through the power of AI.
Advances in Artificial Intelligence Applications in Industrial and Systems Engineering is the first book in the Advances in Industrial and Systems Engineering series, offering insights into AI techniques, challenges, and applications across various industrial and systems engineering (ISE) domains. Not only does the book chart current AI trends and tools for effective integration, but it also raises pivotal ethical concerns and explores the latest methodologies, tools, and real-world examples relevant to today’s dynamic ISE landscape. Readers will gain a practical toolkit for effective integration and utilization of AI in system design and operation. The book also presents the current state of AI across big data analytics, machine learning, artificial intelligence tools, cloud-based AI applications, neural-based technologies, modeling and simulation in the metaverse, intelligent systems engineering, and more, and discusses future trends.
Written by renowned international contributors for an international audience, Advances in Artificial Intelligence Applications in Industrial and Systems Engineering includes information on:
● Reinforcement learning, computer vision and perception, and safety considerations for autonomous systems (AS)
● Natural language processing (NLP) topics including language understanding and generation, sentiment analysis and text classification, and machine translation
● AI in healthcare, covering medical imaging and diagnostics, drug discovery and personalized medicine, and patient monitoring and predictive analysis
● Cybersecurity, covering threat detection and intrusion prevention, fraud detection and risk management, and network security
● Social good applications including poverty alleviation and education, environmental sustainability, and disaster response and humanitarian aid
Advances in Artificial Intelligence Applications in Industrial and Systems Engineering is a timely, essential reference for engineering, computer science, and business professionals worldwide.