talk-data.com

Topic: MLOps

Tags: machine_learning, devops, ai

6 activities tagged

Activity Trend: peak of 26 activities per quarter (2020-Q1 to 2026-Q1)

Activities

Showing results filtered by: Big Data LDN 2025

Energy flexibility plays an increasingly fundamental role in the UK energy market. With the adoption of EVs, solar panels, and domestic and commercial batteries, the number of flexible assets is soaring, making aggregation and flexibility trading far more complex and demanding vast amounts of data modelling and forecasting. To address this real-world challenge, Flexitricity adopted MLOps best practices to meet the needs of scaling energy demand in the UK.

The session will cover:

- The complex technical challenge of energy flexibility in 2025.

- The critical requirement to invest in technology and skillsets.

- A real-life view of how machine learning operations (MLOps) scaled Flexitricity’s data science model development.

- How innovations in technology can support and optimise delivering on energy flexibility. 

The audience will gain insight into:

- The challenge of building data science models to keep up with scaling demand.

- How MLOps best practices can be adopted to drive efficiency and scale data science experimentation to 10,000+ experiments per year.

- Lessons learned from adopting MLOps pipelines.
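The abstract doesn't name Flexitricity's tooling, but the discipline behind running 10,000+ experiments per year is systematic tracking: every run records its parameters and metrics so the best configuration is queryable rather than remembered. A minimal pure-Python sketch of that idea (all names, such as `ExperimentTracker`, are illustrative, not the team's actual stack):

```python
import time
import uuid
from dataclasses import dataclass, field


@dataclass
class Run:
    """A single tracked experiment: parameters in, metrics out."""
    run_id: str
    params: dict
    metrics: dict = field(default_factory=dict)
    started_at: float = field(default_factory=time.time)


class ExperimentTracker:
    """Minimal in-memory experiment tracker (illustrative, not a real library)."""

    def __init__(self, experiment: str):
        self.experiment = experiment
        self.runs: list[Run] = []

    def start_run(self, **params) -> Run:
        run = Run(run_id=uuid.uuid4().hex[:8], params=params)
        self.runs.append(run)
        return run

    def log_metric(self, run: Run, name: str, value: float) -> None:
        run.metrics[name] = value

    def best_run(self, metric: str, maximize: bool = True) -> Run:
        sign = 1.0 if maximize else -1.0
        return max(self.runs, key=lambda r: sign * r.metrics[metric])


tracker = ExperimentTracker("demand-forecast")
for lr in (0.01, 0.1, 0.3):
    run = tracker.start_run(learning_rate=lr, model="gbm")
    tracker.log_metric(run, "mae", 12.0 / (1 + lr))  # stand-in for a real evaluation
best = tracker.best_run("mae", maximize=False)
print(best.params)  # hyperparameters of the lowest-error run
```

In practice a tool such as MLflow or a cloud experiment service plays this role, adding persistence, lineage, and a UI; the contract, though, is the same: parameters in, metrics out, best run queryable.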

The Generative AI revolution is here, but so is the operational headache. For years, teams have matured their MLOps practices for traditional models, but the rapid adoption of LLMs has introduced a parallel, often chaotic, world of LLMOps. This results in fragmented toolchains, duplicated effort, and a state of "Ops Overload" that slows down innovation.

This session directly confronts this challenge. We will demonstrate how a unified platform like Google Cloud's Vertex AI can tame this complexity by providing a single control plane for the entire AI lifecycle.
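One way to picture a "single control plane" is a registry that versions traditional model artifacts and LLM prompt templates side by side, instead of maintaining two parallel toolchains. The sketch below is purely conceptual: all names are illustrative, and Vertex AI's actual Model Registry API is not shown.

```python
from dataclasses import dataclass


@dataclass
class Artifact:
    """One versioned entry: a model artifact or an LLM prompt template."""
    name: str
    kind: str     # "model" or "prompt"
    version: int
    payload: str  # path to weights, or the prompt text itself


class UnifiedRegistry:
    """Toy control plane: one versioned store for MLOps and LLMOps assets."""

    def __init__(self):
        self._store: dict[str, list[Artifact]] = {}

    def register(self, name: str, kind: str, payload: str) -> Artifact:
        versions = self._store.setdefault(name, [])
        artifact = Artifact(name, kind, len(versions) + 1, payload)
        versions.append(artifact)
        return artifact

    def latest(self, name: str) -> Artifact:
        return self._store[name][-1]


reg = UnifiedRegistry()
reg.register("churn-model", "model", "gs://bucket/churn/v1/weights.bin")
reg.register("support-summarizer", "prompt", "Summarize the ticket: {ticket}")
reg.register("support-summarizer", "prompt", "Summarize in 3 bullets: {ticket}")
print(reg.latest("support-summarizer").version)  # 2
```

The point of the unification is that rollback, audit, and promotion workflows then apply uniformly, whether the asset is a gradient-boosted model or a prompt.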

As AI adoption accelerates across industries, many organisations are realising that building a model is only the beginning. Real-world deployment of AI demands robust infrastructure, clean and connected data, and secure, scalable MLOps pipelines. In this panel, experts from across the AI ecosystem share lessons from the frontlines of operationalising AI at scale.

We’ll dig into the tough questions:

• What are the biggest blockers to AI adoption in large enterprises — and how can we overcome them?

• Why does bad data still derail even the most advanced models, and how can we fix the data quality gap?

• Where does synthetic data fit into real-world AI pipelines — and how do we define “real” data?

• Is Agentic AI the next evolution, or just noise — and how should MLOps prepare?

• What does a modern, secure AI stack look like when using external partners and APIs?

Expect sharp perspectives on data integration, model lifecycle management, and the cyber-physical infrastructure needed to make AI more than just a proof of concept.

The rapid evolution of AI, fueled by powerful Large Language Models (LLMs) and autonomous agents, is reshaping how we build, deploy, and manage AI systems. This presentation explores the critical intersection of MLOps and AI architecture, highlighting the paradigm shifts required to integrate LLMs and agents into production. We will address key architectural challenges, including scalability, observability, and security, while examining emerging MLOps practices such as robust data pipelines, model monitoring, and continuous optimization. Attendees will gain practical insights and actionable strategies to navigate the complexities of modern AI deployments, unlocking the full potential of LLMs and agents while ensuring operational excellence.
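The model-monitoring practice mentioned above often starts with distribution-drift checks on input features. As a hedged, self-contained sketch (the metric is the standard population stability index; the function name and thresholds are illustrative, not from the talk):

```python
import math
import random


def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline and a live feature distribution.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fraction(data, i):
        count = sum(1 for x in data if lo + i * width <= x < lo + (i + 1) * width)
        return max(count / len(data), 1e-6)  # floor avoids log(0) for empty bins

    psi = 0.0
    for i in range(bins):
        e, a = bin_fraction(expected, i), bin_fraction(actual, i)
        psi += (a - e) * math.log(a / e)
    return psi


random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time feature
shifted = [random.gauss(0.5, 1.0) for _ in range(5000)]   # live traffic, mean shift
print(population_stability_index(baseline, baseline))  # 0.0 — identical distributions
print(population_stability_index(baseline, shifted))   # elevated — would trigger an alert
```

In production this check runs per feature on a schedule, with alerts wired to retraining or rollback, which is exactly where the "observability" and "continuous optimization" threads of the talk meet.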

As AI evolves with powerful Large Language Models (LLMs) and autonomous agents, deploying and managing these systems requires new approaches. This presentation explores the crucial intersection of MLOps and AI architecture, highlighting the shift toward scalable, observable, and secure AI deployments. We’ll examine key architectural considerations for integrating LLMs and agents into production, alongside evolving MLOps practices such as robust data pipelines, model monitoring, and continuous optimization.

Development teams often embrace Agile ways of working, yet the systems we build can still struggle to adapt when business needs shift. In this talk, we’ll share the journey of how a cross-functional data science team at the LEGO Group evolved its machine learning architecture to handle real-world complexity and change.

We’ll highlight how new modelling strategies, advanced feature engineering, and modern MLOps pipelines were designed not only for performance, but for flexibility. You’ll gain insight into how we architected a resilient ML system that supports changing requirements, scales with ease, and enables faster iteration. Expect actionable ideas on how to future-proof your own ML solutions and ensure they remain relevant in dynamic business contexts.

Powered by: Women in Data®

This talk explores the disconnect between fundamental MLOps principles and their practical application in designing, operating, and maintaining machine learning pipelines. We’ll break down these principles, examine their influence on pipeline architecture, and conclude with a straightforward, vendor-agnostic mind map offering a roadmap for building resilient MLOps systems on any project or technology stack. Despite the surge in tools and platforms, many teams still struggle with the same underlying issues: brittle data dependencies, poor observability, unclear ownership, and pipelines that silently break once deployed. Architecture alone isn't the answer; systems thinking is.

Topics covered include:

- Modular design: feature, training, inference

- Built-in observability, versioning, reuse

- Orchestration across batch, real-time, LLMs

- Platform-agnostic patterns that scale
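The modular feature/training/inference split above can be sketched platform-agnostically as three plain functions wired through one orchestrating object, so each stage can be versioned and tested in isolation. A minimal illustration (all names and the trivial constant model are invented for the sketch, not from the talk):

```python
from dataclasses import dataclass
from typing import Callable


def build_features(raw: list[dict]) -> list[list[float]]:
    """Feature stage: raw records -> numeric feature vectors."""
    return [[r["x"], r["x"] ** 2] for r in raw]


def train(features, labels) -> Callable[[list[float]], float]:
    """Training stage: fit a trivial mean model (stand-in for real training)."""
    bias = sum(labels) / len(labels)
    return lambda f: bias  # illustrative constant predictor


@dataclass
class Pipeline:
    """Orchestrator: the same wiring serves batch and real-time paths."""
    featurize: Callable
    trainer: Callable

    def fit(self, raw, labels):
        self.model = self.trainer(self.featurize(raw), labels)
        return self

    def predict(self, raw):
        # Reusing self.featurize guarantees training/serving feature parity.
        return [self.model(f) for f in self.featurize(raw)]


data = [{"x": 1.0}, {"x": 2.0}, {"x": 3.0}]
pipe = Pipeline(featurize=build_features, trainer=train).fit(data, labels=[2.0, 4.0, 6.0])
print(pipe.predict([{"x": 5.0}]))  # [4.0] — the fitted mean
```

Because the feature stage is a standalone function, batch backfills, real-time scoring, and even LLM pre-processing can share it, which is the reuse-and-orchestration point of the bullets above.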