
Speaker

Ben Johnson

3 talks

Co-Founder & CTO, Uptitude

Ben holds a physics degree and has over 15 years of experience working with data across missile defence, pharmaceuticals, and enterprise organisations. He specialises in digital transformation, low-code development, and integrating AI into enterprise systems, helping organisations move faster, smarter, and with purpose. Ben has led data strategy across supply chains, R&D, compliance, tech, and marketing, working to embed AI into core business functions. Alongside his work at Uptitude, he contributes to conversations on technology governance, advising the UK Parliament on AI policy and the future of responsible innovation. He believes the organisations that harness AI ethically and effectively will drive the next wave of economic and societal progress.

Bio from: Big Data LDN 2025


Talks & appearances

3 activities


As AI adoption accelerates across industries, many organisations are realising that building a model is only the beginning. Real-world deployment of AI demands robust infrastructure, clean and connected data, and secure, scalable MLOps pipelines. In this panel, experts from across the AI ecosystem share lessons from the frontlines of operationalising AI at scale.

We’ll dig into the tough questions:

• What are the biggest blockers to AI adoption in large enterprises — and how can we overcome them?

• Why does bad data still derail even the most advanced models, and how can we fix the data quality gap?

• Where does synthetic data fit into real-world AI pipelines — and how do we define “real” data?

• Is Agentic AI the next evolution, or just noise — and how should MLOps prepare?

• What does a modern, secure AI stack look like when using external partners and APIs?

Expect sharp perspectives on data integration, model lifecycle management, and the cyber-physical infrastructure needed to make AI more than just a POC.

On today’s episode, we’re joined by Ben Johnson, Founder and CEO of Particle41, a provider of software and product development solutions crafted by world-class app development, DevOps, and data science teams. We talk about:

• What components the CTO owns in a SaaS company

• Optimizing the efficiency of dev teams

• How much of the CTO role is internal vs. external

• How to interview & identify a great CTO candidate

Summary

The first stage in every data project is collecting information and routing it to a storage system for later analysis. For operational data this typically means collecting log messages and system metrics. Often a different tool is used for each class of data, increasing the overall complexity and number of moving parts. The engineers at Timber.io decided to build a new tool in the form of Vector that allows for processing both of these data types in a single framework that is reliable and performant. In this episode Ben Johnson and Luke Steensen explain how the project got started, how it compares to other tools in this space, and how you can get involved in making it even better.
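To make the “single framework” idea concrete, here is a minimal sketch of a Vector configuration that collects both log and metric data and routes them through one topology. The component names, file paths, and address are illustrative rather than taken from the episode, and exact option names vary between Vector versions:

    # Tail application log files as a stream of log events.
    [sources.app_logs]
    type = "file"
    include = ["/var/log/app/*.log"]

    # Receive system metrics over the StatsD protocol.
    [sources.system_metrics]
    type = "statsd"
    address = "127.0.0.1:8125"

    # Route both streams to a single downstream sink for later analysis.
    [sinks.out]
    type = "console"
    inputs = ["app_logs", "system_metrics"]
    encoding = "json"

Each component has an ID, and sinks and transforms subscribe to upstream components through their inputs list, which is what lets one Vector process handle logs and metrics side by side.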

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!

You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, Corinium Global Intelligence, and Data Council. Upcoming events include the O’Reilly AI conference, the Strata Data conference, the combined events of the Data Architecture Summit and Graphorum, and Data Council in Barcelona. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.

Your host is Tobias Macey and today I’m interviewing Ben Johnson and Luke Steensen about Vector, a high-performance, open-source observability data router.

Interview

Introduction

How did you get involved in the area of data management?

Can you start by explaining what the Vector project is and your reason for creating it?

What are some of the comparable tools that are available and what were they lacking that prompted you to start a new project?

What strategy are you using for project governance and sustainability?

What are the main use cases that Vector enables?

Can you explain how Vector is implemented and how the system design has evolved since you began working on it?

How did your experience building the business and products for Timber influence and inform your work on Vector?

When you were planning the implementation, what were your criteria for the runtime implementation and why did you decide to use Rust?

What led you to choose Lua as the embedded scripting environment? (See the configuration sketch after this list of questions.)

What data format does Vector use internally?

Is there any support for defining and enforcing schemas?

In the event of a malformed message, is there any capacity for a dead letter queue?

What are some strategies for formatting source data to improve the effectiveness of the information that is gathered and the ability of Vector to parse it into useful data?

When designing an event flow in Vector, what are the available mechanisms for testing the overall delivery and any transformations?

What options are available to operators to support visibility into the running system?

In terms of deployment topologies, what ca
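As a companion to the Lua question above, here is a rough sketch of how an embedded Lua transform sits between a source and a sink in a Vector configuration. It follows the Lua transform API as documented in early Vector releases (a global event table that the script can mutate); the component names and the fields being added are purely illustrative:

    # Ingest raw log lines.
    [sources.app_logs]
    type = "file"
    include = ["/var/log/app/*.log"]

    # Inline Lua that mutates each event as it passes through.
    [transforms.enrich]
    type = "lua"
    inputs = ["app_logs"]
    source = """
    event["environment"] = "production"
    event["team"] = "platform"
    """

    # Emit the enriched events.
    [sinks.out]
    type = "console"
    inputs = ["enrich"]
    encoding = "json"

The intent of a design like this is that the heavy lifting stays in the compiled Rust runtime, while the embedded Lua hook is invoked only for small per-event customisations.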