talk-data.com

Topic: AI/ML (Artificial Intelligence/Machine Learning)

Tags: data_science, algorithms, predictive_analytics

9014 tagged activities

Activity Trend: peak of 1532 activities per quarter, 2020-Q1 to 2026-Q1

Activities

9014 activities · Newest first

Today, I'm chatting with Stuart Winter-Tear about AI product management. We're getting into the nitty-gritty of what it takes to build and launch LLM-powered products for the commercial market that actually produce value. Among other things in this rich conversation, Stuart surprised me with the level of importance he believes UX has in making LLM-powered products successful, even for technical audiences.

After spending significant time at the forefront of AI’s breakthroughs, Stuart believes many of the products we’re seeing today are the result of FOMO above all else. He shares a belief I’ve emphasized time and time again on the podcast: product is about the problem, not the solution. This design philosophy has informed Stuart’s 20-plus-year career, and it is pivotal to understanding how best to use AI to build products that meet users’ needs.

Highlights / Skip to:

- Why Stuart was asked to speak to the House of Lords about AI (2:04)
- The LLM-powered products Stuart has been building recently (4:20)
- Finding product-market fit with AI products (7:44)
- Lessons Stuart has learned over the past two years working with LLM-powered products (10:54)
- Figuring out how to build user trust in your AI products (14:40)
- The differences between being a digital product manager vs. an AI product manager (18:13)
- Who is best suited for an AI product management role (25:42)
- Why Stuart thinks user experience matters greatly with AI products (32:18)
- The formula needed to create a business-viable AI product (38:22)
- Stuart describes the skills and roles he thinks are essential in an AI product team and who he brings on first (50:53)
- Conversations that need to be had with academics and data scientists when building AI-powered products (54:04)
- Final thoughts from Stuart and where you can find more from him (58:07)

Quotes from Today’s Episode

“I think that the core dream with GenAI is getting data out of IT hands and back to the business. Finding a way to overlay all this disparate, unstructured data and [translate it] to the human language is revolutionary. We’re finding industries that you would think were more conservative (i.e. medical, legal, etc.) are probably the most interested because of the large volumes of unstructured data they have to deal with. People wouldn’t expect large language models to be used for fact-checking… they’re actually very powerful, especially if you can have your own proprietary data or pipelines. Same with security–although large language models introduce a terrifying amount of security problems, they can also be used in reverse to augment security. There’s a lovely contradiction with this technology that I do enjoy.” - Stuart Winter-Tear (5:58)

“[LLM-powered products] gave me the wow factor, and I think that’s part of what’s caused the problem. If we focus on technology, we build more technology, but if we focus on business and customers, we’re probably going to end up with more business and customers. This is why we end up with so many products that are effectively solutions in search of problems. We’re in this rush and [these products] are [based on] FOMO. We’re leaving behind what we understood about [building] products—as if [an LLM-powered product] is a special piece of technology. It’s not. It’s another piece of technology. [Designers] should look at this technology from the prism of the business and from the prism of the problem. We love to solutionize, but is the problem the problem? What’s the context of the problem? What’s the problem under the problem? Is this problem worth solving, and is GenAI a desirable way to solve it? We’re putting the cart before the horse.” - Stuart Winter-Tear (11:11)

“[LLM-powered products] feel most amazing when you’re not a domain expert in whatever you’re using it for. I’ll give you an example: I’m terrible at coding. When I got my hands on Cursor, I felt like a superhero. It was unbelievable what I could build. Although [LLM products] look most amazing in the hands of non-experts, it’s actually most powerful in the hands of experts who do understand the domain they’re using this technology [in]. Perhaps I want to do a product strategy, so I ask [the product] for some assistance, and it can get me 70% of the way there. [LLM products] are great as a jumping-off point… but ultimately [they are] only powerful because I have certain domain expertise.” - Stuart Winter-Tear (13:01)

“We’re so used to the digital paradigm. The deterministic nature of you put in X, you get out Y; it’s the same every time. Probabilistic changes every time. There is a huge difference between what results you might be getting in the lab compared to what happens in the real world. You effectively find yourself building [AI products] live, and in order to do that, you need good communities and good feedback available to you. You need these fast feedback loops. From a pure product management perspective, we used to just have the [engineering] timeline… Now, we have [the data research timeline]. If you’re dealing with cutting-edge products, you’ve got these two timelines that you’re trying to put together, and the data research one is very unpredictable. It’s the nature of research. We don’t necessarily know when we’re going to get to where we want to be.” - Stuart Winter-Tear (22:25)

“I believe that UX will become the #1 priority for large language model products. I firmly believe whoever wins in UX will win in this large language model product world. I’m against fully autonomous agents without human intervention for knowledge work. We need that human in the loop. What was the intent of the user? How do we get that right push back from the large language model to understand even the level of the person that they’re dealing with? These are fundamental UX problems that are going to push UX to the forefront… This is going to be on UX to educate the user, to be able to inject the user in at the right time to be able to make this stuff work. The UX folk who do figure this out are going to create the breakthrough and create the mass adoption.” - Stuart Winter-Tear (33:42)

How can data science accelerate the energy transition? In this session, UK Power Networks’ Data Science team presents real-world tools driving a smarter, more flexible electricity grid on the path to Net Zero. From democratising access to grid insights to automating decision-making for clean energy, this talk highlights how applied AI and analytics are transforming infrastructure at scale, with lessons for any data professional tackling high-impact, real-world problems.

UK Power Networks will share how they extracted data from century-old handwritten and printed archives. Using FME coupled with AI, UKPN was able to automate the task, turning a multi-year effort into days. See how All-Data, Any-AI Integration can be used to unlock new insights, drive efficiency, and accelerate innovation.

Join us to learn how the Schwarz Group, the parent company of Lidl and Kaufland and the 4th largest retailer worldwide, leverages containerized Strategy in STACKIT for sustainable growth of their data and AI infrastructure. Over a decade-long partnership, Schwarz has utilized sovereign data and cloud services to maintain a competitive edge. Discover their use of Strategy, the last independent BI tool, with open data formats to avoid vendor lock-in. Get insights into delivering open and sovereign cloud solutions for their business lines and the EU, ensuring data independence and scalability.

Disconnected applications slow operations, impacting everything from customer support to case resolution. Eliminating outdated processes is essential for peak performance.

After extensive market research and comparing all of the biggest players, Enate chose SnapLogic to integrate disparate systems, eliminating manual errors and freeing resources for strategic work.

With a unified view, SnapLogic's AI capabilities support Enate’s Business Orchestration Automation Technology (BOAT), helping service providers operate efficiently, reduce complexity, and deliver exceptional customer results.

There is no value from AI without a Data Strategy. AI hallucinations are a significant risk in delivering ROI across the enterprise. Stardog’s knowledge graph-powered agentic architecture delivers an AI-ready data foundation with a semantic layer that provides facts and grounding needed to eliminate hallucinations. Learn why traditional Retrieval-Augmented Generation and straight Text-to-SQL approaches can be insufficient and how you can broaden AI's access to diverse and dense data and ensure timely, secure, and, most importantly, hallucination-free answers from your own data.
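As a generic illustration of the grounding idea (a minimal sketch, not Stardog's product or API), an answer can be restricted to facts held in an explicit store, with the system declining rather than guessing when no supporting fact exists:

```python
# Hypothetical sketch: grounding answers in an explicit fact store (a tiny
# in-memory "knowledge graph") instead of relying on free-text retrieval alone.
# Illustrative only; it does not use Stardog or any real product API.

FACTS = [
    ("Policy-123", "insured_value_eur", "2_500_000"),
    ("Policy-123", "region", "EMEA"),
    ("Policy-456", "insured_value_eur", "900_000"),
]

def lookup(subject: str, predicate: str) -> str | None:
    """Return the stored object for (subject, predicate), or None if absent."""
    for s, p, o in FACTS:
        if s == subject and p == predicate:
            return o
    return None

def grounded_answer(subject: str, predicate: str) -> str:
    """Answer only from stored facts; refuse rather than guess."""
    value = lookup(subject, predicate)
    if value is None:
        return f"No fact recorded for {subject} / {predicate}."
    return f"{subject} {predicate} = {value} (source: fact store)"

if __name__ == "__main__":
    print(grounded_answer("Policy-123", "insured_value_eur"))
    print(grounded_answer("Policy-789", "region"))  # unknown -> declines
```

The point of the sketch is the refusal path: when the semantic layer has no supporting fact, the system says so instead of letting the model improvise.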

Today, we’re joined by Tom Lavery, CEO and Founder of Jiminny, a conversation intelligence platform that captures and analyzes your critical go-to-market insights with AI. We talk about:

- Getting value from unstructured data
- How quickly SaaS subscription businesses should push to be profitable
- Trade-offs between product-led and sales-led growth
- Racing to be the market leader
- Dangers of focusing strictly on the short-term

As organizations race to harness agentic AI, the challenge is not just prototyping but picking the right use cases and confidently deploying them. Discover how DataRobot is helping organizations like BMW, DHL and VidaCaixa on their journey, and learn the key considerations for developing, deploying, and governing agentic AI:

- Picking the right tools and techniques: Knowing what frameworks and models to use when.
- Streamlining AI Infrastructure: How to efficiently manage and scale your AI infrastructure.
- Examples demonstrating agentic AI in action.

As enterprises race to unlock AI, many face barriers like poor metadata and weak governance. In this session, Rebecca O’Kill (CDAO of Axis Capital), Tim Gasper, and Juan Sequeda share how AI is not just the outcome of governance—it’s the incentive. Framing AI as the “carrot” motivates adoption of governance as a strategic enabler. Learn how AI-powered governance, data marketplaces, and knowledge graphs together provide context, drive smarter metadata, and enable impactful AI use cases like underwriting agents that require structured and unstructured data.

As data becomes the lifeblood of modern enterprises, protecting it must be seamless, scalable and strategic. Discover how leading organisations use automation, tokenisation and AI to embed privacy from ingestion to sharing. Learn how built-in protection boosts breach resilience, automation cuts compliance costs, and controlled sharing enables monetisation. Real-world use cases show how AI and policy-based controls make privacy a driver—not a blocker—of faster, smarter decisions.
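As a rough sketch of the tokenisation idea (generic and vendor-neutral; real deployments add key management, policy enforcement, and auditing), sensitive values can be swapped for random tokens at ingestion, with the mapping held in a restricted vault so downstream analytics never see the raw data:

```python
# Generic tokenisation sketch: replace sensitive values with random tokens at
# ingestion and keep the token -> value mapping in a restricted "vault".
import secrets

class TokenVault:
    def __init__(self) -> None:
        self._forward: dict[str, str] = {}   # raw value -> token
        self._reverse: dict[str, str] = {}   # token -> raw value

    def tokenize(self, value: str) -> str:
        """Return a stable token for a raw value, minting one if needed."""
        if value in self._forward:
            return self._forward[value]
        token = "tok_" + secrets.token_hex(8)
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Only callable inside the vault's trust boundary."""
        return self._reverse[token]

vault = TokenVault()
record = {"customer_email": "jane@example.com", "basket_value": 42.50}
safe_record = {**record, "customer_email": vault.tokenize(record["customer_email"])}
print(safe_record)   # analytics sees the token, never the raw email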

A heightened need for trust in our data, its lineage, and its use has propelled master data governance in particular from a nice-to-have to a critical foundational pillar of D&A and AI initiatives. This session covers key considerations across people, process, and technology in running a successful MDM program that delivers tangible business value and sustainable governance.

Implementing AI governance can be challenging; navigating it in the public sector is particularly so, with a focus on risk often overshadowing value and benefits. This roundtable will explore practices for overcoming these challenges and achieving success in AI governance, with a specific focus on public sector aspects including data, regulations, and workforce.

CDAOs and AI leaders often struggle to get started with GenAI. Attend this session to understand the first critical components you need to build or buy: data, AI engineering tools, a search and retrieval system, the application, and the right types of models. With these building blocks, you can build several working GenAI prototypes to help you prove the value and justify further investments.
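As a minimal sketch of how those building blocks fit together, with a placeholder `call_llm` function standing in for whichever model you build or buy, a first prototype can be little more than some documents, a naive retriever, and a prompt that passes retrieved context to the model:

```python
# Minimal GenAI prototype sketch: data + a naive retriever + a model call.
# `call_llm` is a placeholder, not a real API.

DOCUMENTS = [
    "Refunds are processed within 14 days of the return being received.",
    "Premium support is available to enterprise customers on weekdays.",
    "Data exports can be scheduled daily, weekly, or monthly.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (hosted API or self-hosted model)."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How long do refunds take?"))
```

Swapping the placeholder for a real model and the keyword retriever for a vector index is the usual path from this kind of prototype to something production-worthy.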

While AI dominates the media, surveys show that about half of organizations still struggle with basic data science and fear they will fall behind. This fear can hinder progress, especially in certain industries. However, organizations don't have to choose between AI and foundational data science: by combining predictive and prescriptive analytics, they can leverage the best of both worlds to create better solutions.
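As a toy illustration of combining the two (hypothetical numbers throughout), a predictive step can forecast demand from history while a prescriptive step searches for the order quantity that maximises expected profit under that forecast:

```python
# Toy predictive + prescriptive sketch with made-up numbers.
# Predictive: a simple forecast of demand from history.
# Prescriptive: choose the order quantity that maximises expected profit.
import statistics

history = [120, 135, 128, 140, 132]          # past weekly demand (units)
forecast = statistics.mean(history)          # predictive step
spread = statistics.pstdev(history)

unit_cost, unit_price = 4.0, 10.0

def expected_profit(order_qty: float) -> float:
    """Crude expectation over a few demand scenarios around the forecast."""
    scenarios = [forecast - spread, forecast, forecast + spread]
    profits = [unit_price * min(order_qty, d) - unit_cost * order_qty for d in scenarios]
    return sum(profits) / len(profits)

best_qty = max(range(100, 161), key=expected_profit)   # prescriptive step
print(f"Forecast demand is about {forecast:.0f} units; order {best_qty} units")
```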

To support its Digital First mission, the BBC is transforming into a data product organisation. This session will explore how the BBC's data strategy is driving a cultural and organisational shift that is evolving its data architecture and embedding data capabilities company-wide. Discover the BBC's approach to developing certified, shareable data products that strengthen governance, enable self-service analytics, and establish a foundation for responsible AI use.

Five years ago, Rolls-Royce had no dedicated data science capabilities. Today, over 7,000 users—from coders to citizen data scientists—actively leverage AI. This transformation extends beyond technology, emphasizing AI democratization, a data-driven culture, and responsible scaling. This session explores key strategies for enterprise-wide adoption, from use case ideation to realisation of significant AI value.