talk-data.com

Topic: AI/ML (Artificial Intelligence/Machine Learning)
Tags: data_science, algorithms, predictive_analytics
9014 activities tagged

Activity Trend: peak of 1532 activities/quarter, 2020-Q1 to 2026-Q1 [trend chart]

Activities: 9014 activities · Newest first

In this second part of my three-part series (catch Part I via episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a “many-purpose” AI and analytics solution that “does more,” but is misaligned with how users’ most important work actually gets done.

To unpack this, I explain the concept of user experience (UX) outcomes, and how building your solution to enable these outcomes may be a prerequisite for getting sales traction, and for your customer to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users’ current workflows (and future ones, as agentic AI changes their jobs). Finally, I talk about how approaching product development as small “bets” helps you build small and learn fast, so you can accelerate value creation.

Highlights / Skip to:

Continuing the journey: designing for users, workflows, and tasks (00:32)
How UX impacts sales, not just usage and adoption (02:16)
Understanding how you can leverage users’ frustrations and perceived risks as fuel for building an indispensable data product (04:11)
Definition of a UX outcome (07:30)
Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04)
Spotting friction and solving the right customer problems first (15:34)
Collecting actionable user feedback (20:02)
Moving users along the scale from frustration to satisfaction to delight (23:04)
Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)

Quotes from Today’s Episode

One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual ‘it’ is that we’re trying to solve.

People rush into building the fix, pitching the feature, or drafting the roadmap before they’ve taken even a moment to define what the user keeps tripping over in their day-to-day environment.

And until you slow down and articulate that shared, observable frustration, you’re basically operating on vibes and assumptions instead of behavior and reality.

What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work.

Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier, because you’re no longer debating abstractions; you’re working against the same measurable anchor.

And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.

__

Want people to pay for your product? Solve an observable problem—not a vague information or data problem. What do I mean?

“When you’re trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones.

Interpretive phrasing like ‘they’re overwhelmed’ or ‘they don’t trust the data’ feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem.

If you can’t film it happening, if you can’t watch the behavior occur in real time, then you don’t actually have a problem definition you can design around.

Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can’t articulate why.

Those concrete behaviors are what allow teams to converge and say, ‘Yes, that’s the thing, that is the friction we agree must change,’ and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort.

And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that’s grounded in reality instead of theory."
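To make “observable” fully concrete, here is a minimal sketch of how a team might count such friction signals from a product event log. The event names (“query_run”, “copy_value”) and thresholds are invented for illustration; the episode doesn’t prescribe any particular instrumentation.

```python
# Sketch: counting observable friction signals in an event log, so the team
# can agree on behaviors it could literally watch happen. Event names and
# thresholds below are illustrative assumptions, not from the episode.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    session_id: str
    name: str  # e.g. "query_run", "copy_value", "screen_switch"

def friction_signals(events: list[Event], rerun_threshold: int = 5) -> dict:
    """Flag sessions whose behavior matches an agreed, filmable friction."""
    per_session = Counter((e.session_id, e.name) for e in events)
    reruns = {s for (s, n), c in per_session.items()
              if n == "query_run" and c >= rerun_threshold}
    copies = {s for (s, n), c in per_session.items()
              if n == "copy_value" and c >= 3}
    return {
        "sessions_rerunning_queries": len(reruns),
        "sessions_copy_pasting_across_tools": len(copies),
    }
```

Numbers like these give everyone a shared, verifiable anchor: “sessions that re-ran the same query five or more times” can be watched and counted, unlike “users don’t trust the data.”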

__

One of the reasons that measuring the usability/utility/satisfaction of your product’s UX might seem hard is that you don’t have a baseline definition of how satisfactory (or not) the product is right now. As such, it’s very hard to tell if you’re just making product changes—or you’re making improvements that might make the product worth paying for at all, worth paying more for, or easier to buy.

"It’s surprisingly common for teams to claim they’re improving something when they’ve never taken the time to document what the current state even looks like. If you want to create a meaningful improvement, something a user actually feels, you need to understand the baseline level of friction they tolerate today, not what you imagine that friction might be.

Establishing a baseline is not glamorous work, but it’s the work that prevents you from building changes that make sense on paper but do nothing to the real flow of work. When you diagram the existing workflow, when you map the sequence of steps the user actually takes, the mismatches between your mental model and their lived experience become crystal clear, and the design direction becomes far less ambiguous.

That act of grounding yourself in the current state allows every subsequent decision (prioritizing fixes, determining scope, measuring progress) to be aligned with reality rather than assumptions.

And without that baseline, you risk designing solutions that float in conceptual space, disconnected from the very pains you claim to be addressing."

__

Prototypes are a great way to learn—if you’re actually treating them as a means to learn, and not a product you intend to deliver regardless of the feedback customers give you. 

"People often think prototyping is about validating whether their solution works, but the deeper purpose is to refine the problem itself.

Once you put even a rough prototype in front of someone and watch what they do with it, you discover the edges of the problem more accurately than any conversation or meeting can reveal.

Users will click in surprising places, ignore the part you thought mattered most, or reveal entirely different frictions just by trying to interact with the thing you placed in front of them. That process doesn’t just improve the design, it improves the team’s understanding of which parts of the problem are real and which parts were just guesses.

Prototyping becomes a kind of externalization of assumptions, forcing you to confront whether you’re solving the friction that actually holds back the flow of work or a friction you merely predicted.

And every iteration becomes less about perfecting the interface and more about sharpening the clarity of the underlying problem, which is why the teams that prototype early tend to build faster, with better alignment, and far fewer detours."

__

Most founders and data people tend to measure UX quality by “counting usage” of their solution: tracking usage stats, session analytics, and the like. The problem with this is that it tells you nothing useful about whether people are satisfied (“meets spec”) or delighted (“a product they can’t live without”). These are product metrics, but they don’t reflect how people feel.

There are better measurements to use for evaluating users’ experience that go beyond “willingness to pay.” 

Payment is great, but in B2B products, buyers aren’t always users—and we’ve all bought something based on the promise of what it would do for us, but the promise fell short.
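One concrete way to measure how people feel, rather than how often they click, is a standardized perception instrument. The episode doesn’t prescribe a specific one; as an illustration, here’s a short sketch that scores the well-known ten-item System Usability Scale (SUS), with made-up answers.

```python
# Sketch: scoring the System Usability Scale (SUS), a standard 10-item survey
# of perceived usability -- a "how it feels" measure, unlike usage counts.
# Shown as one well-known option; the episode does not mandate SUS.

def sus_score(responses: list[int]) -> float:
    """responses: ten answers on a 1-5 agreement scale, in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Items 1,3,5,7,9 are positively worded; 2,4,6,8,10 negatively worded.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # rescale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # made-up answers -> 85.0
```

Tracked release over release, a score like this provides the baseline and trend line that raw usage analytics can’t.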

"In B2B analytics and AI products, the biggest challenge isn’t complexity, it’s ambiguity around what outcome the product is actually responsible for changing.

Teams often define success in terms of internal goals like ‘adoption,’ ‘usage,’ or ‘efficiency,’ but those metrics don’t tell you what the user’s experience is supposed to look like once the product is working well.

A product tied to vague business outcomes tends to drift because no one agrees on what the improvement should feel like in the user’s real workflow.

What you want are visible, measurable, user-centric outcomes: outcomes that describe how the user’s behavior or experience will change once the solution is in place, down to the concrete actions they’ll no longer need to take.

When you articulate outcomes at that level, it forces the entire organization to align around a shared target, reduces the scope bloat that normally plagues enterprise products, and gives you a way to evaluate whether you’re actually removing friction rather than just adding more layers of tooling.

And ironically, the clearer the user outcome is, the easier it becomes to achieve the business outcome, because the product is no longer floating in abstraction; it’s anchored in the lived reality of the people who use it."

Links

Listen to part one: Episode 182
Schedule a Design-Eyes Assessment with me and get clarity now.

When a team of insurance brokers receives more than 500 emails per day from clients, it quickly becomes difficult to keep things organized and make sense of it all. That’s where Libero comes in: a solution that summarizes and classifies all incoming emails. Beyond that, Libero also categorizes them properly within the client database (CRM). All of this, which brokers and their assistants used to do manually, is now fully automated, freeing up a tremendous amount of time every day.

Through this presentation, we want to bring you into the heart of Libero’s design and the key decisions made during its development: its architecture, the challenges, the specific requirements of the insurance industry, the solution’s evolution, and more. And most importantly, we want to open up a discussion with you, the community, around this question: how do we build and deploy AI solutions that can actually be maintained and evolve over time?

This question is more important now than ever, with the rapid evolution of AI solutions, products, and services that hit the market each week. The rhythm of innovation (and sometimes fluff…) is astonishing! How do we continue building solutions that stay relevant and keep delivering business value?

As a service provider, iuvo-ai is constantly balancing innovation with pragmatism. Every client has a different level of technical maturity, infrastructure, and internal talent. In that reality, the real challenge isn’t just getting a solution to work; it’s making sure it can live on. How do we design architectures that are flexible enough to evolve as the ecosystem changes, but simple enough for our clients to own and maintain? How do we make decisions that reduce friction when the next API version drops or when the internal IT team needs to take over? Those are the questions we wrestle with every day when bringing AI into production, and we’d love to exchange ideas and lessons learned with you 🙂
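The abstract doesn’t share Libero’s implementation, but the summarize-and-classify step it describes could be sketched roughly as follows. The category list, prompt, model name, and use of the OpenAI chat API are all assumptions made for illustration.

```python
# Hypothetical sketch of a Libero-style triage step: summarize an incoming
# client email and pick a CRM category. Categories, prompt, and model are
# invented; the talk does not describe the production implementation.
import json
from openai import OpenAI  # pip install openai

CATEGORIES = ["claim", "policy_change", "quote_request", "billing", "other"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_email(subject: str, body: str) -> dict:
    """Return a one-sentence summary and a CRM category for one email."""
    prompt = (
        "Summarize this insurance client email in one sentence, then choose "
        f"exactly one category from {CATEGORIES}. "
        'Reply as JSON: {"summary": "...", "category": "..."}.\n\n'
        f"Subject: {subject}\n\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(response.choices[0].message.content)
```

The maintainability questions the speakers raise live exactly around a snippet like this: model names churn, APIs version, and the prompt becomes a de facto business rule someone must own.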

Currently, Sinsay — a brand of LPP — is in a period of rapid expansion. Every new Sinsay store starts with a question: Where should we open next? In this talk, I’ll share how our team at Silky Coders uses data, maps, and machine learning to answer that question across Poland and other European countries. Discover how our application supports business decision-making.
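The talk doesn’t reveal Silky Coders’ actual model, but the general pattern (train on how existing stores perform, then score candidate sites) can be sketched like this; the feature names and model choice are illustrative assumptions.

```python
# Generic sketch of ML-assisted site selection: learn from existing stores'
# results, then rank candidate locations. Features and model are invented;
# the talk does not disclose the production setup.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["population_5km", "avg_income", "competitor_count", "foot_traffic"]

def rank_candidates(existing: pd.DataFrame, candidates: pd.DataFrame) -> pd.DataFrame:
    """existing: one row per open store, FEATURES plus 'annual_revenue'.
    candidates: one row per potential site, FEATURES plus 'site_id'."""
    model = GradientBoostingRegressor(random_state=0)
    model.fit(existing[FEATURES], existing["annual_revenue"])
    scored = candidates.copy()
    scored["predicted_revenue"] = model.predict(candidates[FEATURES])
    return scored.sort_values("predicted_revenue", ascending=False)
```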

Brought to You By:

• Statsig: the unified platform for flags, analytics, experiments, and more. Statsig is helping make the first-ever Pragmatic Summit a reality. Join me and 400 other top engineers and leaders on 11 February in San Francisco for a special one-day event. Reserve your spot here.
• Linear: the system for modern product development. Engineering teams today move much faster, thanks to AI. Because of this, coordination increasingly becomes a problem. This is where Linear helps fast-moving teams stay focused. Check out Linear.

As software engineers, what should we know about writing secure code?

Johannes Dahse is the VP of Code Security at Sonar and a security expert with 20 years of industry experience. In today’s episode of The Pragmatic Engineer, he joins me to talk about what security teams actually do, what developers should own, and where real-world risk enters modern codebases. We cover dependency risk, software composition analysis, CVEs, dynamic testing, and how everyday development practices affect security outcomes. Johannes also explains where AI meaningfully helps, where it introduces new failure modes, and why understanding the code you write and ship remains the most reliable defense. If you build and ship software, this episode is a practical guide to thinking about code security under real-world engineering constraints.

Timestamps:

(00:00) Intro
(02:31) What is penetration testing?
(06:23) Who owns code security: devs or security teams?
(14:42) What is code security?
(17:10) Code security basics for devs
(21:35) Advanced security challenges
(24:36) SCA testing
(25:26) The CVE Program
(29:39) The State of Code Security report
(32:02) Code quality vs security
(35:20) Dev machines as a security vulnerability
(37:29) Common security tools
(42:50) Dynamic security tools
(45:01) AI security reviews: what are the limits?
(47:51) AI-generated code risks
(49:21) More code: more vulnerabilities
(51:44) AI’s impact on code security
(58:32) Common misconceptions of the security industry
(1:03:05) When is security “good enough?”
(1:05:40) Johannes’s favorite programming language

The Pragmatic Engineer deepdives relevant for this episode:

• What is Security Engineering?
• Mishandled security vulnerability in Next.js
• Okta Schooled on Its Security Practices

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
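As a concrete taste of the software composition analysis topic discussed in the episode, here is a small sketch that checks one pinned dependency against the public OSV vulnerability database. It’s a generic illustration of what SCA tooling automates at scale, not a tool mentioned in the episode.

```python
# Sketch: query the public OSV database for known vulnerabilities affecting a
# pinned dependency -- the kind of lookup SCA tools run across a whole lockfile.
# Generic illustration; not a tool from the episode.
import requests  # pip install requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version,
              "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

# An old pin should surface advisory IDs (GHSA-*/PYSEC-*); a clean one returns [].
print(known_vulns("jinja2", "2.4.1"))
```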

In this episode, Ciro Greco (Co-founder & CEO, Bauplan) joins me to discuss why the future of data infrastructure must be "Code-First" and how this philosophy accidentally created the perfect environment for AI Agents.

We explore why the "Modern Data Stack" isn't ready for autonomous agents and why a programmable lakehouse is the solution. Ciro explains that while we trust agents to write code (because we can roll it back), allowing them to write data requires strict safety rails.

He breaks down how Bauplan uses "Git for Data" semantics - branching, isolation, and transactionality - to provide an air-gapped sandbox where agents can safely operate without corrupting production data. Welcome to the future of the lakehouse.

Bauplan: https://www.bauplanlabs.com/
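To make the “Git for Data” idea tangible, here is a toy, in-memory illustration of the semantics Ciro describes: the agent writes only to an isolated branch, and changes are promoted to production only if validation passes. This is invented pseudocode for the pattern, not Bauplan’s actual SDK.

```python
# Toy illustration of "Git for Data" semantics for agents: branch isolation
# plus an all-or-nothing merge. NOT Bauplan's real API -- just the pattern.
import copy

class ToyLakehouse:
    def __init__(self):
        # "main" is production; each ref maps table names to rows.
        self.refs = {"main": {"orders": [{"id": 1, "amount": 120}]}}

    def create_branch(self, name, from_ref="main"):
        self.refs[name] = copy.deepcopy(self.refs[from_ref])  # isolation

    def merge(self, branch, into="main"):
        self.refs[into] = self.refs.pop(branch)  # transactional promote

lake = ToyLakehouse()
lake.create_branch("agent/enrich")

# The agent mutates only its branch; "main" stays untouched throughout.
for row in lake.refs["agent/enrich"]["orders"]:
    row["amount_eur"] = round(row["amount"] * 0.92, 2)

valid = all("amount_eur" in r for r in lake.refs["agent/enrich"]["orders"])
if valid:
    lake.merge("agent/enrich")      # validated changes reach production
else:
    del lake.refs["agent/enrich"]   # bad output is discarded with the sandbox
```

The safety rail is structural: even a misbehaving agent can only corrupt a disposable branch, which mirrors why we already trust agents with code (we can roll it back) but hesitate to let them write production data directly.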

Hit replay on one of the most thought-provoking Agentic AI conversations on Making Data Simple. GTM Account Director Megan Gallagher makes the case for Agentic AI from the Maven AGI front lines, where AI agents stop following rigid decision trees and start acting with real autonomy over enterprise workflows. “We’re still living like everything is deterministic,” Megan argues, “but this new generation of agents is inherently generative and predictive.” In this replay, she unpacks what that shift means for smaller specialized models, using real enterprise data, rethinking “assistant vs person,” and how to get started without boiling the ocean. If you want to understand how Agentic AI moves from slideware to shipped value, this is the episode to queue up again.

01:30 All Great Podcasts Start with Drinks
05:27 Maven AGI
09:13 Smaller Models!
10:50 Why Maven AGI
12:04 The Secret Sauce or Use Case
15:13 Typical Client Persona
20:31 Using Enterprise Data
26:19 But AGI, Really?
30:12 Assistant or Person?
39:06 What's Next?
40:28 My Thoughts on Getting Started
46:30 The AI Example
49:30 The Maven AGI Pitch
53:23 Learning

Maven AGI: https://www.mavenagi.com/
Megan's LinkedIn: https://www.linkedin.com/in/megfgallagher/
Al's LinkedIn: https://www.linkedin.com/in/al-martin-ku/

#AgenticAI #FutureOfAI #MakingDataSimple #MavenAGI #AIAgents #EnterpriseAI #CustomerExperience #AIInProduction #PodcastReplay

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Power BI for Finance

Build effective data models and reports in Power BI for financial planning, budgeting, and valuations with practical templates, logic, and step-by-step guidance. Free with your book: DRM-free PDF version + access to Packt's next-gen Reader (email sign-up and proof of purchase required).

Key Features:

Engineer optimal star schema data models for financial planning and analysis
Implement common financial logic, calendars, and variance calculations
Create dynamic, formatted reports for income statements, balance sheets, and cash flow
Purchase of the print or Kindle book includes a free PDF eBook

Book Description:

Martin Kratky brings his global experience of over 20 years as co-founder of Managility and creator of Acterys to empower CFOs and accountants with Power BI for Finance through this hands-on guide to streamlining and enhancing financial processes. Starting with the foundation of every effective BI solution, a well-designed data model, the book shows you how to structure star schemas and integrate common financial data sources like ERP and accounting systems. You’ll then learn to implement key financial logic using DAX and M, covering calendars, KPIs, and variance calculations. The book offers practical advice on creating clear and compliant financial reports, such as income statements, balance sheets, and cash flows, with visual design and formatting best practices.

With dedicated chapters on advanced workflows, you’ll learn how to handle multi-currency setups, perform group consolidations, and implement planning models like rolling forecasts, annual budgets, and sales and operations planning (S&OP). As you advance, you’ll gain insights from real-world case studies covering company valuations, Excel integration, and the use of write-back methods with Dynamics Business Performance Planning and Acterys. The concluding chapters highlight how AI and Copilot enhance financial analytics.

What you will learn:

Apply multi-currency handling and group consolidation techniques in Power BI
Model discounted cash flow and company valuation scenarios
Design and manage write-back workflows with Dynamics BPP and Acterys
Integrate Excel and Power BI using live connections and cube formulas
Utilize AI, Copilot, and LLMs to enhance automation and insight generation
Create complete finance-focused dashboards for sales and operations planning

Who this book is for:

This book is for finance professionals, including CFOs, FP&A managers, controllers, and certified accountants, who want to enhance reporting, planning, and forecasting using Power BI. Basic familiarity with Power BI and financial concepts is recommended to get the most out of this hands-on guide.
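To give a flavor of the variance logic the book implements, here is a small sketch of an actual-vs-budget variance calculation. It is written in pandas purely for brevity; the book does the equivalent in DAX and M inside Power BI, and the figures below are made up.

```python
# Sketch of actual-vs-budget variance, a core FP&A calculation the book
# covers. Shown in pandas for brevity; the book uses DAX/M. Numbers invented.
import pandas as pd

df = pd.DataFrame({
    "account": ["Revenue", "COGS", "Opex"],
    "actual":  [1_250_000, -540_000, -310_000],
    "budget":  [1_200_000, -500_000, -330_000],
})
df["variance"] = df["actual"] - df["budget"]
# Percentage relative to the budget's magnitude, a common FP&A convention.
df["variance_pct"] = (df["variance"] / df["budget"].abs() * 100).round(1)
print(df)
```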

Help us become the #1 Data Podcast by leaving a rating & review! We are 67 reviews away!

Can You Pass This Data Analyst Interview?

🖥️ Build your own app with Replit: https://replit.com/refer/AveryData
👔 Mock Interview Platform: https://interviewsimulator.io
💌 Join 30k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter
🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com//interviewsimulator

⌚ TIMESTAMPS
00:28 Question 1
02:54 Question 2
04:53 Question 3
08:08 Where to do Interview Practice
08:54 How to Build Cool Apps Like This

🔗 CONNECT WITH AVERY
🎥 YouTube Channel
🤝 LinkedIn
📸 Instagram
🎵 TikTok
💻 Website

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get:
✅ A discount on your enrollment
🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Today, we’re joined by Danny Tomsett, CEO at UneeQ, the leader in digital human technology. We talk about:

The growth of AI brand ambassadors
The greatest irony of professional corporate life: how we train
Barriers to adopting digital human training
How simulations can help develop soft skills and deal with high-stress situations
The problems that needed to be solved to create incredible human-like experiences

Are Vision-Language Models Ready for Physical AI? Humans easily understand how objects move, rotate, and shift, while current AI models that connect vision and language still make mistakes in what seem like simple situations: deciding “left” versus “right” when something is moving, recognizing how perspective changes, or keeping track of motion over time. To reveal these kinds of limitations, we created VLM4D, a testing suite made up of real-world and synthetic videos, each paired with questions about motion, rotation, perspective, and continuity. When we put modern vision-language models through these challenges, they performed far below human levels, especially when visual cues must be combined or the sequence of events must be maintained. But there is hope: new methods such as reconstructing visual features in 4D and fine-tuning focused on space and time show noticeable improvement, bringing us closer to AI that truly understands a dynamic physical world.
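The abstract doesn’t include the benchmark’s code; a minimal sketch of an evaluation loop of this shape (multiple-choice questions about motion in a clip, scored for accuracy) might look like the following, with the model call left as a hypothetical stub.

```python
# Minimal sketch of a VLM4D-style evaluation loop: pose a multiple-choice
# spatiotemporal question about a video and measure accuracy. `ask_vlm` is a
# hypothetical stub; this is not the benchmark's actual harness.
from dataclasses import dataclass

@dataclass
class Item:
    video_path: str
    question: str        # e.g. "Is the car turning left or right?"
    choices: list[str]   # e.g. ["left", "right"]
    answer: str

def ask_vlm(video_path: str, question: str, choices: list[str]) -> str:
    """Stub: replace with a real vision-language model call."""
    return choices[0]

def accuracy(items: list[Item]) -> float:
    correct = sum(
        ask_vlm(i.video_path, i.question, i.choices) == i.answer
        for i in items
    )
    return correct / len(items)
```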
