talk-data.com

Topic: Agile/Scrum

Tags: project_management, software_development, methodology


Activity Trend: peak of 163 activities per quarter, 2020-Q1 through 2026-Q1

Activities

Filtered by: Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s ok! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically, today I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SaaS analytics or AI product(s), today I’m going to tell you when you should NOT get help!

Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin. 

Finally, analytical thinkers (especially data product managers, it seems) often want to quantify all forms of value they produce in hard monetary units. So in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-) Ready?

Highlights/ Skip to:

(1:52) Going for short, easy wins
(4:29) When you think you have good design sense/taste
(7:09) The impending changes coming with GenAI
(11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible
(15:36) Agile and process FTW?
(18:59) UX design for and with platform products
(21:14) The risk of involving designers who don’t understand data, analytics, AI, or your complex domain considerations
(30:09) Designing after the ML models have been trained—and it’s too late to go back
(34:59) Not tapping professional design help when your user base is small, and you have routine access and exposure to them
(40:01) Explaining the value of UX design investments to your stakeholders when you don’t 100% control the budget or decisions

Quotes from Today’s Episode

“It is true that most impactful design often creates more product and engineering work because humans are messy. While there sometimes are these magic, small GUI-type changes that have big impact downstream, the big-picture value of UX can be lost if you’re simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you’re playing inside your team: are you working to produce UX and business outcomes, or shipping outputs on time?” (3:18)

“If you’re building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We’ve all seen the challenges of this with LLMs, [when] you’re unable to get it to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, ‘Is the answer not in there?’ or ‘Am I not prompting it correctly?’ If you think that most of this is just a technical data science problem, then don’t bother to invest in UX design work…” (9:52)

“Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person’s life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power – especially in an enterprise context. Working with a designer who solely insists on simplifying everything at all costs, regardless of your stated business outcome goals, is a red flag—and a reason not to invest in UX design—at least with them!” (12:28)

“I think what an analytics product manager [or] an AI product manager needs to accept is there are other ways to measure the value of UX design’s contribution to your product and to your organization. Let’s say that you have a mission-critical internal data product, it’s used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel like those things… What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there’s a lot of value in that.” (43:33)

“Let’s think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We’re spending money on something that most likely will have zero economic value this year because we’re actually trying not to have to file claims. Yet this industry does very well because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is greater than whatever the insurance plan costs. If your solution can build feelings of confidence and security, what is that worth? Does ‘hard to measure precisely’ necessarily mean ‘low value’?” (47:26)

Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.

I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.

In our chat, we covered:

Ben's career studying human-computer interaction and computer science. (0:30)
'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences (a sketch follows this list). (21:08)
Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems, and why XAI matters. (30:34)
Ben's upcoming book on human-centered AI. (35:55)
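One of the topics above, A/B testing and controlled experiments, lends itself to a concrete illustration. Below is a minimal sketch (not from the episode; the counts are invented) of how a team might evaluate an A/B test of two interface variants with a standard two-proportion z-test, using only the Python standard library:

```python
# Minimal sketch of evaluating an A/B test of two UI variants with a
# two-proportion z-test. All numbers are invented for illustration.
from math import sqrt
from statistics import NormalDist

successes_a, n_a = 180, 400   # variant A: users who completed the task
successes_b, n_b = 214, 400   # variant B

p_a, p_b = successes_a / n_a, successes_b / n_b
p_pool = (successes_a + successes_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

print(f"completion: A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}, p={p_value:.4f}")
```

The point of a controlled experiment like this is the one Ben makes: grounding design decisions in measured user behavior rather than opinion.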

Resources and Links:

People-Centered Internet: https://peoplecentered.net/
Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
Partnership on AI: https://www.partnershiponai.org/
AI Incident Database: https://www.partnershiponai.org/aiincidentdatabase/
University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
Ben on Twitter: https://twitter.com/benbendc

Quotes from Today’s Episode

The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)

The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)
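Ben’s hiring-algorithm example is about measurable outcome bias. As one concrete illustration (hypothetical, not something discussed in the episode), here is a minimal selection-rate check based on the well-known “four-fifths rule” from US EEOC guidance; all numbers are invented:

```python
# Hypothetical sketch: a simple check for outcome bias in a hiring model,
# using the selection-rate ratio (the "four-fifths rule" from US EEOC
# guidance). All data here is invented for illustration.
def selection_rate(decisions: list[int]) -> float:
    """Fraction of candidates the model approved (1 = hire, 0 = reject)."""
    return sum(decisions) / len(decisions)

# Model decisions grouped by a protected attribute (toy numbers).
decisions_men = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]    # 70% selected
decisions_women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

ratio = selection_rate(decisions_women) / selection_rate(decisions_men)
print(f"selection rate ratio: {ratio:.2f}")  # < 0.8 flags disparate impact
if ratio < 0.8:
    print("warning: possible disparate impact; audit the training data")
```

A check like this is only a starting point, but it is the kind of testing and evaluation Ben says HCI tools are effective at building into these systems.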

Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)

There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)

Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)

Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps—this is like the Amazon checkout process, a seven-step process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
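For readers unfamiliar with the post-hoc methods Ben names, here is a minimal sketch of what a Shapley-based post-hoc explanation looks like in code. It assumes the open-source shap library and a toy scikit-learn model; the model, features, and data are stand-ins, not anything from the episode:

```python
# Minimal sketch of a post-hoc (after-the-fact) explanation with SHAP.
# The model and data are toy stand-ins; nothing here comes from the episode.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # e.g., applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # e.g., approve/deny label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Post-hoc step: the model has already decided; now we ask "what happened?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])     # explain one decision

# Each value is a feature's contribution to this one prediction -- the raw
# material a UI would still have to turn into something comprehensible.
print(shap_values)
```

The raw attributions this prints are exactly the “Show it to me” problem Ben raises: they still need a comprehensible user interface before they count as an explanation.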

Today I’m chatting with Dr. Sebastian Klapdor, Chief Data Officer for Vista. Sebastian has developed and grown a successful Data Product Management team at Vista, and it all began with selling his vision to the rest of the executive leadership. In this episode, Sebastian explains what that process was like and what he learned. Sebastian shares valuable insights on how he implemented a data product orientation at Vista, what makes a good data product manager, and why technology usage isn’t the only metric that matters when measuring success. He also shares what he would do differently if he had to do it all over again.

Highlights/ Skip to:

How Sebastian defines a data product (01:48)
Brian asks Sebastian about the change management process in leadership when implementing a data product approach (07:40)
The three dimensions that Sebastian and his team measure to determine adoption success (10:22)
Sebastian shares the financial results of Vista adopting a data product approach (12:56)
The size and scale of the data team at Vista, and how their different roles ensure success (14:30)
Sebastian explains how Vista created and grew a team of 35 data product managers (16:47)
The skills Sebastian feels data product managers need to be successful at Vista (22:02)
Sebastian describes what he would do differently if he had to implement a data product approach at a company again (29:46)

Quotes from Today’s Episode

“You need to establish a culture, and that’s often the hardest part that takes the longest: to treat data as an asset, and not to treat it as a byproduct, but to treat it as a product and treat it as a valuable thing.” – Sebastian Klapdor (07:56)

“One source of data product managers is taking data professionals. So, you take data engineers, data scientists, or former analysts, and develop them into the role by coaching them [through] the product management skills from the software industry.” – Sebastian Klapdor (17:39)

“We went out there and we were hiring people in the market who were experienced [Product Managers]. But we also see internal people, actually grooming and growing into all of these roles, both from these 80 folks who have been around before, but also from other areas of Vista.” – Sebastian Klapdor (20:28)

“[Being a good Product Manager] comes back to the good old classics of collaborating, of being empathetic to where other people are at, their priorities, and understanding where [our] priorities fit into their bigger piece, and jointly aligning on what is valuable for Vista.” – Sebastian Klapdor (22:27)

“I think there’s nothing more detrimental than saying, ‘Yeah, sure, we can deliver things, and with data, it can do everything.’ And then you disappoint people and you don’t stick to your promises. … If you don’t stick to your promise, it will hurt you.” – Sebastian Klapdor (23:04)

“You don’t do the typical waterfall approach of solving business problems with data. You don’t do the approach where a data scientist tries to get some data, builds a model, and hands it over to a data engineer who should productionize it. And then the data engineer comes back and says certain features can’t be productionized because it’s very complex to get the data on a daily basis, or in real time. By doing [this work] in a data product team, you can actually work in Agile and you’re super fast building what we call a minimum lovable product.” – Sebastian Klapdor (26:15)

“That was the biggest learning … whom do we staff as data product managers? And what do we expect of a good data product manager? How does a career path look like? That took us a really long time to figure out.” – Sebastian Klapdor (30:18)

“We have a big, big, big commitment that we want to start staffing UX designers onto our [data] product teams.” - Sebastian Klapdor (21:12)

Links

Vista: https://vista.io
LinkedIn: https://www.linkedin.com/in/sebastianklapdor/
Vista Blog: https://vista.io/blog

Episode Description

In one of my past memos to my list subscribers, I addressed some questions about agile and data products. Today, I expound on each of these and share some observations from my consulting work. In some enterprise orgs, mostly outside of the software industry, agile is still new and perceived as a panacea. In reality, it can just become a factory for shipping features and outputs faster–with positive outcomes and business value being mostly absent. To increase the adoption of enterprise data products that have humans in the loop, it’s great to have agility in mind, but poor technology shipped faster isn’t going to serve your customers any better than what you’re doing now.

Here are the 10 reflections I’ll dive into on this episode: 

You can't project manage your way out of a [data] product problem. 

The more you try to deploy agile at scale, take the trainings, and hire special "agilists", the more you're going to tend to measure success by how well you followed the Agile process.

Agile is great for software engineering, but nobody really wants "software engineering" given to them. They do care about the perceived reality of your data product.

Run from anyone that tells you that you shouldn't ever do any design, user research, or UX work "up front" because "that is waterfall." 

Everybody else is also doing modified scrum (or modified _).

Marty Cagan talks about this a lot, but in short: while PMs (product managers) may own the backlog and priorities, what’s more important is that these PMs “own the problem” space, as opposed to owning features or being solution-centered.

Before Agile can thrive, you will need strong senior leadership buy-in if you're going to do outcome-driven data product work.

There's a huge promise in the word "agile." You've been warned. 

If you don't have a plan for how you'll do discovery work, defining clear problem sets and success metrics, and understanding customers' feelings, pains, needs, wants, and the like, Agile won't deliver much improvement for data products (probably).

Getting comfortable with shipping half-right, half-quality, half-done is hard. 

Quotes from Today’s Episode

“You can get lost in following the process and thinking that as long as we do that, we’re going to end up with a great data product at the end.” - Brian (3:16)

“The other way to define clear success criteria for data products and hold yourself accountable to those on the user and business side is to really understand: what does a positive outcome look like? How would we measure it?” - Brian (5:26)

“The most important thing is to know that the user experience is the perceived reality of the technology that you built. Their experience is the only reality that matters.” - Brian (9:22)

“Do the right amount of planning work upfront, have a strategy in place, make sure the team understands it collectively, and then you can do the engineering using agile.” - Brian (18:15)

“If you don’t have a plan for how you’ll do discovery work, defining clear problem sets and success metrics, and understanding customers’ feelings, pains, needs, wants, and all of that, then agile will not deliver increased adoption of your data products.” - Brian (36:07)

Links:

designingforanalytics.com: https://designingforanalytics.com
designingforanalytics.com/list: https://designingforanalytics.com/list

Today I’m talking about how to measure data product value from a user experience and business lens, and where leaders sometimes get it wrong. Today’s first question comes from my recent talk at the Data Summit conference, where an attendee asked how UX design fits into agile data product development. Additionally, I recently had a subscriber to my Insights mailing list ask how to measure adoption, utilization, and satisfaction of data products. So, we’ll jump into that juicy topic as well.

Answering these inquiries also got me on a related tangent about the UX challenges associated with abstracting your platform to support multiple, but often theoretical, user needs—and the importance of collaboration to ensure your whole team is operating from the same set of assumptions or definitions about success. I conclude the episode with the concept of “game framing” as a way to conceptualize these ideas at a high level. 

Key topics and cues in this episode include: 

An overview of the questions I received (0:45)
Measuring change once you’ve established a benchmark (7:45)
The challenges of working in abstractions (abstracting your platform to facilitate theoretical future user needs) (10:48)
The value of having shared definitions and understanding the needs of different stakeholders/users/customers (14:36)
The importance of starting from the “last mile” (19:59)
The difference between success metrics and progress metrics (24:31)
How measuring feelings can be critical to measuring success (29:27)
“Game framing” as a way to understand tracking progress and success (31:22)

Quotes from Today’s Episode

“Once you’ve got your benchmark in place for a data product, it’s going to be much easier to measure what the change is because you’ll know where you’re starting from.” - Brian (7:45)
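To make the benchmark idea concrete, here is a hypothetical sketch (field names and numbers are invented, not from the episode) of measuring change in adoption, utilization, and satisfaction against a benchmark captured before a redesign shipped:

```python
# Hypothetical sketch: measuring change in adoption, utilization, and
# satisfaction against a benchmark taken before the redesign shipped.
# All field names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Snapshot:
    weekly_active_users: int     # adoption
    queries_per_user: float      # utilization
    satisfaction_score: float    # e.g., mean survey rating out of 5

benchmark = Snapshot(weekly_active_users=120, queries_per_user=3.1,
                     satisfaction_score=3.2)
current = Snapshot(weekly_active_users=187, queries_per_user=4.6,
                   satisfaction_score=4.0)

def pct_change(before: float, after: float) -> float:
    """Relative change versus the benchmark, as a percentage."""
    return 100.0 * (after - before) / before

for field in ("weekly_active_users", "queries_per_user", "satisfaction_score"):
    before, after = getattr(benchmark, field), getattr(current, field)
    print(f"{field}: {before} -> {after} ({pct_change(before, after):+.1f}%)")
```

Without the benchmark row, the current numbers have no baseline to be compared against, which is Brian’s point.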

“When you’re deploying technology that’s supposed to improve people’s lives so that you can get some promise of business value downstream, this is not a generic exercise. You have to go out and do the work to understand the status quo and what the pain is right now from the user's perspective.” - Brian (8:46)

“That user perspective—perception even—is all that matters if you want to get to business value. The user experience is the perceived quality, usability, and utility of the data product.” - Brian (13:07)

“A data product leader’s job should be to own the problem and not just the delivery of data product features, applications, or technology outputs.” - Brian (26:13)

“What are we keeping score of? Different stakeholders are playing different games so it’s really important for the data product team not to impose their scoring system (definition of success) onto the customers, or the users, or the stakeholders.” - Brian (32:05)

“We always want to abstract once we have a really good understanding of what people do, as it’s easier to create more user-centered abstractions that will actually answer real data questions later on.” - Brian (33:34)

Links: https://designingforanalytics.com/community