Topic: Data Science

Tags: machine_learning, statistics, analytics

31 tagged activities

Activity Trend: peak of 68 activities per quarter, 2020-Q1 through 2026-Q1

Activities

Showing filtered results (filtering by: Brian O’Neill)

Today, I’m responding to a listener's question about what it takes to succeed as a data or AI product manager, especially if you’re coming from roles like design/BI/data visualization, data science/engineering, or traditional software product management. This listener correctly observed that most of my content “seems more targeted at senior leadership” — and asked if I could address this more IC-oriented topic on the show. I’ll break down why technical chops alone aren’t enough, and how user-centered thinking, business impact, and outcome-focused mindsets are key to real success — and where each of these prior roles brings strengths and/or weaknesses. I’ll also get into the evolving nature of PM roles in the age of AI, and what I think the super-powered AI product manager will look like.

Highlights/ Skip to:

Who can transition into an AI and data product management role? What does it take? (5:29)
Software product managers moving into AI product management (10:05)
Designers moving into data/AI product management (13:32)
Moving into the AI PM role from the engineering side (21:47)
Why the challenge of user adoption and trust is often the blocker to the business value (29:56)
Designing change management into AI/data products as a skill (31:26)
The challenge of value creation vs. delivery work — and how incentives are aligned for ICs (35:17)
Quantifying the financial value of data and AI product work (40:23)

Quotes from Today’s Episode

“Who can transition into this type of role, and what is this role? I’m combining these two things. AI product management often seems closely tied to software companies that are primarily leveraging AI, or trying to, and therefore, they tend to utilize this AI product management role. I’m seeing less of that in internal data teams, where you tend to see data product management more, which, for me, feels like an umbrella term that may include traditional analytics work, data platforms, and often AI and machine learning. I’m going to frame this more in the AI space, primarily because I think AI tends to capture the end-to-end product more frequently than data product management does.” — Brian (2:55)

“There are three disciplines I’m going to talk about moving into this role: coming into AI and data PM from design and UX, coming into it from data engineering (or just broadly technical spaces), and then coming into it from software product management. For software product managers moving into AI product management - as long as you’re not someone who has two years of experience and then 18 years of repeating that second year over and over again, and you’ve had a robust product management background across different types of products - you can show that the domain doesn’t necessarily stop you from producing value. I think you will have the easiest time moving into AI product management because you’ve shown that you can adapt across different industries.” - Brian (9:45)

“Let’s talk about designers next. I’m going to include data visualization, user experience research, user experience design, product design - all those broad design-category roles - moving into data and/or AI product management. First of all, I don’t hear about too many designers wanting to move into DPM roles, because oftentimes I don’t think there’s a lot of heavy UI and UX work in that space. Or at least the teams doing that work feel that’s somebody else’s job, because they’re not doing end-to-end product thinking the way I talk about it. Therefore, a lot of times they don’t see the application, the user experience, the human adoption, the change management - they’re just not looking at the world that way, even though I think they should be.” - Brian (13:32)

“Coming at this from the data and engineering side, this is the classic track for data product management. At least that is the way I tend to see it. I believe most companies prefer to develop this role in-house. My biggest concern is that you end up with job title changes, but not necessarily the benefits that are supposed to come with this. I do like learning by doing, but having a coach and someone senior who can coach your other PMs is important because there’s a lot of information that you won’t necessarily get in a class or a course. It’s going to come from experience doing the work.” - Brian (22:26)

“This value piece is the most important thing, and I want to focus on that. This is something I frequently discuss in my training seminar: how do we attach financial value to the work we’re doing? This is both art and science, but it’s a language that anyone in a product management role needs to be comfortable with. It can be very hard to figure out how your data product contributes financial value when it’s based on this waterfalling of ‘We own the model, and it’s deployed on a platform; the platform then powers these other things, which in turn power an application - so how do we determine the value of our tool?’ These things are challenging, and if it’s challenging for you, guess how hard it will be for stakeholders downstream if you haven’t had the practice and the skills required to understand how to estimate value, both before we build something as well as after?” - Brian (31:51)

“If you don’t want to spend your time getting to know how your business makes money or creates value, then [AI and data product management work] is not for you. It’s just not. I would stay doing what you’re doing already or find a different thing, because half of your time is going to be spent ‘managing up,’ and the other half managing the product stuff ‘down’ - sitting in this middle layer, trying to explain to the business what’s going to come out and what the impact is going to be, in language that they care about and understand. You can't be talking about models, model accuracy, data pipelines, and all that stuff. They’re not going to care about any of that.” - Brian (34:08)

After getting started in construction management, Anna Jacobson traded in the hard hat for the world of data products and operations at a VC firm. Anna, who has a structural engineering undergrad and a master's in data science, is also a Founding Member of the Data Product Leadership Community (DPLC). However, her work with data products is more “accidental” and is just part of her responsibility at Operator Collective. Nonetheless, Anna had a lot to share about building data products, dashboards, and insights for users—including resistant ones!

That resistance is precisely what I wanted to talk to her about in this episode: how does Anna get somebody to adopt a data product to which they may be apathetic, if not completely resistant?

At the end of the episode, Anna gives us a sneak peek at what she’s planning to talk about in our final 2024 live DPLC group discussion coming up on 12/18/2024.

We covered:

(1:17) Anna's background and how she got involved with data products
(3:32) The ways Anna applied her experiences working in construction management to her current work with data products at a VC firm
(5:32) Explaining one of the main data products she works on at Operator Collective
(9:55) How Anna defines success for her data products
(15:21) The process of designing data products for "non-believers"
(21:08) How to think about "super users" and their feedback on a data product
(27:11) How a company's cultural problems can be a blocker for product adoption
(38:21) A preview of what you can expect from Anna's talk and live group discussion in the DPLC
(40:24) Closing thoughts from Anna
(42:54) Where you can find more from Anna

Quotes from Today’s Episode

“People working with data products are always thinking about how to [gain user adoption of their product]... I can’t think of a single one where [all users] were immediately on board. There’s a lot to unpack in what it takes to get non-believers on board, and it’s something that none of us ever get any training on. You just learn through experience, and it’s not something that most people took a class on in college. All of the social science around what we do gets really passed over for all the technical stuff. It takes thinking through and understanding where different [users] are coming from, and [understanding] that my perspective alone is not enough to make it happen.” - Anna Jacobson (16:00)

“If you only bring together the super users and don’t try to get feedback from the average user, you are missing the perspective of the person who isn’t passionate about the product. A non-believer is someone who is just over capacity. They may be very hard-working, they may be very smart, but they just don’t have the bandwidth for new things. That’s something that has to be overcome when you’re putting a new product into place.” - Anna Jacobson (22:35)

“If a company can’t find budget to support [a data product], that’s a cultural decision. It’s not a financial decision. They find the money for the things that they care about. Solving the technology challenge is pretty easy, but you have to have a company that’s motivated to do that. If you want to implement something new, be it a data product or any change in an organization, identifying the cultural barriers and figuring out how to bring [people in an organization] on board is the crux of it. The money and the technology can be found.” - Anna Jacobson (27:58)

“I think people are actually very bad at explaining what they want, and asking people what they want is not helpful. If you ask people what they want to do, then I think you have a shot at being able to build a product that does [what they want]. The executive sponsors typically have a very different perspective on what the product [should be] than the users do. If all of your information is getting filtered through the executive sponsor, you’re probably not getting the full picture.” - Anna Jacobson (31:45)

“You want to define what the opportunity is, the problem, the solution, and you want to talk about costs and benefits. You want to align [the data product] with corporate strategy, and those things are fairly easy to map out. But as you get down to the user, what they want to know is, ‘How is this going to make my life easier? How is this going to make [my job] faster? How is it going to result in better outcomes?’ They may have an interest in how it aligns with corporate strategy, but that’s not what’s going to motivate them. It’s really just easier, faster, better.” - Anna Jacobson (35:00)

Links Referenced

LinkedIn: https://www.linkedin.com/in/anna-ching-jacobson/

DPLC (Data Product Leadership Community): https://designingforanalytics.com/community

R&D for materials-based products can be expensive, because improving a product’s materials takes a lot of experimentation that historically has been slow to execute. In traditional labs, you might change one variable, re-run your experiment, and see if the data shows improvements in your desired attributes (e.g., strength, shininess, texture/feel, power retention, temperature, stability, etc.). However, today there is a way to leverage machine learning and AI to reduce the number of experiments a materials scientist needs to run to gain the improvements they seek. Materials scientists spend a lot of time in the lab—away from a computer screen—so how do you design a desirable informatics SaaS that actually works and fits into the workflow of these end users?

As the Chief Product Officer at MaterialsZone, Ori Yudilevich came on Experiencing Data with me to talk about this challenge and how his PM, UX, and data science teams work together to produce a SaaS product that makes the benefits of materials informatics so valuable that materials scientists depend on their solution to make their R&D efforts time- and cost-efficient.

We covered:

(0:45) Explaining what Ori does at MaterialsZone and who their product serves
(2:28) How Ori and his team help make materials science testing more efficient through their SaaS product
(9:37) How they design a UX that can work across various scientific domains
(14:08) How “doing product” at MaterialsZone matured over the past five years
(17:01) Explaining the "Wizard of Oz" product development technique
(21:09) The importance of integrating UX designers into the "Wizard of Oz" process
(23:52) The challenges MaterialsZone faces when trying to get users to adopt their product
(32:42) Advice Ori would've given himself five years ago
(33:53) Where you can find more from MaterialsZone and Ori

Quotes from Today’s Episode

“The fascinating thing about materials science is that you have this variety of domains, but all of these things follow the same process. One of the problems [consumer goods companies] face is that they have to do lengthy testing of their products. This is something you can use machine learning to shorten. [Product research] is an iterative process that typically takes a long time. Using your data effectively and using machine learning to predict what can happen, what’s better to try out, and what will reduce costs can accelerate time to market.” - Ori Yudilevich (3:47)

“The difference [in time spent testing a product] can be up to 70% [i.e., you can run 70% fewer experiments using ML]. That [also] means 70% fewer resources you’re using. Under the ‘old system’ of trial and error, you were just trying out a lot of things. The human mind cannot process a large number of parameters at once, so [a materials scientist] would just start playing with [one parameter at a time]. You’ll have many experiments where you just try to optimize [for] one parameter, but then you might have 20, 30, or 100 more [to test]. Using machine learning, you can change a lot of parameters at once. The model can learn what has the most effect, what has a positive effect, and what has a negative effect. The differences can be really huge.” - Ori Yudilevich (5:50)

“Once you go deeper into a use case, you see that there are a lot of differences. The types of raw materials, the data structure, the quantity of data, etc. For example, with batteries, you have lots of data because you can test hundreds all at once. Whereas with something like ceramics, you don’t try so many [experiments]. You just can’t. It’s much slower. You can’t do so many [experiments] in parallel. You have much less data. Your models are different, and your data structure is different. But there’s also quite a lot of commonality because you’re storing the data. In the end, you have each domain, some raw materials, formulations, tests that you’re doing, and different statistical plots that are very common.” - Ori Yudilevich (11:24)

“We’ll typically do what we call the ‘Wizard of Oz’ technique. You simulate as if you have a feature, but you’re actually working for your client behind the scenes. You tell them [the simulated feature] is what you’re doing, but then measure [the client’s response] to understand if there’s any point in further developing that feature. Once you validate it, have enough data, and know where the feature is going, then you’ll start designing it and releasing it in incremental stages. We’ve made a lot of progress in how we discover opportunities and how we build something iteratively to make sure that we’re always going in the right direction.” - Ori Yudilevich (15:56)

“The main problem we’re encountering is changing the mindset of users. Our users are not people who sit in front of a computer. These are researchers who work in [a materials science] lab. The challenge [we have] is getting people to use the platform more. To see it’s worth [their time] to look at some insights and run the machine learning models. We’re always looking for ways to make that transition faster… and I think the key is making [the user experience] just fun, easy, and intuitive.” - Ori Yudilevich (24:17)

“Even if you make [the user experience] extremely smooth, if [users] don’t see what they get out of it, they’re still not going to [adopt your product] just for the sake of doing it. What we find is if this [product] can actually make them work faster or develop better products - that gets them interested. If you’re adopting these advanced tools, it makes you a better researcher and worker. People who [adopt those tools] grow faster. They become leaders in their team, and they slowly drag the others in.” - Ori Yudilevich (26:55)

“Some of [MaterialsZone’s] most valuable employees are the people who have been users. Our product manager is a materials scientist. I’m not a materials scientist, and it’s hard to imagine being that person in the lab. What I think is correct turns out to be completely wrong because I just don’t know what it’s like. Having [materials scientists] who’ve made the transition to software and data science? You can’t replace that.” - Ori Yudilevich (31:32)

Links Referenced

Website: https://www.materials.zone

LinkedIn: https://www.linkedin.com/in/oriyudilevich/

Email: [email protected]

Jeremy Forman joins us to open up about the hurdles and successes that come with building data products for pharmaceutical companies. Although he’s new to Pfizer, Jeremy has years of experience leading data teams at organizations like Seagen and the Bill and Melinda Gates Foundation. He currently serves in a more specialized role in Pfizer’s R&D department, building AI and analytical data products for scientists and researchers.

Jeremy gave us a good look at his team makeup, and in particular, how his data product analysts and UX designers work with pharmaceutical scientists and domain experts to build data-driven solutions. We talked a good deal about how and when UX design plays a role in Pfizer’s data products, including a GenAI-based application they recently launched internally.

Highlights/ Skip to:

(1:26) Jeremy's background in analytics and transition into working for Pfizer
(2:42) Building an effective AI analytics and data team for pharma R&D
(5:20) How Pfizer finds data product managers
(8:03) Jeremy's philosophy behind building data products and how he adapts it to Pfizer
(12:32) The moment Jeremy heard a Pfizer end-user use product management research language and why it mattered
(13:55) How Jeremy's technical team members work with UX designers
(18:00) The challenges that come with producing data products in the medical field
(23:02) How to justify spending the budget on UX design for data products
(24:59) The results they've seen from having UX design work on AI/GenAI products
(25:53) What Jeremy learned at the Bill & Melinda Gates Foundation with regards to UX and its impact on him now
(28:22) Managing the "rough dance" between data science and UX
(33:22) Breaking down Jeremy's GenAI application demo from CDIOQ
(36:02) What Jeremy would prioritize right now if his team got additional funding
(38:48) Advice Jeremy would have given himself 10 years ago
(40:46) Where you can find more from Jeremy

Quotes from Today’s Episode

“We have stream-aligned squads focused on specific areas such as regulatory, safety and quality, or oncology research. That’s so we can create functional career pathing and limit context switching and fragmentation. They can become experts in their particular area and build a culture within that small team. It’s difficult to build good [pharma] data products. You need to understand the domain you’re supporting. You can’t take somebody with a financial background and put them in an Omics situation. It just doesn’t work. And we have a lot of the scars and the failures to prove that.” - Jeremy Forman (4:12)

“You have to have the product mindset to deliver the value and the promise of AI data analytics. I think small, independent, autonomous, empowered squads with a product leader is the only way that you can iterate fast enough with [pharma data products].” - Jeremy Forman (8:46)

“The biggest challenge is when we say data products. It means a lot of different things to a lot of different people, and it’s difficult to articulate what a data product is. Is it a view in a database? Is it a table? Is it a query? We’re all talking about it in different terms, and nobody’s actually delivering data products.” - Jeremy Forman (10:53)

“I think when we’re talking about [data products] there’s some type of data asset that has value to an end-user, versus a report or an algorithm. I think it’s even hard for UX people to really understand how to think about an actual data product. I think it’s hard for people to conceptualize: how do we do design around that? It’s one of the areas where I think I’ve seen the biggest challenges, and some of the areas where we’ve learned the most. If you build a data product and it’s not accurate, and people are getting results that are incomplete… people will abandon it quickly.” - Jeremy Forman (15:56)

“I think that UX design and AI development or data science work is a magical partnership, but they often don’t know how to work with each other. That’s been a challenge, but I think investing in that has been critical to us. Even though we’ve had struggles… I think we’ve also done a good job of understanding the [user] experience and impact that we want to have. The prototype we shared [at CDIOQ] is driven by user experience and trying to get information in the hands of the research organization to understand some portfolio types of decisions that have been made in the past. And it’s been really successful.” - Jeremy Forman (24:59)

“If we’re having technology conversations with our business users and we’re focused only on the technology output, we’re just building reports. [After we adopted a human-centered design approach], it was talking [with end-users] about outcomes, value, and adoption. Having that resource transformed the conversation, and I felt like our quality went up. I felt like our output went down, but our impact went up. [End-users] loved the tools, and that wasn’t what was happening before… I credit a lot of that to the human-centered design team.” - Jeremy Forman (26:39)

“When you’re thinking about automation through machine learning or building algorithms for [clinical trial analysis], it becomes a harder dance between data scientists and human-centered design. I think there’s a lack of appreciation and understanding of what UX can do. Human-centered design is an empathy-driven understanding of users’ experience, their work, their workflow, and the challenges they have. I don’t think there’s an appreciation of that skill set.” - Jeremy Forman (29:20)

“Are people excited about it? Is there value? Are we hearing positive things? Do they want us to continue? That’s really how I’ve been judging success. Is it saving people time, and do they want to continue to use it? They want to continue to invest in it. They want to take their time as end-users to help with testing and refining it. Those are the indicators. We’re not generating revenue, so what does the adoption look like? Are people excited about it? Are they telling friends? Do they want more? When I hear that the ten people [who were initial users] are happy and think it should be rolled out to the broader audience, I think that’s a good sign.” - Jeremy Forman (35:19)

Links Referenced

LinkedIn: https://www.linkedin.com/in/jeremy-forman-6b982710/

Sometimes DIY UI/UX design only gets you so far—and you know it’s time for outside help. One thing prospects from SaaS analytics and data-related product companies often ask me is what things are like in the other guy/gal’s backyard. They want to compare their situation to others like them. So, today, I want to share some of the common “themes” I see that usually are the root causes of what leads to a phone call with me.

By the time I am on the phone with most prospects who already have a product in market, they’re usually having significant problems with one or more of the following: sales friction (product value is opaque); low adoption/renewal worries (user apathy); customer complaints about UI/UX being hard to use; velocity (the team is doing tons of work, but the leader isn’t seeing progress)—and the like.

I’m hoping today’s episode will explain some of the root causes that may lead to these issues — so you can avoid them in your data product building work!  

Highlights/ Skip to:

(10:47) Design != "front-end development" or analyst work
(12:34) Liking doing UI/UX/viz design work vs. knowing
(15:04) When a leader sees lots of work being done, but the UX/design isn’t progressing
(17:31) Your product’s UX needs to convey some magic IP/special sauce…but it isn’t
(20:25) Understanding the tradeoffs of using libraries, templates, and other solutions’ designs as a foundation for your own
(25:28) The sunk cost bias associated with POCs and “we’ll iterate on it”
(28:31) Relying on UI/UX "customization" to please all customers
(31:26) The hidden costs of abstraction of system objects, UI components, etc. to make life easier for engineering and technical teams
(32:32) Believing you’ll know the design is good “when you see it” (and what you don’t know you don’t know)
(36:43) Believing that because the data science/AI/ML modeling under your solution was accurate, difficult, and/or expensive, it’s automatically worth paying for

Quotes from Today’s Episode

The challenge is often not knowing what you don’t know about a project. We often end up focusing on building the tech [and rushing it out] so we can get some feedback on it… but product is not about getting it out there so we can get feedback. The goal of doing product well is to produce value, benefits, or outcomes. Learning is important, but that’s not the objective. The objective is benefits creation. (5:47)

When we start doing design on a project that’s not design-actionable, we build debt and sometimes can hurt the process of design. If you start designing your product with an entire green space, no direction, and no constraints, the chance of you shipping a good v1 is small. Your product strategy needs to be design-actionable for the team to properly execute against it. (19:19)

While you don’t always need to start at zero with your UI/UX design, what are the parts of your product or application where it does make sense to borrow, “steal,” and cheat from others’ designs? And when does it not? It takes skill to know when you should be breaking the rules or conventions. Shortcuts often don’t produce outsized results—unless you know what a good shortcut looks like. (22:28)

A proof of concept is not a minimum valuable product. There’s a difference between proving the tech can work and making it into a product so valuable that someone would exchange money for it because it’s so useful to them. These are two different things. (26:40)

Trying to do a little bit for everybody [through excessive customization] can often result in nobody understanding the value or utility of your solution. Customization can hide the fact that the team has decided not to make difficult choices. If you’re coming into a crowded space… it’s likely not going to be a compelling reason to [convince customers to switch to your solution]. Customization can be a tax, not a benefit. (29:26)

Watch for the sunk cost bias [in product development]. [Buyers] don’t care how the sausage was made. Many don’t understand how the AI stuff works, and they probably don’t need to. They want the benefits downstream from technology, wrapped up in something so invaluable they can’t live without it. Watch out for technically right, effectively wrong. (39:27)

In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s ok! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically, today I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SaaS analytics or AI product(s), today I’m going to tell you when you should NOT get help!

Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin. 

Finally, analytical thinkers, especially data product managers, it seems, often want to quantify all forms of value they produce in hard monetary units—and so in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-) Ready?

Highlights/ Skip to:

(1:52) Going for short, easy wins
(4:29) When you think you have good design sense/taste
(7:09) The impending changes coming with GenAI
(11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible
(15:36) Agile and process FTW?
(18:59) UX design for and with platform products
(21:14) The risk of involving designers who don’t understand data, analytics, AI, or your complex domain considerations
(30:09) Designing after the ML models have been trained—and it’s too late to go back
(34:59) Not tapping professional design help when your user base is small and you have routine access and exposure to them
(40:01) Explaining the value of UX design investments to your stakeholders when you don’t 100% control the budget or decisions

Quotes from Today’s Episode

“It is true that most impactful design often creates more product and engineering work, because humans are messy. While there sometimes are these magic, small GUI-type changes that have big impact downstream, the big-picture value of UX can be lost if you’re simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you’re playing inside your team: are you working to produce UX and business outcomes or shipping outputs on time?” (3:18)

“If you’re building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We’ve all seen the challenges of this with LLMs, [when] you’re unable to get one to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, ‘Is the answer not in there,’ or ‘Am I not prompting it correctly?’ If you think that most of this is just a technical data science problem, then don’t bother to invest in UX design work…” (9:52)

“Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person’s life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power—especially in an enterprise context. Working with a designer who solely insists on simplifying everything at all costs, regardless of your stated business outcome goals, is a red flag—and a reason not to invest in UX design—at least with them!“ (12:28)

“I think what an analytics product manager [or] an AI product manager needs to accept is that there are other ways to measure the value of UX design’s contribution to your product and to your organization. Let’s say that you have a mission-critical internal data product, it’s used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel those things? What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there’s a lot of value in that.” (43:33)

“Let’s think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We’re spending money on something that most likely will have zero economic value this year, because we’re actually trying not to have to file claims. Yet this industry does very well, because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is greater than whatever the insurance plan costs. If your solution can build feelings of confidence and security, what is that worth? Does “hard to measure precisely” necessarily mean “low value?” (47:26)

Guess what? Data science and AI initiatives are still failing here in 2024—despite widespread awareness. Is that news? Candidly, you’ll hear me share with Evan Shellshear—author of the new book Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics—how much I actually didn’t want to talk about this story originally on my podcast—because it’s not news! However, what is news is what the data says behind Evan’s findings—and guess what? It’s not the technology.

In our chat, Evan shares why he wanted to leverage a human approach to understand the root cause of multiple organizations’ failures and how this approach highlighted the disconnect between data scientists and decision-makers. He explains the human factors at play, such as poor problem surfacing and organizational culture challenges—and how these human-centered design skills are rarely taught or offered to data scientists. The conversation delves into why these failures are more prevalent in data science compared to other fields, attributing it to the complexity and scale of data-related problems. We also discuss how analytically mature companies can mitigate these issues through strategic approaches and stakeholder buy-in. Join us as we dig into these critical insights for improving data science project outcomes.

Highlights/ Skip to:

(4:45) Why are data science projects still failing?
(9:17) Why is the disconnect between data scientists and decision-makers so pronounced relative to, say, engineering?
(13:08) Why are data scientists not getting enough training for real-world problems?
(16:18) What the data says about failure rates for mature data teams vs. immature data teams
(19:39) How to change people’s opinions so they value data more
(25:16) What happens at the stage where the beneficiaries of data don’t actually see the benefits?
(31:09) What are the skills needed to prevent a repeating pattern of creating data products that customers ignore?
(37:10) Where do more mature organizations find non-technical help to complement their data science and AI teams?
(41:44) Are executives and directors aware of the skills needed to level up their data science and AI teams?

Quotes from Today’s Episode

“People know this stuff. It’s not news anymore. And so, the reason why we needed this was really to dig in. And exactly like you did, keeping that list of articles is brilliant, and knowing what’s causing the failures and what’s leading to these issues still arising is really important. But at some point, we need to approach this in a scientific fashion, and we need to unpack this, and we need to really delve into the details beyond just the headlines and the articles themselves. And start collating and analyzing this to properly figure out what’s going wrong, and what we need to do about it to fix it once and for all, so you can stop your endless collection, and the AI Incident Database, which now has over 3,500 entries, can hang its hat and say, ‘I’ve done my job. It’s time to move on. We’re not failing as we used to.’” - Evan Shellshear (3:01)

“What we did is we took a number of different studies, and we split companies into what we saw as being analytically mature—and this is a common, well-known thing; many maturity frameworks exist across data, across AI, across all different areas—and what we call analytically immature, so those companies that probably aren’t there yet. And what we wanted to do is draw a distinction: okay, we say 80% of projects fail, or whatever the exact number is, but for whom? And for what stage and for what capability? And so, what we then went and did is we were able to take our data and look at which failures are common for analytically immature organizations, and which failures are common for analytically mature organizations. Then we were able to understand, okay, in the market, how many organizations do we think are analytically mature versus analytically immature, and then we were able to take that 80% failure rate and establish it. For analytically mature companies, the failure rate is probably more like 40%. For analytically immature companies, it’s over 90%, right? And so, you’re exactly right: organizations can do something about it, and they can build capabilities in to mitigate this. So definitely, it can be reduced. Definitely, it can be brought down. You might say 40% is still too high, but it proves that by bringing in these procedures, you’re completely correct that it can be reduced.” - Evan Shellshear (14:28)

“What happens with the data science person, however, is typically they’re seen as a cost center—typically, not always; nowadays, that dialog is changing—and what they need to do is find partners across the other parts of the business. So, they’re going to go into the supply chain team, they’ll go into the merchandising team, they’ll go into the banking team, they’ll go into the other teams, and they’re going to find their supporters and winners there, and they’re going to probably build out from there. So, the first step, if you’re a big enough organization and you don’t have that strategy at the executive level, would likely be to find your friends—and there will be some in the organization who support this data strategy—and get some wins for them.” - Evan Shellshear (24:38)

“It’s not like there’s this box you put one in the other in. Because, like success and failure, there’s a continuum. And companies move along that continuum, just like you said: this year, we failed on the lack of executive buy-in, so let’s fix that problem. Next year, we fail on not having the right resources, so we fix that problem. And you move along that continuum, and you build it up. And at some point, as you’re going on, that failure rate is dropping, and you’re getting towards that end of the scale where you’ve got those really capable companies that live, eat, and breathe data science and analytics, and have to have these capabilities to be able to survive; otherwise, simple company evolution would have wiped them out, and they wouldn’t exist if they didn’t have that capability, if that’s their core thing.” - Evan Shellshear (18:56)

“Nothing else could be correct, right? This subjective intuition and all this stuff, it’s never going to be as good as the data. And so, what happens is, you often as a data scientist—and I’ve been subjected to this myself—come in with this arrogance, this kind of data-driven arrogance, right? And it’s not a good thing. It puts up barriers, it creates issues, it separates you from the people.” - Evan Shellshear (27:38)

“Knowing that you’re going to have to go on that journey from day one, you can’t jump from level zero to level five. That’s what all these data maturity models are about, right? You can’t jump from level zero data maturity to level five overnight. You really need to take those steps and build it up.” - Evan Shellshear (45:21)

“What we’re talking about, it’s not new. It’s just old wine in a new skin, and we’re just presenting it for the data science age.” - Evan Shellshear (48:15)

Links

Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics, without the Hype: https://www.routledge.com/Why-Data-Science-Projects-Fail-the-Harsh-Realities-of-Implementing-AI-and-Analytics-without-the-Hype/Gray-Shellshear/p/book/9781032660301

LinkedIn: https://www.linkedin.com/in/eshellshear/

Get the Book: Get 20% off at Routledge.com with code dspf20, or get it at Amazon

Why do we still teach people to calculate? (People I Mostly Admire podcast)

Ready for more ideas about UX for AI and LLM applications in enterprise environments? In part 2 of my topic on UX considerations for LLMs, I explore how an LLM might be used for a fictitious use case at an insurance company—specifically, to help internal tools teams get rapid access to primary qualitative user research. (Yes, it’s a little “meta”, and I’m also trying to nudge you with this hypothetical example—no secret!) ;-) My goal with these episodes is to share questions you might want to ask yourself such that any use of an LLM is actually contributing to a positive UX outcome. Join me as I cover the implications for design, the importance of foundational data quality, the balance between creative inspiration and factual accuracy, and the never-ending discussion of how we might handle hallucinations and errors posing as “facts”—all with a UX angle. At the end, I also share a personal story where I used an LLM to help me do some shopping for my favorite product: TRIP INSURANCE! (NOT!)

Highlights/ Skip to:

(1:05) I introduce a hypothetical internal LLM tool and what the goal of the tool is for the team who would use it
(5:31) Improving access to primary research findings for better UX
(10:19) What “quality data” means in a UX context
(12:18) When LLM accuracy maybe doesn’t matter as much
(14:03) How AI and LLMs are opening the door for fresh visioning work
(15:38) Brian’s overall take on LLMs inside enterprise software as of right now
(18:56) Final thoughts on UX design for LLMs, particularly in the enterprise
(20:25) My inspiration for these 2 episodes—and how I had to use ChatGPT to help me complete a purchase on a website that could have integrated this capability directly

Quotes from Today’s Episode

“If we accept that the goal of most product and user experience research is to accelerate the production of quality services, products, and experiences, the question is whether or not using an LLM for these types of questions is moving the needle in that direction at all. And secondly, are the potential downsides, like hallucinations and occasional fabricated findings, worth it? So, this is a design-for-AI problem.” - Brian T. O’Neill (8:09)

“What’s in our data? Can the right people change it when the LLM is wrong? The data product managers and AI leaders reading this or listening know that the not-so-secret path to the best AI is in the foundational data that the models are trained on. But what does the word quality mean from a product standpoint and a risk-reduction one, as seen from an end-user’s perspective—somebody who’s trying to get work done? This is a different type of quality measurement.” - Brian T. O’Neill (10:40)

“When we think about fact retrieval use cases in particular, how easily can product teams—internal or otherwise—and end-users understand the confidence of responses? When responses are wrong, how easily, if at all, can users and product teams update the model’s responses? Errors in large language models may be a significant design consideration when we design probabilistic solutions, and we no longer control what exactly our products and software are going to show to users. If bad UX can include leading people down the wrong path unknowingly, then AI is kind of like the team on the other side of the tug of war that we’re playing.” - Brian T. O’Neill (11:22)

“As somebody who writes a lot for my consulting business, and composes music in another, one of the hardest parts for creators can be the zero-to-one problem of getting started—the blank page—and this is a place where I think LLMs have great potential. But it also means we need to do the proper research to understand our audience, and when or where they’re doing truly generative or creative work—such that we can take a generative UX to the next level that goes beyond delivering banal and obviously derivative content.” - Brian T. O’Neill (13:31)

“One thing I actually like about the hype, investment, and excitement around GenAI and LLMs in the enterprise is that there is an opportunity for organizations here to do some fresh visioning work. And this is a place where designers and user experience professionals can help data teams as we bring design into the AI space.” - Brian T. O’Neill (14:04)

“If there was ever a time to do some new visioning work, I think now is one of those times. However, we need highly skilled design leaders to help facilitate this in order for this to be effective. Part of that skill is knowing who to include in exercises like this, and my perspective, one of those people, for sure, should be somebody who understands the data science side as well, not just the engineering perspective. And as I posited in my seminar that I teach, the AI and analytical data product teams probably need a fourth member. It’s a quartet and not a trio. And that quartet includes a data expert, as well as that engineering lead.” - Brian T. O’Neill (14:38)

Links

Perplexity.ai: https://perplexity.ai

Ideaflow: https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586

My article that inspired this episode

Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.

I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.

In our chat, we covered:

Ben's career studying human-computer interaction and computer science. (0:30)
'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI systems and why [explainable] XAI matters. (30:34)
Ben's upcoming book on human-centered AI. (35:55)

Resources and Links:

People-Centered Internet: https://peoplecentered.net/

Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X

Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764

Partnership on AI: https://www.partnershiponai.org/

AI Incident Database: https://www.partnershiponai.org/aiincidentdatabase/

University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/

ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html

Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/

Ben on Twitter: https://twitter.com/benbendc

Quotes from Today’s Episode

The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)

The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)

Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)

There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)

Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)

Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, “What happened?” Why was I denied parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, letting the user walk through each step—like Amazon’s seven-step checkout process—where you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well in really complicated situations, walking you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)

Wait, I’m talking to a head of data management at a tech company? Why!? Well, today I'm joined by Malcolm Hawker to get his perspective around data products and what he’s seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? Malcolm is a former head of product, and for several years, I’ve enjoyed his musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We had a chance to meet at CDOIQ in 2023 as well, and he went on my “need to do an episode” list!

According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they're seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn from approaching problem-solving with a product orientation.

Highlights/ Skip to:

Malcolm’s definition of a data product (2:10)
Understanding your customers’ needs is the first step toward quantifying the benefits of your data product (6:34)
How product makers can gain access to users to build more successful products (11:36)
Answering the UX question to get past the adoption stage and provide business value (16:03)
Data experts must develop business expertise if they want to be seen as equals by potential customers (20:07)
What people really mean by “data culture" (23:02)
Malcolm’s data product journey and his changing perspective (32:05)
Using empathy to provide a better UX in design and data (39:24)
Avoiding the death of data science by becoming more product-driven (46:23)
Where the majority of data professionals currently land on their view of product management for data products (48:15)

Quotes from Today’s Episode

“My definition of a data product is something that is built by a data and analytics team that solves a specific customer problem that the customer would otherwise be willing to pay for. That’s it.” - Malcolm Hawker (3:42)

“You need to observe how your customer uses data to make better decisions, optimize a business process, or to mitigate business risk. You need to know how your customers operate at a very, very intimate level, arguably, as well as they know how their business processes operate.” - Malcolm Hawker (7:36)

“So, be a problem solver. Be collaborative. Be somebody who is eager to help make your customers’ lives easier. You hear "no" when people think that you’re a burden. You start to hear more “yeses” when people think that you are actually invested in helping make their lives easier.” - Malcolm Hawker (12:42)

“We [data professionals] put data on a pedestal. We develop this mindset that the data matters more—as much or maybe even more than the business processes, and that is not true. We would not exist if it were not for the business. Hard stop.” - Malcolm Hawker (17:07)

“I hate to say it, I think a lot of this data stuff should kind of feel invisible in that way, too. It’s like this invisible ally that you’re not thinking about the dashboard; you just access the information as part of your natural workflow when you need insights on making a decision, or a status check that you’re on track with whatever your goal was. You’re not really going out of mode.” - Brian O’Neill (24:59)

“But you know, data people are basically librarians. We want to put things into classifications that are logical and work forwards and backwards, right? And in the product world, sometimes they just don’t, where you can have something be a product and be a material to a subsequent product.” - Malcolm Hawker (37:57)

“So, the broader point here is just more of a mindset shift. And you know, maybe these things aren’t necessarily a bad thing, but how do we become a little more product- and customer-driven so that we avoid situations where everybody thinks what we’re doing is a time waster?” - Malcolm Hawker (48:00)

Links Profisee: https://profisee.com/  LinkedIn: https://www.linkedin.com/in/malhawker/  CDO Matters: https://profisee.com/cdo-matters-live-with-malcolm-hawker/

Today I’m joined by Marnix van de Stolpe, Product Owner at Coolblue in the area of data science. Throughout our conversation, Marnix shares the story of how he joined a data science team whose solution was focused on delivering a data science metric rather than solving a clear customer problem. We discuss how Marnix came to the difficult decision to throw out 18 months of data science work, what it was like to switch to a human-centered, product approach, and the challenges that came with it. Marnix shares the impact this decision had on his team and the stakeholders involved, as well as the impact on his personal career and the advice he would give to others who find themselves in the same position. Marnix is also a Founding Member of the Data Product Leadership Community and will be going much more into the details and his experience live on Zoom on November 16 @ 2pm ET for members.

Highlights/ Skip to:

I introduce Marnix, Product Owner at Coolblue and one of the original members of the Data Product Leadership Community (00:35) Marnix describes what Coolblue does and his role there (01:20) Why and how Marnix decided to throw away 18 months of machine learning work (02:51) How Marnix determined that the KPI (metric) being created wasn’t enough to deliver a valuable product (07:56) Marnix describes the conversation with his data science team on mapping the solution back to the desired outcome (11:57) What the culture is like at Coolblue now when developing data products (17:17) Marnix’s advice for data product managers who are coming into an environment where existing work is not tied to a desired outcome (18:43) Marnix and I discuss why data literacy is not the solution to making more impactful data products (21:00) The impact that Marnix’s human-centered approach to data product development has had on the stakeholders at Coolblue (24:54) Marnix shares the ultimate outcome of the product his team was developing to measure product returns (31:05) How you can get in touch with Marnix (33:45)

Links Coolblue: https://www.coolblue.nl LinkedIn: https://www.linkedin.com/in/marnixvdstolpe/

Today I’m joined by Anthony Deighton, General Manager of Data Products at Tamr. Throughout our conversation, Anthony unpacks his definition of a data product and we discuss whether or not he feels that Tamr itself is actually a data product. Anthony shares his views on why it’s so critical to focus on solving for customer needs and not simply the newest and shiniest technology. We also discuss the challenges that come with building a product that’s designed to facilitate the creation of better internal data products, as well as where we are in this new wave of data product management, and the evolution of the role.

Highlights/ Skip to:

I introduce Anthony, General Manager of Data Products at Tamr, and the topics we’ll be discussing today (00:37) Anthony shares his observations on how BI analytics are an inch deep and a mile wide due to the data that’s being input (02:31) Tamr’s focus on data products and how that reflects in Anthony’s recent job change from Chief Product Officer to General Manager of Data Products (04:35) Anthony’s definition of a data product (07:42) Anthony and I explore whether he feels that decision support is necessary for a data product (13:48) Whether or not Anthony feels that Tamr qualifies as a data product (17:08) Anthony speaks to the importance of focusing on outcomes and benefits as opposed to endlessly knitting together features and products (19:42) The challenges Anthony sees with metrics like Propensity to Churn (21:56) How Anthony thinks about design in a product like Tamr (30:43) Anthony shares how data science at Tamr is a tool in his toolkit and not viewed as a “fourth” leg of the product triad/stool (36:01) Anthony’s views on where we are in the evolution of the DPM role (41:25) What Anthony would do differently if he could start over at Tamr knowing what he knows now (43:43)

Links Tamr: https://www.tamr.com/ Innovating: https://www.amazon.com/Innovating-short-guide-making-things/dp/B0C8R79PVB The Mom Test: https://www.amazon.com/The-Mom-Test-Rob-Fitzpatrick-audiobook/dp/B07RJZKZ7F LinkedIn: https://www.linkedin.com/in/anthonydeighton/

Today I’m joined by Vera Liao, Principal Researcher at Microsoft. Vera is a part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group, and her research centers around the ethics, explainability, and interpretability of AI products. She is particularly focused on how designers design for explainability. Throughout our conversation, we focus on the importance of taking a human-centered approach to rendering model explainability within a UI, and why incorporating users during the design process informs the data science work and leads to better outcomes. Vera also shares some research on why example-based explanations tend to outperform [model] feature-based explanations, and why traditional XAI methods like LIME and SHAP aren’t the solution to every explainability problem a user may have.
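As a rough sketch of the contrast Vera draws, the snippet below shows a feature-based explanation next to an example-based one for the same prediction; the toy dataset, the linear model, and the nearest-neighbor retrieval are my own illustrative assumptions, not her method.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=200, n_features=4, random_state=1)
model = LogisticRegression().fit(X, y)
query = X[:1]  # one case the user wants explained

# Feature-based: which inputs pushed this prediction up or down?
contributions = model.coef_[0] * query[0]
print("feature contributions:", np.round(contributions, 2))

# Example-based: "these similar past cases had these outcomes"—often
# easier for end users to interpret than per-feature weights.
nn = NearestNeighbors(n_neighbors=3).fit(X)
_, idx = nn.kneighbors(query)
print("similar cases:", idx[0], "their labels:", y[idx[0]])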

Highlights/ Skip to:

I introduce Vera, who is Principal Researcher at Microsoft and whose research mainly focuses on the ethics, explainability, and interpretability of AI (00:35) Vera expands on her view that explainability should be at the core of ML applications (02:36) An example of the non-human approach to explainability that Vera is advocating against (05:35) Vera shares where practitioners can start the process of responsible AI (09:32) Why Vera advocates for doing qualitative research in tandem with model work in order to improve outcomes (13:51) I summarize the slides I saw in Vera’s deck on Human-Centered XAI and Vera expands on my understanding (16:06) Vera’s success criteria for explainability (19:45) The various applications of AI explainability that Vera has seen evolve over the years (21:52) Why Vera is a proponent of example-based explanations over model feature ones (26:15) Strategies Vera recommends for getting feedback from users to determine what the right explainability experience might be (32:07) The research trends Vera would most like to see technical practitioners apply to their work (36:47) Summary of the four-step process Vera outlines for Question-Driven XAI design (39:14)

Links “Human-Centered XAI: From Algorithms to User Experiences” Presentation “Human-Centered XAI: From Algorithms to User Experiences” Slide Deck “Human-Centered AI Transparency in the Age of Large Language Models” Microsoft Research (MSR) Vera's Personal Website

Today I’m continuing my conversation with Nadiem von Heydebrand, CEO of Mindfuel. In the conclusion of this special 2-part episode, Nadiem and I discuss the role of a Data Product Manager in depth. Nadiem reveals which fields data product managers are currently coming from, and how a new data product manager with a non-technical background can set themselves up for success in this new role. He also walks through his portfolio approach to data product management, and how to prioritize use cases when taking on a data product management role. Toward the end, Nadiem also shares personal examples of how he’s employed these strategies, why he feels it’s so important for engineers to be able to see and understand the impact of their work, and best practices around developing a data product team. 

Highlights / Skip to:

Brian introduces Nadiem and gives context for why the conversation with Nadiem led to a two-part episode (00:35) Nadiem summarizes his thoughts on data product management and adds context on which fields he sees data product managers currently coming from (01:46) Nadiem’s take on whether job listings for data product manager roles still have too many technical requirements (04:27) Why some non-technical people fail when they transition to a data product manager role and the ways Nadiem feels they can bolster their chances of success (07:09) Brian and Nadiem talk about their views on functional data product team models and the process for developing a data product as a team (10:11) When Nadiem feels it makes sense to hire a data product manager and adopt a portfolio view of your data products (16:22) Nadiem’s view on how to prioritize projects as a new data product manager (19:48) Nadiem shares a story of when he took on an interim role as a head of data and how he employed the portfolio strategies he recommends (24:54) How Nadiem evaluates perceived usability of a data product when picking use cases (27:28) Nadiem explains why understanding go-to-market strategy is so critical as a data product manager (30:00) Brian and Nadiem discuss the importance of today’s engineering teams understanding the value and impact of their work (32:09) How Nadiem and his team came up with the idea to develop a SaaS product for data product managers (34:40)

Quotes from Today’s Episode

“So, data product management [...] is a combination of different capabilities [...] [including] product management, design, data science, and machine learning. We covered this in viability, desirability, feasibility, and datability. So, these are four dimensions [that] you combine [...] together to become a data product manager.” — Nadiem von Heydebrand (02:34)

“There is no education for data product management today, there’s no university degree. ... So, there’s nobody out there—from my perspective—who really has all the four dimensions from day one. It’s more like an evolution: you’re coming from one of the [parallel business] domains or from one of the [parallel business] fields and then you extend your skill set over time.” — Nadiem von Heydebrand (03:04)

“If a product manager has very good communication skills and is able to break down the needs in a proper way or in a good understandable way to its tech lead, or its engineering lead or data science lead, then I think it works out super well. If this bridge is missing, then it becomes a little bit tricky because then the distance between the product manager and the development team is too far.” – Nadiem von Heydebrand (09:10)

“I think every data leader out there has an Excel spreadsheet or a list of prioritized use cases or the most relevant use cases for the business strategy… You can think about this list as a portfolio. You know, some of these use cases are super valuable; some of these use cases maybe will not work out, and you have to identify those which are bringing real return on investment when you put effort in there.” – Nadiem von Heydebrand (19:01)

“I’m not a magician for data product management. I just focused on a very strategic view of my portfolio and tried to identify those cases and those data products where I believe I can easily develop them, I have a high degree of adoption with my lines of business, and I can truly measure the added revenue and the impact.” – Nadiem von Heydebrand (26:31)

“As a true data product manager, from my point of view, you are someone who is empathetic to the lines of business, to understand what their underlying needs and problems are. At the same time, you are a business person. You try to optimize the portfolio for your own needs, because you have business goals coming from your leadership team, from your head of data, or even from the person above, the CTO, CIO, even CEO. So, you want to make sure that your value contribution is always transparent, and visible, measurable, tangible.” – Nadiem von Heydebrand (29:20)

“If we look into classical product management, I mean, the product manager has to understand how to market and how to go to the market. And it’s exactly the same situation with data product managers within your organization. You are as successful as your product performs in the market. This is how you measure yourself as a data product manager. This is how you define success for yourself.” – Nadiem von Heydebrand (30:58)

Links Mindfuel: https://mindfuel.ai/ LinkedIn: https://www.linkedin.com/in/nadiemvh/ Delight Software - the SaaS tool for data product managers to manage their portfolio of data products: https://delight.mindfuel.ai

The conversation with my next guest was going so deep and so well…it became a two-part episode! Today I’m chatting with Nadiem von Heydebrand, CEO of Mindfuel. Nadiem’s career journey led him from data science to data product management, and in this first part, we focus on the skills of data product management (DPM), including design. In part 2, we jump more into Nadiem’s take on the role of the DPM. Nadiem gives actionable insights into the realities of data product management, from the challenges of actually being able to talk to your end users, to focusing on the problems and unarticulated needs of your users rather than solutions. Nadiem and I also discuss how data product managers oversee a portfolio of initiatives, and why it’s important to view that portfolio as a series of investments. Nadiem also emphasizes the value of having designers on a data team, and why he hopes we see more designers in the industry.

Highlights/ Skip to:

Brian introduces Nadiem and his background going from data science to data product management (00:36) Nadiem gives not only his definition of a data product, but also his related definitions of ‘data as product,’ ‘data as information,’ and ‘data as a model’ products (02:19) Nadiem outlines the skill set and activities he finds most valuable in a data product manager (05:15) How a data organization typically functions and the challenges a data team faces to prove their value (11:20) Brian and Nadiem discuss the challenges and realities of being able to do discovery with the end users of data products (17:42) Nadiem outlines how a portfolio of data initiatives has a certain investment attached to it and why it’s important to generate a good result from those investments (21:30) Why Nadiem wants to see more designers in the data product space and the problems designers solve for data teams (25:37) Nadiem shares a story about a time when he wished he had a designer to convert the expressed needs of the  business into the true need of the customer (30:10) The value of solving for the unarticulated needs of your product users, and Nadiem shares how focusing on problems rather than solutions helped him (32:32) Nadiem shares how you can connect with him and find out more about his company, Mindfuel (36:07)

Quotes from Today’s Episode

“The product mindset already says it quite well. When you look into classical product management, you have something called the viability, the desirability, the feasibility—so these are three very classic dimensions of product management—and the fourth dimension, which we at Mindfuel defined for ourselves and for our applications, is the datability.” — Nadiem von Heydebrand (06:51)

“We can only prove our [data team’s] value if we unlock business opportunities in their [clients’] lines of businesses. So, our value contribution is indirect. And measuring indirect value contribution is very difficult in organizations.” — Nadiem von Heydebrand (11:57)

“Whenever we think about data and analytics, we put a lot of investment and efforts in the delivery piece. I saw a study once where it said 3% of investments go into discovery and 90% of investments go into delivery and the rest is operations and a little bit overhead and all around. So, we have to balance and we have to do proper discovery to understand what problem do we want to solve.” — Nadiem von Heydebrand (13:59)

“The best initiatives I delivered in my career, and also now within Mindfuel, are the ones where we try to build an end responsibility from the lines of businesses, among the product managers, to PO, the product owner, and then the delivery team.” – Nadiem von Heydebrand (17:00)

“As a consultant, I typically think in solutions. And when we founded Mindfuel, my co-founder forced me to avoid talking about the solution for an entire ten months. So, in whatever meeting we were sitting, I was not allowed to talk about the solution, but only about the problem space.”  – Nadiem von Heydebrand (34:12)

“In scaled organizations, data product managers typically run a portfolio of data products, and each single product can be seen a little bit from an investment point of view: this is where we’re putting our money in. So, that’s the reason why we also have to prioritize the right use cases or product initiatives, because typically we have limited resources, whether it is investment money, people, resources, or our time.” – Nadiem von Heydebrand (24:02)

“Unfortunately, we don’t see enough designers in data organizations yet. So, I would love to have more design people around me in the data organizations, not only from a delivery perspective, having people building amazing dashboards, but also, like, truly helping me in this kind of discovery space.” – Nadiem von Heydebrand (26:28)

Links Mindfuel: https://mindfuel.ai/ Personal LinkedIn: https://www.linkedin.com/in/nadiemvh/ Mindfuel LinkedIn: https://www.linkedin.com/company/mindfuelai/

Today I’m chatting with Josh Noble, Principal User Researcher at TruEra. TruEra is working to improve AI quality by developing products that help data scientists and machine learning engineers improve their AI/ML models by combatting things like bias and improving explainability. Throughout our conversation, Josh—who also used to work as a Design Lead at IDEO.org—explains the unique challenges and importance of doing design and user research, even for technical users such as data scientists. He also shares tangible insights on what informs his product design strategy, the importance of measuring product success accurately, and the importance of understanding the current state of a solution when trying to improve it.

Highlights/ Skip to:

Josh introduces himself and explains why it’s important to do design and user research work for technical tools used by data scientists (00:43) The work that TruEra does to mitigate bias in AI as well as their broader focus on AI quality management (05:10) Josh describes how user roles informed TruEra’s design of their upcoming monitoring product, and the emphasis he places on iterating with users (10:24) How Josh approaches striking a balance between displaying extraneous information in the tools he designs vs. removing explainability (14:28) Josh explains how TruEra measures product success now and how they envision that changing in the future (17:59) The difference Josh sees between explainability and interpretability (26:56) How Josh decided to go from being a designer to getting a data science degree (31:08) Josh gives his take on what skills are most valuable as a designer and how to develop them (36:12)

Quotes from Today’s Episode

“We want to make machine learning better by testing it, helping people analyze it, helping people monitor models. Bias and fairness is an important part of that, as is accuracy, as is explainability, and as is more broadly AI quality.” — Josh Noble (05:13)

“These two groups, the data scientists and the machine-learning engineer, they think quite differently about the problems that they need to solve. And they have very different toolsets. … Looking at how we can think about making a product and building tools that make sense to both of those different groups is a really important part of user experience.” – Josh Noble (09:04)

“I’m a big advocate for iterating with users. To the degree possible, get things in front of people so they can tell you whether it works for them or not, whether it fits their expectations or not.” – Josh Noble (12:15)

“Our goal is to get people to think about AI quality differently, not to necessarily change. We don’t want to change their performance metrics. We don’t want to make them change how they calculate something or change a workflow that works for them. We just want to get them to a place where they can bring together our four pillars and build better models and build better AI.” – Josh Noble (17:38)

“I’ve always wanted to know what was going on underneath the design. I think it’s an important part of designing anything to understand how the thing that you are making is actually built.” – Josh Noble (31:56)

“There’s an empathy-building exercise that comes from using these tools and understanding where they come from. I do understand the argument that some designers make. If you want to find a better way to do something, spending a ton of time in the trenches of the current way that it’s done is not always the solution, right?” – Josh Noble (36:12)

“There’s a real empathy that you build and understanding that you build from seeing how your designs are actually implemented that makes you a better teammate. It makes you a better collaborator and ultimately, I think, makes you a better designer because of that.” – Josh Noble (36:46)

“I would say to the non-designers who work with designers, measuring designs is not invalidating the designer. It doesn’t invalidate the craft of design. It shouldn’t be something that designers are hesitant to do. I think it’s really important to understand in a qualitative way what your design is doing and understand in a quantitative way what your design is doing.” – Josh Noble (38:18)

Links TruEra: https://truera.com/ Medium: https://medium.com/@fctry2

Today I’m chatting with Bob Mason, Managing Partner at Argon Ventures. Bob is a VC who seeks out early-stage founders in the ML/AI space and helps them inform their go-to-market, product, and design strategies. In this episode, Bob reveals what he looks for in early-stage data and intelligence startups who are trying to leverage ML/AI. He goes on to explain why it’s important to identify what your strengths are and what you enjoy doing so you can surround yourself with the right team. Bob also shares valuable insight into how to earn trust with potential customers as an early-stage startup, how design impacts a product’s success, and his strategy for differentiating yourself and creating a valuable product outside of the ubiquitous “platform play.” 

Highlights/ Skip to:

Bob explains why and how Argon Ventures focuses their investments in intelligent industry companies (00:53) Brian and Bob discuss the importance of prioritizing go-to-market strategy over technology (03:42) How Bob views the career progression from data science to product management, and the ways in which his own career has paralleled that journey (07:21) The role customer adoption and user experience play for Bob and the companies he invests in, both pre-investment and post-investment (11:10) Brian and Bob discuss the design capabilities of different teams and why Bob feels it’s something leaders need to keep top of mind (15:25) Bob explains his recommendation to seek out quick wins for AI companies who can’t expect customers to wait for an ROI (19:09) The importance Bob sees in identifying early adopters during a sales cycle for early-stage startups (21:34) Bob describes how being customer-centric allows start-ups to build trust, garner quick wins, and inform their product strategy (23:42) Bob and Brian dive into Bob’s belief that solving intrinsic business problems by vertical increases a start-up’s chance of success substantially over “the platform play” (27:29) Bob gives insight into product trends he believes are going to be extremely impactful in the near future (29:05)

Quotes from Today’s Episode

“In a former life, I was a software engineer, founder, and CTO myself, so I have to watch myself to not just geek out on the technology itself, because the most important element when you’re determining if you want to move forward with an investment or not is this: is there a real problem here to be solved, or is this technology in search of a problem?” — Bob Mason (01:51)

“User-centric research is really valuable, particularly at the earliest stages. If you’re just off by a degree or two, several years down the road, that can be a really material roadblock that you hit. And so, starting off on the right foot, I think is super, super valuable.” – Bob Mason (06:12)

“I don’t think the technical folks in an early-stage startup absolve themselves of not being really intimately involved with their go-to-market and who they’re ultimately creating value for.” – Bob Mason (07:07)

“When we’re making an investment decision, startups don’t generally have any customers, and so we don’t necessarily use the signal of long-term customer adoption as a driver for our initial investment decision. But it’s very much top of mind after investment and as we’re trying to build and bring the first version of the product to market. Being very thoughtful and mindful of sort of customer experience and long-term adoption is absolutely critical.” – Bob Mason (11:23)

“If you’re a scientist, the way you’re presenting both raw data and sort of summaries of data could be quite different than if you’re working with a business analyst that’s a few years out of college with a liberal arts degree. How you interpret results and then present those results, I think, is actually a very interesting design problem.” – Bob Mason (18:40)

“I think initially, a lot of early AI startups just kind of assumed that customers would be patient and let the system run, [waiting] 3, 6, 9, 12 months [to get this] magical ROI, and that’s just not how people (buyers) operate.” – Bob Mason (21:00)

“Re: platform plays: Obviously, you could still create a tremendous platform that’s very broad, but we think if you focus on the business problem of that particular vertical or domain, that actually creates a really powerful wedge so you can increase your value proposition. You could always increase the breadth of a platform over time. But if you’re not solving that intrinsic problem at the very beginning, you may never get the chance to survive.” – Bob Mason (28:24)

Links Argon Ventures: https://argon.vc/ LinkedIn: https://www.linkedin.com/in/robertmason/details/experience/ Email: [email protected]

Today I’m chatting with returning guest Tom Davenport, who is a Distinguished Professor at Babson College, a Visiting Professor at Oxford, a Research Fellow at MIT, and a Senior Advisor to Deloitte’s AI practice. He is also the author of three new books (!) on AI and in this episode, we’re discussing the role of product orientation in enterprise data science teams, the skills required, what he’s seeing in the wild in terms of teams adopting this approach, and the value it can create. Back in episode 26, Tom was a guest on my show and he gave the data science/analytics industry an approximate “2 out of 10” rating in terms of its ability to generate value with data. So, naturally, I asked him for an update on that rating, and he kindly obliged. How are you all doing? Listen in to find out!

Highlights / Skip to:

Tom provides an updated rating (between 1-10) as to how well he thinks data science and analytics teams are doing these days at creating economic value (00:44) Why Tom believes that “motivation is not enough for data science work” (03:06) Tom provides his definition of what data products are and some opinions on other industry definitions (04:22) How Tom views the rise of taking a product approach to data roles and why data products must be tied to value (07:55) Tom explains why he feels top down executive support is needed to drive a product orientation (11:51) Brian and Tom discuss how they feel companies should prioritize true data products versus more informal AI efforts (16:26) The trends Tom sees in the companies and teams that are implementing a data product orientation (19:18) Brian and Tom discuss the models they typically see for data teams and their key components (23:18) Tom explains the value and necessity of data product management (34:49) Tom describes his three new books (39:00)

Quotes from Today’s Episode

“Data science in general, I think, has been focused heavily on motivation to fit lines and curves to data points, and that particular motivation certainly isn’t enough, in that even if you create a good model that fits the data, it doesn’t mean at all that it is going to produce any economic value.” – Tom Davenport (03:05)

“If data scientists don’t worry about deployment, then they’re not going to be in their jobs for terribly long because they’re not providing any value to their organizations.” – Tom Davenport (13:25)

“Product also means you’ve got to market this thing if it’s going to be successful. You just can’t assume that because it’s a brilliant algorithm capturing a lot of area under the curve, it’s somehow going to be great for your company.” – Tom Davenport (19:04)

“[PM is] a hard thing, even for people in non-technical roles, because product management has always been a sort of ‘minister without portfolio’ sort of job, and you know, influence without formal authority, where you are responsible for a lot of things happening, but the people don’t report to you, generally.” – Tom Davenport (22:03)

“This collaboration between a human being making a decision and an AI system that might in some cases come up with a different decision but can’t explain itself, that’s a really tough thing to do [well].” – Tom Davenport (28:04)

“This idea that we’re going to use externally-sourced systems for ML is not likely to succeed in many cases because, you know, those vendors didn’t work closely with everybody in your organization” – Tom Davenport (30:21)

“I think it’s unlikely that [organizational gaps] are going to be successfully addressed by merging everybody together in one organization. I think that’s what product managers do is they try to address those gaps in the organization and develop a process that makes coordination at least possible, if not true, all the time.” – Tom Davenport (36:49)

Links Tom’s LinkedIn: https://www.linkedin.com/in/davenporttom/ Tom’s Twitter: https://twitter.com/tdav All-in On AI by Thomas Davenport & Nitin Mittal, 2023 Working With AI by Thomas Davenport & Stephen Miller, 2022 Advanced Introduction to AI in Healthcare by Thomas Davenport, John Glaser, & Elizabeth Gardner, 2022 Competing On Analytics by Thomas Davenport & Jeanne G. Harris, 2007

Today I’m discussing something we’ve been talking about a lot on the podcast recently - the definition of a “data product.” While my definition is still a work in progress, I think it’s worth putting out into the world at this point to get more feedback. In addition to sharing my definition of data products (defined the “producty” way), on today’s episode I also discuss some of the non-technical skills that data product managers (DPMs) in the ML and AI space need if they want to achieve good user adoption of their solutions. I’ll also share my thoughts on whether data scientists can make good data product managers, what a DPM can do to better understand users and stakeholders, and how product and UX design factor into this role.

Highlights/ Skip to:

I introduce my reasons for sharing my definition of a data product (0:46) My definition of data product (7:26) Thinking the “producty” way (8:14) My thoughts on necessary skills for data PMs (in particular, AI & machine learning product management) (12:21) How data scientists can become good data product managers (DPMs) by taking off the data science hat (13:42) Understanding the role of UX design within the context of DPM (16:37) Crafting your sales and marketing strategies to emphasize the value of your product to the people who can use or purchase it (23:07) How to build a team that will help you increase adoption of your data product (30:01) How to build relationships with stakeholders/customers that allow you to find the right solutions for them (33:47) Letting go of a technical identity to develop a new identity as a DPM who can lead a team to build a product that actually gets used (36:32)

Quotes from Today’s Episode

“This is what’s missing in some of the other definitions that I see around data products [...] they’re not talking about it from the customer of the data product lens. And that orientation sums up all of the work that I’m doing and trying to get you to do as well, which is to put the people at the center of the work that you’re doing and not the data science, engineering, tech, or design. I want you to put the people at the center.” (6:12)

“A data product is a data-driven, end-to-end, human-in-the-loop decision support solution that’s so valuable, users would potentially pay to use it.” (7:26)

“I want to plunge all the way in and say, ‘if you want to do this kind of work, then you need to be thinking the product-y way.’ And this means inherently letting go of some of the data science-y way of thinking and the data-first kinds of ways of thinking.” (11:46)

“I’ve read in a few places that data scientists don’t make for good data product managers. [While it may be true that they’re more introverted,] I don’t think that necessarily means that there’s an inherent problem with data scientists becoming good data product managers. I think the main challenge will be—and this is the same thing for almost any career transitioning into product management—knowing when to let go of your former identity and wear the right hat at the right time.” (14:24)

“Make better things for people that will improve their life and their outcomes, and the business value will follow if you’ve properly aligned those two things together.” (17:21)

“The big message here is this: there is always a design and experience, whether it is an API, or a platform, a dashboard, a full application, etc. Since there are no null design choices, how much are you going to intentionally shape that UX, or just pray that it comes out good on the other end? Prayer is not really a reliable strategy. If you want to routinely do this work right, you need to put intention behind it.” (22:33)

“Relationship building is a must, and this is where applying user experience research can be very useful—not just for users, but also with stakeholders. It’s learning how to ask really good questions and learning the feelings, emotions, and reasons why people ask your team to build the thing that they’ve asked for. Learning how to dig into that is really important.” (26:26)

Links Designing for Analytics Community Work With Me Email Record a question

Today I’m chatting with Eugenio Zuccarelli, Research Scientist at MIT Media Lab and Manager of Data Science at CVS. Eugenio explains how he has created multiple algorithms designed to help shape decisions made in life or death situations, such as pediatric cardiac surgery and during the COVID-19 pandemic. Eugenio shares the lessons he’s learned on how to build trust in data when the stakes are life and death. Listen and learn how culture can affect adoption of decision support and ML tools, the impact the delivery of information has on the user's ability to understand and use data, and why Eugenio feels that design is more important than the inner workings of ML algorithms.

Highlights/ Skip to:

Eugenio explains why he decided to work on machine learning models for cardiologists and healthcare workers involved in the COVID-19 pandemic (01:53)  The workflow surgeons would use when incorporating the predictive algorithm and application Eugenio helped develop (04:12) The question Eugenio’s predictive algorithm helps surgeons answer when evaluating whether to use various pediatric cardiac surgical procedures (06:37) The path Eugenio took to build trust with experienced surgeons and drive product adoption and the role of UX (09:42) Eugenio’s approach to identifying key problems and finding solutions using data (14:50) How Eugenio has tracked value delivery and adoption success for a tool that relies on more than just accurate data & predictions, but also surgical skill and patient case complexity (22:26) The design process Eugenio started early on to optimize user experience and adoption (28:40) Eugenio’s key takeaways from a different project that helped government agencies predict what resources would be needed in which areas during the COVID-19 pandemic (34:45)

Quotes from Today’s Episode

“So many people today are developing machine-learning models, but I truly find the most difficult parts to be basically everything around machine learning … culture, people, stakeholders, products, and so on.” — Eugenio Zuccarelli (01:56)

“Developing machine-learning components, cleaning data, developing the machine-learning pipeline—those were the easy steps. The difficult ones were gaining trust, as you said, and developing something that was useful. And talking about trust, it’s especially tricky in the healthcare industry.” — Eugenio Zuccarelli (10:42)

“Because this tennis match, this ping-pong match between what can be done and what’s [the] problem [...] thankfully, we know, of course, it is not really the route to go. We don’t want to develop technology for the sake of it.” — Eugenio Zuccarelli (14:49)

“We put so much effort on the machine-learning side and then the user experience is so key, it’s probably even more important than the inner workings.” — Eugenio Zuccarelli (29:22)

“It was interesting to see exactly how the doctor is really focused on their job and doing it as well as they can, not really too interested in fancy [...] solutions, and so we were really able to not focus too much on appearance or fancy components, but more on usability and readability.” — Eugenio Zuccarelli (33:45)

“People’s ability to trust data, and how this varies from a lot of different entities, organizations, countries, [etc.] This really makes everything tricky. And of course, when you have a pandemic, this acts as a catalyst and enhances all of these cultural components.” — Eugenio Zuccarelli (35:59)

“I think [design success] boils down to delivery. You can package the same information in different ways [so that] it actually answers their questions in the ways that they’re familiar with.” — Eugenio Zuccarelli (37:42)

Links LinkedIn: https://www.linkedin.com/in/jayzuccarelli Twitter: twitter.com/jayzuccarelli Personal website: https://eugeniozuccarelli.com Medium: jayzuccarelli.medium.com