
Event

Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

2022-02-08 – 2025-11-27 Podcasts

Activities tracked

63

Is the value of your enterprise analytics SAAS or AI product not obvious through its UI/UX? Got the data and ML models right...but user adoption of your dashboards and UI isn’t what you hoped it would be?

While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be?

If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SAAS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI and analytics—work that you need to hear about and from whom I hope you can borrow strategies.

I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.

Hashtag: #ExperiencingData.

JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS https://designingforanalytics.com/ed

ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/

Filtering by: AI/ML

Sessions & talks

Showing 26–50 of 63 · Newest first


152 - 10 Reasons Not to Get Professional UX Design Help for Your Enterprise AI or SAAS Analytics Product

2024-09-17 Listen
podcast_episode

In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s ok! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically today, I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SAAS analytics or AI product(s), today I’m going to tell you when you should NOT get help! 

Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin. 

Finally, analytical thinkers, especially data product managers it seems, often want to quantify all forms of value they produce in hard monetary units—and so in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-)  Ready?

Highlights/ Skip to:

(1:52) Going for short, easy wins
(4:29) When you think you have good design sense/taste
(7:09) The impending changes coming with GenAI
(11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible
(15:36) Agile and process FTW?
(18:59) UX design for and with platform products
(21:14) The risk of involving designers who don’t understand data, analytics, AI, or your complex domain considerations
(30:09) Designing after the ML models have been trained—and it’s too late to go back
(34:59) Not tapping professional design help when your user base is small, and you have routine access and exposure to them
(40:01) Explaining the value of UX design investments to your stakeholders when you don’t 100% control the budget or decisions

Quotes from Today’s Episode

“It is true that most impactful design often creates more product and engineering work because humans are messy. While there sometimes are these magic, small GUI-type changes that have big impact downstream, the big picture value of UX can be lost if you’re simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you’re playing inside your team: are you working to produce UX and business outcomes or shipping outputs on time?” (3:18)

“If you’re building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We’ve all seen the challenges of this with LLMs, [when] you’re unable to get it to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, ‘Is the answer not in there,’ or ‘Am I not prompting it correctly?’ If you think that most of this is just a technical data science problem, then don’t bother to invest in UX design work…” (9:52)

“Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person’s life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power—especially in an enterprise context. Working with a designer who solely insists on simplifying everything at all costs regardless of your stated business outcome goals is a red flag—and a reason not to invest in UX design—at least with them!” (12:28)

“I think what an analytics product manager [or] an AI product manager needs to accept is there are other ways to measure the value of UX design’s contribution to your product and to your organization. Let’s say that you have a mission-critical internal data product, it’s used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel like those things… What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there’s a lot of value in that.” (43:33)

“Let’s think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We’re spending money on something that most likely will have zero economic value this year because we’re actually trying not to have to file claims. Yet this industry does very well because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is something greater than whatever the cost of the insurance plan. If your solution can build feelings of confidence and security, what is that worth? Does ‘hard to measure precisely’ necessarily mean ‘low value’?” (47:26)

150 - How Specialized LLMs Can Help Enterprises Deliver Better GenAI User Experiences with Mark Ramsey

2024-08-29 Listen
podcast_episode
Mark Ramsey (Ramsey International), Brian O’Neill (Designing for Analytics)

“Last week was a great year in GenAI,” jokes Mark Ramsey—and it’s a great philosophy to have as LLM tools in particular continue to evolve at such a rapid rate. This week, you’ll get to hear my fun and insightful chat with Mark from Ramsey International about the world of large language models (LLMs) and how we make useful UXs out of them in the enterprise.

Mark shared some fascinating insights about using a company’s website information (data) as a place to pilot a LLM project, avoiding privacy landmines, and how re-ranking of models leads to better LLM response accuracy. We also talked about the importance of real human testing to ensure LLM chatbots and AI tools truly delight users. From amusing anecdotes about the spinning beach ball on macOS to envisioning a future where AI-driven chat interfaces outshine traditional BI tools, this episode is packed with forward-looking ideas and a touch of humor.
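For readers who want to picture the “vectorize the website” pattern Mark describes, here is a minimal retrieval sketch. It is my illustration only—not Ramsey International’s actual stack—using the open-source sentence-transformers library and a plain in-memory index standing in for a real vector database:

```python
# Minimal RAG retrieval sketch -- illustrative only, not Ramsey International's stack.
# Assumes website pages have already been scraped and split into text chunks.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

chunks = [
    "Our clinic offers chiropractic care for back and neck pain.",
    "Appointments can be booked online or by phone.",
    "We accept most major insurance plans.",
]

# "Vectorize" the site content once; in production this would live in a vector database.
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q
    return [chunks[i] for i in np.argsort(-scores)[:k]]

# The retrieved chunks are then passed to the LLM as grounding context,
# optionally after a re-ranking pass like the one Mark mentions.
print(retrieve("Do you take insurance?"))
```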

Highlights/ Skip to:

(0:50) Why is the world of GenAI evolving so fast?
(4:20) How Mark thinks about UX in an LLM application
(8:11) How Mark defines “specialized GenAI”
(12:42) Mark’s consulting work with GenAI / LLMs these days
(17:29) How GenAI can help the healthcare industry
(30:23) Uncovering users’ true feelings about LLM applications
(35:02) Are UIs moving backwards as models progress forward?
(40:53) How will GenAI impact data and analytics teams?
(44:51) Will LLMs be able to consistently leverage RAG and produce proper SQL?
(51:04) Where to find more from Mark and Ramsey International

Quotes from Today’s Episode

“With [GenAI], we have a solution that we’ve built to try to help organizations, and build workflows. We have a workflow that we can run and ask the same question [to a variety of GenAI models] and see how similar the answers are. Depending on the complexity of the question, you can see a lot of variability between the models… [and] we can also run the same question against the different versions of the model and see how it’s improved. Folks want a human-like experience interacting with these models… [and] if the model can start responding in just a few seconds, that gives you much more of a conversational type of experience.” - Mark Ramsey (2:38)

“[People] don’t understand when you interact [with GenAI tools] and it brings tokens back in that streaming fashion, you’re actually seeing inside the brain of the model. Every token it produces is then displayed on the screen, and it gives you that typewriter experience back in the day. If someone has to wait, and all you’re seeing is a logo spinning, from a UX experience standpoint… people feel like the model is much faster if it just starts to produce those results in that streaming fashion. I think in a design, it’s extremely important to take advantage of that [...] as opposed to waiting to the end and delivering the results. Some models support that, and other models don’t.” - Mark Ramsey (4:35)

“All of the data that’s on the website is public information. We’ve done work with several organizations on quickly taking the data that’s on their website, packaging it up into a vector database, and making that be the source for questions that their customers can ask. [Organizations] publish a lot of information on their websites, but people really struggle to get to it. We’ve seen a lot of interest in vectorizing website data, making it available, and having a chat interface for the customer. The customer can ask questions, and it will take them directly to the answer, and then they can use the website as the source information.” - Mark Ramsey (14:04)

“I’m not skeptical at all. I’ve changed much of my [AI chatbot searches] to Perplexity, and I think it’s doing a pretty fantastic job overall in terms of quality. It’s returning an answer with citations, so you have a sense of where it’s sourcing the information from. I think it’s important from a user experience perspective. This is a replacement for broken search, as I really don’t want to read all the web pages and PDFs you have that might be about my chiropractic care query to answer my actual [healthcare] question.” - Brian O’Neill (19:22)
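Mark’s streaming point at (4:35) is easy to see in miniature. In this sketch, stream_tokens() is a hypothetical stand-in for any provider’s streaming API; the point is that the first token appears almost immediately instead of after the whole response has been generated:

```python
# Why streaming feels faster: tokens render as they're generated instead of
# after the full response is done. stream_tokens() is a hypothetical stand-in
# for any model provider's streaming API.
import sys
import time

def stream_tokens():
    """Hypothetical: yields tokens as the model produces them."""
    for tok in ["The ", "answer ", "is ", "42."]:
        time.sleep(0.3)  # simulated per-token generation latency
        yield tok

# Perceived latency: first paint after ~0.3s here, versus ~1.2s if we waited
# for the entire response before showing anything.
for tok in stream_tokens():
    sys.stdout.write(tok)
    sys.stdout.flush()
print()
```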

“We’ve all had great experiences with customer service, and we’ve all had situations where the customer service was quite poor, and we’re going to have that same thing as we begin to [release more] chatbots. We need to make sure we try to alleviate having those bad experiences, and have an exit. If someone is running into a situation where they’d rather talk to a live person, have that ability to route them to someone else. That’s why the robustness of the model is extremely important in the implementation… and right now, organizations like OpenAI and Anthropic are significantly better at that [human-like] experience.” - Mark Ramsey (23:46)

“There’s two aspects of these models: the training aspect and then using the model to answer questions. I recommend to organizations to always augment their content and don’t just use the training data. You’ll still get that human-like experience that’s built into the model, but you’ll eliminate the hallucinations. If you have a model that has been set up correctly, you shouldn’t have to ask questions in a funky way to get answers.” - Mark Ramsey (39:11)

“People need to understand GenAI is not a predictive algorithm. It is not able to run predictions, it struggles with some math, so that is not the focus for these models. What’s interesting is that you can use the model as a step to get you [the answers]. A lot of the models now support functions… when you ask a question about something that is in a database, it actually uses its knowledge about the schema of the database. It can build the query, run the query to get the data back, and then once it has the data, it can reformat the data into something that is a good response back.” - Mark Ramsey (42:02)
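The function-calling flow Mark outlines at (42:02)—model sees the schema, writes a query, runs it, then phrases the result—reduces to a three-step loop. This is a sketch under stated assumptions: call_llm() is a hypothetical placeholder for your model provider’s API, and a production system would validate or sandbox the generated SQL before executing it:

```python
# Sketch of the "LLM builds and runs a SQL query" loop Mark describes.
# call_llm() is a hypothetical stand-in for your model provider's API.
import sqlite3

SCHEMA = "CREATE TABLE orders (id INTEGER, region TEXT, total REAL);"

def call_llm(prompt: str) -> str:
    """Hypothetical: send prompt to an LLM and return its text response."""
    raise NotImplementedError

def answer_question(question: str, db: sqlite3.Connection) -> str:
    # 1. Give the model the schema so it can write SQL against it.
    sql = call_llm(f"Schema:\n{SCHEMA}\nWrite one SQLite query answering: {question}")
    # 2. Run the generated query (real systems validate/sandbox it first).
    rows = db.execute(sql).fetchall()
    # 3. Hand the raw rows back to the model to phrase a readable answer.
    return call_llm(f"Question: {question}\nQuery result: {rows}\nAnswer in plain English.")
```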

Links

Mark on LinkedIn
Ramsey International
Email: mark [at] ramsey.international
Ramsey International's YouTube Channel

149 - What the Data Says About Why So Many Data Science and AI Initiatives Are Still Failing to Produce Value with Evan Shellshear

2024-08-06 Listen
podcast_episode

Guess what? Data science and AI initiatives are still failing here in 2024—despite widespread awareness. Is that news? Candidly, you’ll hear me admit to Evan Shellshear—author of the new book Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics—how much I originally didn’t want to cover this story on my podcast—because it’s not news! However, what is news is what the data behind Evan’s findings says—and guess what? It’s not the technology.

In our chat, Evan shares why he wanted to take a human approach to understanding the root cause of multiple organizations’ failures, and how this approach highlighted the disconnect between data scientists and decision-makers. He explains the human factors at play, such as poor problem surfacing and organizational culture challenges—and how these human-centered design skills are rarely taught or offered to data scientists. The conversation delves into why these failures are more prevalent in data science compared to other fields, attributing it to the complexity and scale of data-related problems. We also discuss how analytically mature companies can mitigate these issues through strategic approaches and stakeholder buy-in. Join us as we dig into these critical insights for improving data science project outcomes.

Highlights/ Skip to:

(4:45) Why are data science projects still failing?
(9:17) Why is the disconnect between data scientists and decision-makers so pronounced relative to, say, engineering?
(13:08) Why are data scientists not getting enough training for real-world problems?
(16:18) What the data says about failure rates for mature data teams vs. immature data teams
(19:39) How to change people’s opinions so they value data more
(25:16) What happens at the stage where the beneficiaries of data don’t actually see the benefits?
(31:09) What are the skills needed to prevent a repeating pattern of creating data products that customers ignore?
(37:10) Where do more mature organizations find non-technical help to complement their data science and AI teams?
(41:44) Are executives and directors aware of the skills needed to level up their data science and AI teams?

Quotes from Today’s Episode

“People know this stuff. It’s not news anymore. And so, the reason why we needed this was really to dig in. And exactly like you did, like, keeping that list of articles is brilliant, and knowing what’s causing the failures and what’s leading to these issues still arising is really important. But at some point, we need to approach this in a scientific fashion, and we need to unpack this, and we need to really delve into the details beyond just the headlines and the articles themselves. And start collating and analyzing this to properly figure out what’s going wrong, and what do we need to do about it to fix it once and for all so you can stop your endless collection, and the AI Incident Database that now has over 3500 entries. It can hang its hat and say, ‘I’ve done my job. It’s time to move on. We’re not failing as we used to.’” - Evan Shellshear (3:01)

“What we did is we took a number of different studies, and we split companies into what we saw as being analytically mature—and this is a common, well-known thing; many maturity frameworks exist across data, across AI, across all different areas—and what we call analytically immature, so those companies that probably aren’t there yet. And what we wanted to draw a distinction is okay, we say 80% of projects fail, or whatever the exact number is, but for who? And for what stage and for what capability? And so, what we then went and did is we were able to take our data and look at which failures are common for analytically immature organizations, and which failures are common for analytically mature organizations, and then we’re able to understand, okay, in the market, how many organizations do we think are analytically mature versus analytically immature, and then we were able to take that 80% failure rate and establish it. For analytically mature companies, the failure rate is probably more like 40%. For analytically immature companies, it’s over 90%, right? And so, you’re exactly right: organizations can do something about it, and they can build capabilities in to mitigate this. So definitely, it can be reduced. Definitely, it can be brought down. You might say, 40% is still too high, but it proves that by bringing in these procedures, you’re completely correct, that it can be reduced.” - Evan Shellshear (14:28)

“What happens with the data science person, however, is typically they’re seen as a cost center—typically, not always; nowadays, that dialog is changing—and what they need to do is find partners across the other parts of the business. So, they’re going to go into the supply chain team, they’ll go into the merchandising team, they’ll go into the banking team, they’ll go into the other teams, and they’re going to find their supporters and winners there, and they’re going to probably build out from there. So, the first step, if you’re a big enough organization and don’t yet have that strategy at the executive level, is to find your friends—and there will be some of the organization who support this data strategy—and get some wins for them.” - Evan Shellshear (24:38)

“It’s not like there’s this box you put one in the other in. Because, like success and failure, there’s a continuum. And companies as they move along that continuum, just like you said, this year, we failed on the lack of executive buy-in, so let’s fix that problem. Next year, we fail on not having the right resources, so we fix that problem. And you move along that continuum, and you build it up. And at some point as you’re going on, that failure rate is dropping, and you’re getting towards that end of the scale where you’ve got those really capable companies that live, eat, and breathe data science and analytics, and so have to have these to be able to survive, otherwise a simple company evolution would have wiped them out, and they wouldn’t exist if they didn’t have that capability, if that’s their core thing.” - Evan Shellshear (18:56)
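Evan’s 40%/90% split at (14:28) squares with the oft-cited ~80% blended failure rate only if mature organizations are a small minority of the market. A quick back-of-the-envelope check (my arithmetic, not a figure from the book):

```python
# Back-of-the-envelope (my arithmetic, not from the book): if a fraction p of
# organizations is analytically mature (40% failure) and the rest immature
# (90% failure), the blended failure rate is p*0.40 + (1-p)*0.90.
# Setting that equal to the oft-cited 80% and solving for p:
p = (0.90 - 0.80) / (0.90 - 0.40)
print(p)  # 0.2 -> the 80% headline is consistent with only ~20% of orgs being mature
```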

“Nothing else could be correct, right? This subjective intuition and all this stuff, it’s never going to be as good as the data. And so, what happens is you, often as a data scientist—and I’ve been subjected to this myself—come in with this arrogance, this kind of data-driven arrogance, right? And it’s not a good thing. It puts up barriers, it creates issues, it separates you from the people.” - Evan Shellshear (27:38)

“Knowing that you’re going to have to go on that journey from day one, you can’t jump from level zero to level five. That’s what all these data maturity models are about, right? You can’t jump from level zero data maturity to level five overnight. You really need to take those steps and build it up.” - Evan Shellshear (45:21)

“What we’re talking about, it’s not new. It’s just old wine in a new skin, and we’re just presenting it for the data science age.” - Evan Shellshear (48:15)

Links

Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics, without the Hype: https://www.routledge.com/Why-Data-Science-Projects-Fail-the-Harsh-Realities-of-Implementing-AI-and-Analytics-without-the-Hype/Gray-Shellshear/p/book/9781032660301
LinkedIn: https://www.linkedin.com/in/eshellshear/
Get the book: 20% off at Routledge.com w/ code dspf20, or get it at Amazon
Why do we still teach people to calculate? (People I Mostly Admire podcast)

148 - LLMs need UX: How to Increase Your B2B Product’s Value with AI (Part 2)

2024-07-23 Listen
podcast_episode

Ready for more ideas about UX for AI and LLM applications in enterprise environments? In part 2 of my topic on UX considerations for LLMs, I explore how an LLM might be used for a fictitious use case at an insurance company—specifically, to help internal tools teams get rapid access to primary qualitative user research. (Yes, it’s a little “meta,” and I’m also trying to nudge you with this hypothetical example—no secret!) ;-) My goal with these episodes is to share questions you might want to ask yourself so that any use of an LLM actually contributes to a positive UX outcome. Join me as I cover the implications for design, the importance of foundational data quality, the balance between creative inspiration and factual accuracy, and the never-ending discussion of how we might handle hallucinations and errors posing as “facts”—all with a UX angle. At the end, I also share a personal story where I used an LLM to help me do some shopping for my favorite product: TRIP INSURANCE! (NOT!)

Highlights/ Skip to:

(1:05) I introduce a hypothetical internal LLM tool and what the goal of the tool is for the team who would use it
(5:31) Improving access to primary research findings for better UX
(10:19) What “quality data” means in a UX context
(12:18) When LLM accuracy maybe doesn’t matter as much
(14:03) How AI and LLMs are opening the door for fresh visioning work
(15:38) Brian’s overall take on LLMs inside enterprise software as of right now
(18:56) Final thoughts on UX design for LLMs, particularly in the enterprise
(20:25) My inspiration for these 2 episodes—and how I had to use ChatGPT to help me complete a purchase on a website that could have integrated this capability right into their website

Quotes from Today’s Episode

“If we accept that the goal of most product and user experience research is to accelerate the production of quality services, products, and experiences, the question is whether or not using an LLM for these types of questions is moving the needle in that direction at all. And secondly, are the potential downsides like hallucinations and occasional fabricated findings worth it? So, this is a design for AI problem.” - Brian T. O’Neill (8:09)

“What’s in our data? Can the right people change it when the LLM is wrong? The data product managers and AI leaders reading this or listening know that the not-so-secret path to the best AI is in the foundational data that the models are trained on. But what does the word quality mean from a product standpoint and a risk reduction one, as seen from the perspective of an end-user—somebody who’s trying to get work done? This is a different type of quality measurement.” - Brian T. O’Neill (10:40)

“When we think about fact retrieval use cases in particular, how easily can product teams—internal or otherwise—and end-users understand the confidence of responses? When responses are wrong, how easily, if at all, can users and product teams update the model’s responses? Errors in large language models may be a significant design consideration when we design probabilistic solutions, and we no longer control what exactly our products and software are going to show to users. If bad UX can include leading people down the wrong path unknowingly, then AI is kind of like the team on the other side of the tug of war that we’re playing.” - Brian T. O’Neill (11:22)

“As somebody who writes a lot for my consulting business, and composes music in another, one of the hardest parts for creators can be the zero-to-one problem of getting started—the blank page—and this is a place where I think LLMs have great potential. But it also means we need to do the proper research to understand our audience, and when or where they’re doing truly generative or creative work—such that we can take a generative UX to the next level that goes beyond delivering banal and obviously derivative content.” - Brian T. O’Neill (13:31)

“One thing I actually like about the hype, investment, and excitement around GenAI and LLMs in the enterprise is that there is an opportunity for organizations here to do some fresh visioning work. And this is a place that designers and user experience professionals can help data teams as we bring design into the AI space.” - Brian T. O’Neill (14:04)

“If there was ever a time to do some new visioning work, I think now is one of those times. However, we need highly skilled design leaders to help facilitate this in order for this to be effective. Part of that skill is knowing who to include in exercises like this, and my perspective, one of those people, for sure, should be somebody who understands the data science side as well, not just the engineering perspective. And as I posited in my seminar that I teach, the AI and analytical data product teams probably need a fourth member. It’s a quartet and not a trio. And that quartet includes a data expert, as well as that engineering lead.” - Brian T. O’Neill (14:38)

Links

Perplexity.ai: https://perplexity.ai
Ideaflow: https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586
My article that inspired this episode

147 - LLMs need UX: How to Increase Your B2B Product’s Value with AI (Part 1)

2024-07-10 Listen
podcast_episode

Let’s talk about design for AI (which more and more, I’m agreeing means GenAI to those outside the data space). The hype around GenAI and LLMs—particularly as it relates to dropping these in as features into a software application or product—seems to me, at this time, to largely be driven by FOMO rather than real value. In this “part 1” episode, I look at the importance of solid user experience design and outcome-oriented thinking when deploying LLMs into enterprise products. Challenges with immature AI UIs, the role of context, the constant game of understanding what accuracy means (and how much this matters), and the potential impact on human workers are also examined. Through a hypothetical scenario, I illustrate the complexities of using LLMs in practical applications, stressing the need for careful consideration of benchmarks and the acceptance of GenAI's risks. 

I also want to note that LLMs are a very immature space in terms of UI/UX design—even if the foundation models continue to mature at a rapid pace. As such, this episode is more about the questions and mindset I would be considering when integrating LLMs into enterprise software more than a suggestion of “best practices.” 

Highlights/ Skip to:

(1:15) Currently, many LLM feature initiatives seem to be mostly driven by FOMO
(2:45) UX considerations for LLM-enhanced enterprise applications
(5:14) Challenges with LLM UIs / user interfaces
(7:24) Measuring improvement in UX outcomes with LLMs
(10:36) Accuracy in LLMs and its relevance in enterprise software
(11:28) Illustrating key considerations for implementing an LLM-based feature
(19:00) Leadership and context in AI deployment
(19:27) Determining UX benchmarks for using LLMs
(20:14) The dynamic nature of LLM hallucinations and how we design for the unknown
(21:16) Closing thoughts on Part 1 of designing for AI and LLMs

Quotes from Today’s Episode

“While many product teams continue to race to deploy some sort of GenAI and especially LLMs into their products—particularly this is in the tech sector for commercial software companies—the general sense I’m getting is that this is still more about FOMO than anything else.” - Brian T. O’Neill (2:07)

“No matter what the technology is, a good user experience design foundation starts with not doing any harm, and hopefully going beyond usable to be delightful. And adding LLM capabilities into a solution is really no different. So, we still need to have outcome-oriented thinking on both our product and design teams when deploying LLM capabilities into a solution. This is a cornerstone of good product work.” - Brian T. O’Neill (3:03)

“So, challenges with LLM UIs and UXs, right—user interfaces and experiences. The most obvious challenge to me right now with large language model interfaces is that while we’ve given users tremendous flexibility in the form of a Google search-like interface, we’ve also in many cases limited the UX of these interactions to a text conversation with a machine. We’re back to the CLI in some ways.” - Brian T. O’Neill (5:14)

“Before and after we insert an LLM into a user’s workflow, we need to know what an improvement in their life or work actually means.” - Brian T. O’Neill (7:24)

“If it would take the machine a few seconds to process a result versus what might take a day for a worker, what’s the role and purpose of that worker going forward? I think these are all considerations that need to be made, particularly if you’re concerned about adoption, which a lot of data product leaders are.” - Brian T. O’Neill (10:17)

“So, there’s no right or wrong answer here. These are all range questions, and they’re leadership questions, and context really matters. They are important to ask, particularly when we have this risk of reacting to incorrect information that looks plausible and believable because of how these LLMs tend to respond to us with a positive sheen much of the time.” - Brian T. O’Neill (19:00)

Links

View Part 1 of my article on UI/UX design considerations for LLMs in enterprise applications:  https://designingforanalytics.com/resources/ui-ux-design-for-enterprise-llms-use-cases-and-considerations-for-data-and-product-leaders-in-2024-part-1/

146 - (Rebroadcast) Beyond Data Science - Why Human-Centered AI Needs Design with Ben Shneiderman

2024-06-25 Listen
podcast_episode
Brian T. O’Neill, Ben Shneiderman (University of Maryland)

Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.

I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.

In our chat, we covered:

Ben's career studying human-computer interaction and computer science (0:30)
'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems (3:55)
'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI (12:56)
'There’s no such thing as an autonomous device': Designing human control into AI systems (18:16)
A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences (21:08)
Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI systems and why explainable AI (XAI) matters (30:34)
Ben's upcoming book on human-centered AI (35:55)

Resources and Links:

People-Centered Internet: https://peoplecentered.net/
Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
Partnership on AI: https://www.partnershiponai.org/
AI Incident Database: https://www.partnershiponai.org/aiincidentdatabase/
University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
Ben on Twitter: https://twitter.com/benbendc

Quotes from Today’s Episode

The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)

The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)

Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)

There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)

Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)

Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps—like Amazon’s seven-step checkout process—where you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)

145 - Data Product Success: Adopting a Customer-Centric Approach With Malcolm Hawker, Head of Data Management at Profisee

2024-06-11 Listen
podcast_episode

Wait, I’m talking to a head of data management at a tech company? Why!? Well, today I’m joined by Malcolm Hawker to get his perspective on data products and what he’s seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? He was a head of product in prior roles, and for several years, I’ve enjoyed Malcolm’s musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We also had a chance to meet at CDOIQ in 2023, and he went on my “need to do an episode” list!

According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they’re seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn from approaching problem-solving with a product orientation.

Highlights/ Skip to:

Malcolm’s definition of a data product (2:10)
Understanding your customers’ needs is the first step toward quantifying the benefits of your data product (6:34)
How product makers can gain access to users to build more successful products (11:36)
Answering the UX question to get past the adoption stage and provide business value (16:03)
Data experts must develop business expertise if they want to be seen as equals by potential customers (20:07)
What people really mean by “data culture” (23:02)
Malcolm’s data product journey and his changing perspective (32:05)
Using empathy to provide a better UX in design and data (39:24)
Avoiding the death of data science by becoming more product-driven (46:23)
Where the majority of data professionals currently land on their view of product management for data products (48:15)

Quotes from Today’s Episode

“My definition of a data product is something that is built by a data and analytics team that solves a specific customer problem that the customer would otherwise be willing to pay for. That’s it.” - Malcolm Hawker (3:42)

“You need to observe how your customer uses data to make better decisions, optimize a business process, or to mitigate business risk. You need to know how your customers operate at a very, very intimate level, arguably, as well as they know how their business processes operate.” - Malcolm Hawker (7:36)

“So, be a problem solver. Be collaborative. Be somebody who is eager to help make your customers’ lives easier. You hear "no" when people think that you’re a burden. You start to hear more “yeses” when people think that you are actually invested in helping make their lives easier.” - Malcolm Hawker (12:42)

“We [data professionals] put data on a pedestal. We develop this mindset that the data matters more—as much or maybe even more than the business processes, and that is not true. We would not exist if it were not for the business. Hard stop.” - Malcolm Hawker (17:07)

“I hate to say it, I think a lot of this data stuff should kind of feel invisible in that way, too. It’s like this invisible ally that you’re not thinking about the dashboard; you just access the information as part of your natural workflow when you need insights on making a decision, or a status check that you’re on track with whatever your goal was. You’re not really going out of mode.” - Brian O’Neill (24:59)

“But you know, data people are basically librarians. We want to put things into classifications that are logical and work forwards and backwards, right? And in the product world, sometimes they just don’t, where you can have something be a product and be a material to a subsequent product.” - Malcolm Hawker (37:57)

“So, the broader point here is just more of a mindset shift. And you know, maybe these things aren’t necessarily a bad thing, but how do we become a little more product- and customer-driven so that we avoid situations where everybody thinks what we’re doing is a time waster?” - Malcolm Hawker (48:00)

Links

Profisee: https://profisee.com/
LinkedIn: https://www.linkedin.com/in/malhawker/
CDO Matters: https://profisee.com/cdo-matters-live-with-malcolm-hawker/

144 - The Data Product Debate: Essential Tech or Excessive Effort? with Shashank Garg, CEO of Infocepts (Promoted Episode)

2024-05-28 Listen
podcast_episode
Shashank Garg (Infocepts), Brian O’Neill (Designing for Analytics)

Welcome to another curated, Promoted Episode of Experiencing Data! 

In episode 144, Shashank Garg, Co-Founder and CEO of Infocepts, joins me to explore whether all this discussion of data products out on the web actually has substance and is worth the perceived extra effort. Do we always need to take a product approach for ML and analytics initiatives? Shashank dives into how Infocepts approaches the creation of data solutions that are designed to be actionable within specific business workflows—and as I often do, I started out by asking Shashank how he and Infocepts define the term “data product.” We discuss a few real-world applications Infocepts has built, the measurable impact of these data products, and some of the challenges they’ve faced that your team might face as well. Skill sets also came up: who does design? Who takes ownership of the product/value side? And of course, we touch a bit on GenAI.

Highlights/ Skip to

Shashank gives his definition of data products (01:24)
We tackle the challenges of user adoption in data products (04:29)
We discuss the crucial role of integrating actionable insights into data products for enhanced decision-making (05:47)
Shashank shares insights on the evolution of data products from concept to practical integration (10:35)
We explore the challenges and strategies in designing user-centric data products (12:30)
I ask Shashank about typical environments and challenges when starting new data product consultations (15:57)
Shashank explains how Infocepts incorporates AI into their data solutions (18:55)
We discuss the importance of understanding user personas and engaging with actual users (25:06)
Shashank describes the roles involved in data product development’s ideation and brainstorming stages (32:20)
The issue of proxy users not truly representing end-users in data product design is examined (35:47)
We consider how organizations are adopting a product-oriented approach to their data strategies (39:48)
Shashank and I delve into the implications of GenAI and other AI technologies on product orientation and user adoption (43:47)
Closing thoughts (51:00)

Quotes from Today’s Episode

“Data products, at least to us at Infocepts, refers to a way of thinking about and organizing your data in a way so that it drives consumption, and most importantly, actions.” - Shashank Garg (1:44)

“The way I see it is [that] the role of a DPM (data product manager)—whether they have the title or not—is benefits creation. You need to be responsible for benefits, not for outputs. The outputs have to create benefits or it doesn’t count. Game over.” - Brian O’Neill (10:07)

“We talk about bridging the gap between the worlds of business and analytics... There’s a huge gap between the perception of users and the tech leaders who are producing it.” - Shashank Garg (17:37)

“IT leaders often limit their roles to provisioning their secure data, and then they rely on businesses to be able to generate insights and take actions. Sometimes this handoff works, and sometimes it doesn’t because of quality governance.” - Shashank Garg (23:02)

“Data is the kind of field where people can react very, very quickly to what’s wrong.” - Shashank Garg (29:44)

“It’s much easier to get to a good prototype if we know what the inputs to a prototype are, which include data about the people who are going to use the solution, their usage scenarios, use cases, attitudes, beliefs… all these kinds of things.” - Brian O’Neill (31:49)

“For data, you need a separate person, and then for designing, you need a separate person, and for analysis, you need a separate person—the more you can combine… I don’t think you can create super-humans who can do all three, four disciplines, but at least two disciplines and can appreciate the third one—that makes it easier.” - Shashank Garg (39:20)

“When we think of AI, we’re all talking about multiple different delivery methods here. I think AI is starting to become GenAI to a lot of non-data people. It’s like their—everything is GenAI.” - Brian O'Neill (43:48)

Links

Infocepts website: https://www.infocepts.ai/
Shashank Garg on LinkedIn: https://www.linkedin.com/in/shashankgarg/
Top 5 Data & AI initiatives for business success: https://www.infocepts.ai/downloads/top-5-data-and-ai-initiatives-to-drive-business-growth-in-2024-beyond/

143 - The (5) Top Reasons AI/ML and Analytics SAAS Product Leaders Come to Me For UI/UX Design Help

2024-05-14 Listen
podcast_episode
Brian O’Neill (Designing for Analytics)

Welcome back! In today's solo episode, I share the top five struggles that enterprise SAAS leaders in the analytics/insight/decision-support space have that most frequently lead them to think they have a UI/UX design problem that must be addressed. A lot of today's episode will talk about “slow creep”: unaddressed design problems that gradually build up over time and begin to impact both UX and your revenue negatively. I will also share 20 UI and UX design problems I often see (even if clients do not!) that, when left unaddressed, may create sales friction, adoption problems, churn, or unhappy end users. If you work at a software company or are directly monetizing an ML or analytical data product, this episode is for you!

Highlights/ Skip to 

I discuss how specific UI/UX design problems can significantly impact business performance (02:51)
I discuss five common reasons why enterprise software leaders typically reach out for help (04:39)
The 20 common symptoms I've observed in client engagements that indicate the need for professional UI/UX intervention or training (13:22)
The dangers of adding too many features or customization options and how they can overwhelm users (16:00)
The issues of integrating AI into user interfaces and UXs without proper design thinking (30:08)
I encourage listeners to apply the insights shared to improve their data products (48:02)

Quotes from Today’s Episode

“One of the problems with bad design is that some of it we can see and some of it we can't—unless you know what you're looking for.” - Brian O’Neill (02:23)

“Design is usually not top of mind for an enterprise software product, especially one in the machine learning and analytics space. However, if you have human users, even enterprise ones, their tolerance for bad software is much lower today than in the past.” - Brian O’Neill (13:04)

“Early on when you're trying to get product market fit, you can't be everything for everyone. You need to be an A+ experience for the person you're trying to satisfy.” - Brian O’Neill (15:39)

“Often when I see customization, it is mostly used as a crutch for not making real product strategy and design decisions.” - Brian O’Neill (16:04)

“Customization of data and dashboard products may be more of a tax than a benefit. In the marketing copy, customization sounds like a benefit... until you actually go in and try to do it. It puts the mental effort to design a good solution on the user.” - Brian O’Neill (16:26)

“We need to think strategically when implementing GenAI, or just AI in general, into the product UX because it won't automatically help drive sales or increase business value.” - Brian O’Neill (20:50)

“A lot of times our analytics and machine learning tools… are insight decision support products. They're supposed to be rooted in facts and data, but when it comes to designing these products, there's not a whole lot of data and facts that are actually informing the product design choices.” - Brian O’Neill (30:37)

“If your IP is that special, but also complex, it needs the proper UI/UX design treatment so that the value can be surfaced in such a way someone is willing to pay for it, if not also find it indispensable and delightful.” - Brian O’Neill (45:02)

Links

The (5) big reasons AI/ML and analytics product leaders invest in UI/UX design help: https://designingforanalytics.com/resources/the-5-big-reasons-ai-ml-and-analytics-product-leaders-invest-in-ui-ux-design-help/
Subscribe for free insights on designing useful, high-value enterprise ML and analytical data products: https://designingforanalytics.com/list
Access my free frameworks, guides, and additional reading for SAAS leaders on designing high-value ML and analytical data products: https://designingforanalytics.com/resources
Need help getting your product’s design/UX on track—so you can see more sales, less churn, and higher user adoption? Schedule a free 60-minute Discovery Call with me and I’ll give you my read on your situation and my recommendations to get ahead: https://designingforanalytics.com/services/

142 - Live Webinar Recording: My UI/UX Design Audit of a New Podcast Analytics Service w/ Chris Hill (CEO, Humblepod)

2024-04-30 Listen
podcast_episode

Welcome to a special edition of Experiencing Data. This episode is the audio capture from a live Crowdcast video webinar I gave on April 26th, 2024, where I conducted a mini UI/UX design audit of a new podcast analytics service that Chris Hill, CEO of Humblepod, is working on to help podcast hosts grow their shows. Humblepod is also the team behind the scenes of Experiencing Data, and Chris had asked me to take a look at his new “Listener Lifecycle” tool to see if we could find ways to improve the UX and visualizations in it, how we might productize this MVP in the future, and how improving the tool’s design might help Chris show his prospective podcast clients how their listener data could help them grow their listenership and “true fans.”

On a personal note, it was fun to talk to Chris on the show given we speak every week:  Humblepod has been my trusted resource for audio mixing, transcription, and show note summarizing for probably over 100 of the most recent episodes of Experiencing Data. It was also fun to do a “live recording” with an audience—and we did answer questions in the full video version. (If you missed the invite, join my Insights mailing list to get notified of future free webinars).

To watch the full audio and video recording on Crowdcast, free, head over to: https://www.crowdcast.io/c/podcast-analytics-ui-ux-design

Highlights/ Skip to:

Chris talks about using data to improve podcasts and his approach to podcast numbers (03:06)
Chris introduces the Listener Lifecycle model which informed the dashboard design (08:17)
Chris and I discuss the importance of labeling and terminology in analytics UIs (11:00)
We discuss designing for practical use of analytics dashboards to provide actionable insights (17:05)
We discuss the challenges podcast hosts face in understanding and utilizing data effectively and how design might help (21:44)
I discuss how my CED UX framework for advanced analytics applications helps to facilitate actionable insights (24:37)
I highlight the importance of presenting data effectively and in a way that centers on user needs (28:50)
I express challenges users may have with podcast rankings and the reliability of data sources (34:24)
Chris and I discuss tailoring data reports to meet the specific needs of clients (37:14)

Quotes from Today’s Episode

“The irony for me as someone who has a podcast about machine learning and analytics and design is that I basically never look at my analytics.” - Brian O’Neill (01:14)

“The problem that I have found in podcasting is that the number that everybody uses to gauge whether a podcast is good or not is the download number… But there’s a lot of other factors in a podcast that can tell you how successful it’s going to be… where you can pull levers to… grow your show, or engage more with an audience.” - Chris Hill (03:20)

“I have a framework for user experience design for analytics called CED, which stands for Conclusions, Evidence, Data… The basic idea is really simple: lead your analytic service with conclusions.” - Brian O’Neill (24:37)

“Where the eyes glaze over is when tools are mostly about evidence generators, and we just give everybody the evidence, but there’s no actual analysis about how [this is] helping me improve my life or my business. It’s just evidence. I need someone to put that together.” - Brian O’Neill (25:23)

“Sometimes the data doesn’t provide enough of a conclusion about what to do… This is where your opinion starts to matter.” - Brian O’Neill (26:07)

“It sounds like a benefit, but drilling down for most people into analytics stuff is usually a tax unless you’re an analyst.” - Brian O’Neill (27:39)

“Where’s the source of this data, and who decided what these numbers are? Because so much of this stuff… is not shared. As someone who’s in this space, it’s not even that it’s confusing. It’s more like, you got to distill this down for me.” - Brian O’Neill (34:57)

“Your clients are probably going to glaze over at this level of data because it’s not helping them make any decision about what to change.” - Brian O’Neill (37:53)
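To make the CED (Conclusions, Evidence, Data) ordering concrete, here is a toy sketch of how an analytics view might be layered; the metric names and numbers are invented for illustration and are not from Chris’s Listener Lifecycle tool:

```python
# Toy illustration of the CED ordering -- conclusions first, then the
# evidence behind them, with raw data last as an optional drill-down.
# All names and numbers below are invented for illustration.
report = [
    ("Conclusion", "Shorter intros appear to retain more listeners; try a 30s intro."),
    ("Evidence",   "Median completion is 68% for episodes with <60s intros vs 52% otherwise."),
    ("Data",       "Per-episode table of downloads and completion rates (drill-down)."),
]

# Render top-down so the 'so what' leads and the evidence and data support it.
for label, content in report:
    print(f"{label}: {content}")
```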

Links Watch the original Crowdcast video recording of this episode Brian’s CED UX Framework for Advanced Analytics Solutions Join Brian’s Insights mailing list

139 - Monetizing SAAS Analytics and The Challenges of Designing a Successful Embedded BI Product (Promoted Episode)

2024-03-19 Listen
podcast_episode
Zalak Trivedi (Sigma Computing), Brian O’Neill (Designing for Analytics)

This week on Experiencing Data, something new as promised at the beginning of the year. Today, I’m exploring the world of embedded analytics with Zalak Trivedi from Sigma Computing—and this is also the first approved Promoted Episode on the podcast. In today’s episode, Zalak shares his journey as the product lead for Sigma’s embedded analytics and reporting solution, which seeks to accelerate and simplify how SAAS companies deploy decision support dashboards to their customers. Right there, we have the first challenge that Zalak was willing to dig into with me: designing a platform UX when we have multiple stakeholder and user types. In Sigma’s case, this means Sigma’s buyers, the developers who work at these SAAS companies to integrate Sigma into their products, and then the actual customers of these SAAS companies who will be the final end users of the resulting dashboards. We also discuss the challenges of creating products that serve both beginners and experts, and how AI is being used in the BI industry.

Highlights/ Skip to:

I introduce Zalak Trivedi from Sigma Computing to the show (03:15) Zalak shares his journey leading the vision for embedded analytics at Sigma and explains what Sigma looks like when implemented into a customer’s SAAS product (03:54) Zalak and I discuss the challenge of integrating Sigma's analytics into various companies' software, since they need to account for a variety of stakeholders (09:53) We explore Sigma's team approach to user experience with product management, design, and technical writing (15:14) Zalak reveals how Sigma leverages telemetry to understand and improve user interactions with their products (19:54) Zalak outlines why Sigma is a faster and more supportive alternative to building your own analytics (27:21) We cover data monetization, specifically looking at how SAAS companies can monetize analytics and insights (32:05) Zalak highlights how Sigma is integrating AI into their BI solution (36:15) Zalak shares his customers' current pain points and interests (40:25) We wrap up with final thoughts and ways to connect with Zalak and learn more about Sigma (49:41)

Quotes from Today’s Episode "Something I’m really excited about personally that we are working on is [moving] beyond analytics to help customers build entire data applications within Sigma. This is something we are really excited about as a company, and marching towards [achieving] this year." - Zalak Trivedi (04:04)

“The whole point of an embedded analytics application is that it should look and feel exactly like the application it’s embedded in, and the workflow should be seamless.” - Zalak Trivedi (09:29) 

“We [at Sigma] had to switch the way that we were thinking about personas. It was not just about the analysts or the data teams; it was more about how do we give the right tools to the [SAAS] product managers and developers to embed Sigma into their product.” - Zalak Trivedi (11:30)

“You can’t not have a design, and you can’t not have a user experience. There’s always an experience with every tool, solution, product that we use, whether it emerged organically as a byproduct, or it was intentionally created through knowledge data... it was intentional” - Brian O’Neill (14:52)

“If we find that [in] certain user experiences, people are tripping up, and they’re not able to complete an entire workflow, we flag that, and then we work with the product managers, or [with] our customers essentially, and figure out how we can actually simplify these experiences.” - Zalak Trivedi (20:54)

“We were able to convince many small to medium businesses and startups to sign up with Sigma. The success they experienced after embedding Sigma was tremendous. Many of our customers managed to monetize their existing data within weeks, or at most, a couple of months, with lean development teams of two to three developers and a few business-side personnel, generating seven-figure income streams from that.” - Zalak Trivedi (32:05)

“At Sigma, our stance is, let’s not just add AI for the sake of adding AI. Let’s really identify [where] in the entire user journey does the intelligence really lie, and where are the different friction points, and let’s enhance those experiences.” - Zalak Trivedi (37:38)

“Every time [we at Sigma Computing] think about a new feature or functionality, we have to ensure it works for both the first-degree persona and the second-degree persona, and consider how it will be viewed by these different personas, because that is not the primary persona for which the foundation of the product was built.” - Zalak Trivedi (48:08)

Links Sigma Computing: https://sigmacomputing.com

Email: [email protected] 

LinkedIn: https://www.linkedin.com/in/trivedizalak/

Sigma Computing Embedded: https://sigmacomputing.com/embedded

About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted

138 - VC Spotlight: The Impact of AI on SAAS and Data/Developer Products in 2024 w/ Ellen Chisa of BoldStart Ventures

2024-03-05 Listen
podcast_episode
Ellen Chisa (BoldStart Ventures), Brian T. O’Neill

In this episode of Experiencing Data, I speak with Ellen Chisa, Partner at BoldStart Ventures, about what she’s seeing in the venture capital space around AI-driven products and companies—particularly with all the new GenAI capabilities that have emerged in the last year. Ellen and I first met when we were both engaged in travel tech startups in Boston over a decade ago, so it was great to get her current perspective being on the “other side” of products and companies working as a VC.  Ellen draws on her experience in product management and design to discuss how AI could democratize software creation and streamline backend coding, design integration, and analytics. We also delve into her work at Dark and the future prospects for developer tools and SaaS platforms. Given Ellen’s background in product management, human-centered design, and now VC, I thought she would have a lot to share—and she did!

Highlights/ Skip to: I introduce the show and my guest, Ellen Chisa (00:00) Ellen discusses her transition from product and design to venture capital with BoldStart Ventures (01:15) Ellen notes a shift from initial AI prototypes to more refined products, focusing on building and testing with minimal data (03:22) Ellen mentions BoldStart Ventures' focus on early-stage companies providing developer and data tooling for businesses (07:00) Ellen discusses what she learned from her time at Dark and Lola about narrowing target user groups for technology products (11:54) Ellen shares her insights into the importance of user experience in product design and the process venture capitalists use to make sure it meets user needs (15:50) Ellen gives us her take on the impact of AI on creating new opportunities for data tools and engineering solutions (20:00) Ellen and I explore the future of user interfaces, and how AI tools could enhance UI/UX for end users (25:28) Closing remarks and the best way to find Ellen online (32:07)

Quotes from Today’s Episode “It's a really interesting time in the venture market because on top of the Gen AI wave, we obviously had the macroeconomic shift. And so we've seen a lot of people are saying the companies that come out now are going to be great companies because they're a little bit more capital-constrained from the beginning, typically, and they'll grow more thoughtfully and really be thinking about how do they build an efficient business.” - Ellen Chisa (03:22)

“We have this big technological shift around AI-enabled companies, and I think one of the things I’ve seen is, if you think back to a year ago, we saw a lot of early prototyping, and so there were like a couple of use cases that came up again and again.” - Ellen Chisa (03:42)

“I don't think I've heard many pitches from founders who consider themselves data scientists first. We definitely get some from ML engineers and people who think about data architecture, for sure.” - Ellen Chisa (05:06)

“I still prefer GUI interfaces to voice or text usually, but I think that might be an uncanny valley sort of thing where if you think of people who didn’t have technology growing up, they’re more comfortable with the more human interaction, and then you get, like, a chunk of people who are digital natives who prefer it.”- Ellen Chisa (24:51)

[Citing some excellent Boston-area restaurants!] “The Arc browser just shipped a bunch of new functionality, where instead of opening a bunch of tabs, you can say, “Open the recipe pages for Oleana and Sarma,” and it just opens both of them, and so it’s like multiple search queries at once.” - Ellen Chisa (27:22)

“The AI wave of technology biases towards people who already have products [in the market] and have existing datasets, and so I think everyone [at tech companies] is getting this big, top-down mandate from their executive team, like, ‘Oh, hey, you have to do something with AI now.’” - Ellen Chisa (28:37)

“I think it’s hard to really grasp what an LLM is until you do a fair amount of experimentation on your own. The experience of asking ChatGPT a simple search question compared to the experience of trying to train it to do something specific for you are quite different experiences. Even beyond that, there’s a tool called superwhisper that I like that you can take audio content and end up with transcripts, but you can give it prompts to change your transcripts as you’re going. So, you can record something, and it will give you a different output if you say you’re recording an email compared to [if] you’re recording a journal entry compared to [if] you’re recording the transcript for a podcast.”- Ellen Chisa (30:11)

Links Boldstart Ventures: https://boldstart.vc/ LinkedIn: https://www.linkedin.com/in/ellenchisa/ Personal website: https://ellenchisa.com Email: [email protected]

137 - Immature Data, Immature Clients: When Are Data Products the Right Approach? feat. Data Product Architect, Karen Meppen

2024-02-20 Listen
podcast_episode

This week, I'm chatting with Karen Meppen, a founding member of the Data Product Leadership Community and a Data Product Architect and Client Services Director at Hakkoda. Today, we're tackling the difficult topic of developing data products in situations where a product-oriented culture and data infrastructures may still be emerging or “at odds” with a human-centered approach. Karen brings extensive experience and strong convictions about how to navigate the early stages of data maturity effectively. Together we look at the major hurdles that businesses encounter when trying to properly exploit data products, as well as the necessity of leadership support and strategy alignment in these initiatives. Karen's insights offer a roadmap for those seeking to adopt a product- and UX-driven methodology when significant tech or cultural hurdles may exist.

Highlights/ Skip to:

I introduce Karen Meppen and the challenges of dealing with data products in places where the data and tech aren't quite there yet (00:00) Karen shares her thoughts on what it's like working with "immature data" (02:27) Karen breaks down what a data product actually is (04:20) Karen and I discuss why having executive buy-in is crucial for moving forward with data products (07:48) The sometimes fuzzy definition of "data products" (12:09) Karen defines “shadow data teams” and explains how they sometimes conflict with tech teams (17:35) How Karen identifies the nature of each team to overcome common hurdles of connecting tech teams with business units (18:47) How she navigates conversations with tech leaders who think they already understand the requirements of business users (22:48) Using design prototypes and design reviews with different teams to make sure everyone is on the same page about UX (24:00) Karen shares stories from earlier in her career that led her to embrace human-centered design to ensure data products actually meet user needs (28:29) We reflect on our chat about UX, data products, and the “producty” approach to ML and analytics solutions (42:11)

Quotes from Today’s Episode "It’s not really fair to get really excited about what we hear about or see on LinkedIn, at conferences, etc. We get excited about the shiny things, and then want to go straight to it when [our] organization [may not be ] ready to do that, for a lot of reasons." - Karen Meppen (03:00)

"If you do not have support from leadership and this is not something [they are]  passionate about, you probably aren’t a great candidate for pursuing data products as a way of working." - Karen Meppen (08:30)

"Requirements are just friendly lies." - Karen, quoting Brian about how data teams need to interpret stakeholder requests  (13:27)

"The greatest challenge that we have in technology is not technology, it’s the people, and understanding how we’re using the technology to meet our needs." - Karen Meppen (24:04)

"You can’t automate something that you haven’t defined. For example, if you don’t have clarity on your tagging approach for your PII, or just the nature of all the metadata that you’re capturing for your data assets and what it means or how it’s handled—to make it good, then how could you possibly automate any of this that hasn’t been defined?" - Karen Meppen (38:35)

"Nothing upsets an end-user more than lifting-and-shifting an existing report with the same problems it had in a new solution that now they’ve never used before." - Karen Meppen (40:13)

“Early maturity may look different in many ways depending upon the nature of business you’re doing, the structure of your data team, and how it interacts with folks.” - Karen Meppen (42:46)

Links Data Product Leadership Community: https://designingforanalytics.com/community/ Karen Meppen on LinkedIn: https://www.linkedin.com/in/karen--m/ Hakkōda, Karen's company, for more insights on data products and services: https://hakkoda.io/

134 - What Sanjeev Mohan Learned Co-Authoring “Data Products for Dummies”

2024-01-09 Listen
podcast_episode
Sanjeev Mohan (Gartner (former)) , Brian T. O’Neill

In this episode, I’m chatting with former Gartner analyst Sanjeev Mohan, who is the co-author of Data Products for Dummies. Throughout our conversation, Sanjeev shares his expertise on the evolution of data products, and what he’s seen as a result of implementing practices that prioritize solving for use cases and business value. Sanjeev also shares a new approach to structuring organizations to best implement ownership and accountability for data product outcomes. Sanjeev and I also explore the common challenges of product adoption and who is responsible for user experience. I purposefully had Sanjeev on the show because I think we have pretty different perspectives from which we see the data product space.

Highlights/ Skip to:

I introduce Sanjeev Mohan, co-author of Data Products for Dummies (00:39) Sanjeev expands on the concept of writing a “for Dummies” book (00:53) Sanjeev shares his definition of a data product, including both a technical and a business definition (01:59) Why Sanjeev believes organizational changes and accountability are the keys to preventing the acceleration of shipping data products with little to no tangible value (05:45) How Sanjeev recommends getting buy-in for data product ownership from other departments in an organization (11:05) Sanjeev and I explore adoption challenges and the topic of user experience (13:23) Sanjeev explains what role is responsible for user experience and design (19:03) Who should be responsible for defining the metrics that determine business value (28:58) Sanjeev shares some case studies of companies who have adopted this approach to data products and their outcomes (30:29) Where companies are finding data product managers currently (34:19) Sanjeev expands on his perspective regarding the importance of prioritizing business value and use cases (40:52) Where listeners can get Data Products for Dummies, and learn more about Sanjeev’s work (44:33)

Quotes from Today’s Episode “You may slap a label of data product on existing artifact; it does not make it a data product because there’s no sense of accountability. In a data product, because they are following product management best practices, there must be a data product owner or a data product manager. There’s a single person [responsible for the result]. — Sanjeev Mohan (09:31)

“I haven’t even mentioned the word data mesh because data mesh and data products, they don’t always have to go hand-in-hand. I can build data products, but I don’t need to go into the—do all of data mesh principles.” – Sanjeev Mohan (26:45)

“We need to have the right organization, we need to have a set of processes, and then we need a simplified technology which is standardized across different teams. So, this way, we have the benefit of reusing the same technology. Maybe it is Snowflake for storage, DBT for modeling, and so on. And the idea is that different teams should have the ability to bring their own analytical engine.” – Sanjeev Mohan (27:58)

“Generative AI, right now as we are recording, is still in a prototyping phase. Maybe in 2024, it’ll go heavy-duty production. We are not in prototyping phase for data products for a lot of companies. They’ve already been experimenting for a year or two, and now they’re actually using them in production. So, we’ve crossed that tipping point for data products.” – Sanjeev Mohan (33:15)

“Low adoption is a problem that’s not just limited to data products. How long have we had data catalogs, but they have low adoption. So, it’s a common problem.” – Sanjeev Mohan (39:10)

“That emphasis on technology first is a wrong approach. I tell people that I’m sorry to burst your bubble, but there are no technology projects, there are only business projects. Technology is an enabler. You don’t do technology for the sake of technology; you have to serve a business cause, so let’s start with that and keep that front and center.” – Sanjeev Mohan (43:03)

Links Data Products for Dummies: https://www.dataops.live/dataproductsfordummies “What Exactly is A Data Product” article: https://medium.com/data-mesh-learning/what-exactly-is-a-data-product-7f6935a17912 It Depends: https://www.youtube.com/@SanjeevMohan Chief Data Analytics and Product Officer of Equifax: https://www.youtube.com/watch?v=kFY7WGc-jFM SanjMo Consulting: https://www.sanjmo.com/ dataops.live: https://dataops.live dataops.live/dataproductsfordummies: https://dataops.live/dataproductsfordummies LinkedIn: https://www.linkedin.com/in/sanjmo/ Medium articles: https://sanjmo.medium.com

133 - New Experiencing Data Interviews Coming in January 2024

2023-12-26 Listen
podcast_episode

Today I am sharing some highlights for 2023 from the podcast, and also letting you all know I’ll be taking a break from the podcast for the rest of December, but I’ll be back with a new episode on January 9th, 2024. I’ve also got two links to share with you—details inside!

Transcript Greetings everyone - I’m taking a little break from Experiencing Data over December of 2023, but I’ll be back in January with more interviews and insights on leveraging UX design and product management to create indispensable data products, machine learning apps, and decision support tools. 

Experiencing Data turned five years old back in November, with over 130 episodes to date! I still can’t believe it’s been going that long and how far we’ve come.

Some highlights for me in 2023 included launching the Data Product Leadership Community, finding out that the show is now in the top 2% of all podcasts worldwide according to ListenNotes, and most of all, hearing from you that the podcast, and my writing, and the guests that I have brought on are having an impact on your work, your careers, and hopefully the lives of your customers, users, and stakeholders as well!

So, for now, I’ve got just two links for you:

If you’re wondering how to either:

support the show yourself with a really fast review on Apple Podcasts, record a quick audio question for me to answer on the show, or join my free Insights mailing list where I share my bi-weekly ideas and thoughts and 1-page episode summaries of every episode I put out here on Experiencing Data.

…just head over to designingforanalytics.com/podcast and you’ll get links to all those things there.

And secondly, if you need help increasing the customer adoption, delight, business value, or usability of your analytics and machine learning applications in 2024, I invite you to set up a free 1-on-1 discovery call with me.

You bring the questions, I’ll bring my ears, and by the end of the call, I’ll give you my best advice on how to move forward with your situation – whether it’s working with me or not. To schedule one of those free discovery calls, visit designingforanalytics.com/go

And finally, there will be some news coming out next year about the show, as well as my business, so I hope you’ll hop on the mailing list and stay tuned; that’s probably the best place to hear it first. And if you celebrate holidays in December and January, I hope they’re safe, enjoyable, and rejuvenating. Until 2024, stay tuned right here - and in the words of the great Arnold Schwarzenegger, I’ll be back.

132 - Leveraging Behavioral Science to Increase Data Product Adoption with Klara Lindner

2023-12-12 Listen
podcast_episode
Klara Lindner (diconium data), Brian T. O’Neill

In this conversation with Klara Lindner, Service Designer at diconium data, we explore how behavioral science and UX can be used to increase adoption of data products. Klara describes how she went from having a highly technical career as an electrical engineer and being the founder of a solar startup to her current role in service design for data products. Klara shares powerful insights into the value of user research and human-centered design, including one which stopped me in my tracks during this episode: how the people making data products and evangelizing data-driven decision making aren’t actually following their own advice when it comes to designing their data products. Klara and I also explore some easy user research techniques that data professionals can use, and discuss who should ultimately be responsible for user adoption of data products. Lastly, Klara gives us a peek at her upcoming December 19th, 2023 webinar with The Data Product Leadership Community (DPLC), where she will be going deeper on two frameworks from psychology and behavioral science that teams can use to increase adoption of data products. Klara is also a founding member of the DPLC and was one of—if not the very first—design/UX professionals to join.

Highlights/ Skip to:

I introduce Klara, and she explains the role of Service Design to our audience (00:49) Klara explains how she realized she’s been doing design work longer than she thought by reflecting on the company she founded, Mobisol (02:09) How Klara balances the desire to design great dashboards with the mission of helping end users (06:15) Klara describes the psychology behind user research and her upcoming talk on December 19th at The Data Product Leadership Community (08:32) What data product teams can do as a starting point to begin implementing user research principles (10:52) Klara gives a powerful example of the type of insight and value even basic user research can provide (12:49) Klara and I discuss a key revelation when it comes to designing data products for users, which is the irony that even developers use intuition as well as quantitative data when building (16:43) What adjustments Klara had to make in her thinking when moving from a highly technical background to doing human-centered design (21:08) Klara describes the two frameworks for driving adoption that she’ll be sharing in her talk at the DPLC on December 19th (24:23) An example of how understanding and addressing adoption blockers is important for product and design teams (30:44) How Klara has seen her teams adopt a new way of thinking about product & service design (32:55) Klara gives her take on the Jobs to be Done framework, which she will also be sharing in her talk at the DPLC on December 19th (35:26) Klara’s advice to teams that are looking to build products around generative AI (39:28) Where listeners can connect with Klara to learn more (41:37)

Links diconium data: http://www.diconium.com/ LinkedIn: https://www.linkedin.com/in/klaralindner/ Personal Website: https://magic-investigations.com/ Hear Klara speak on Dec 19, 2023 at 10am ET here: https://designingforanalytics.com/community/

131 - 15 Ways to Increase User Adoption of Data Products (Without Handcuffs, Threats and Mandates) with Brian T. O’Neill

2023-11-28 Listen
podcast_episode

This week I’m covering Part 1 of the 15 Ways to Increase User Adoption of Data Products, which is based on an article I wrote for subscribers of my mailing list. Throughout this episode, I describe why focusing on empathy, outcomes, and user experience leads to not only better data products, but also better business outcomes. The focus of this episode is to show you that it’s completely possible to take a human-centered approach to data product development without mandating behavioral changes, and to show how this approach benefits not just end users, but also the businesses and employees creating these data products. 

Highlights/ Skip to:

Design behavior change into the data product. (05:34) Establish a weekly habit of exposing technical and non-technical members of the data team directly to end users of solutions - no gatekeepers allowed. (08:12) Change funding models to fund problems, not specific solutions, so that your data product teams are invested in solving real problems. (13:30) Hold teams accountable for writing down and agreeing to the intended benefits and outcomes for both users and business stakeholders. Reject projects that have vague outcomes defined. (16:49) Approach the creation of data products as “user experiences” instead of a “thing” that is being built that has different quality attributes. (20:16) If the team is tasked with being “innovative,” leaders need to understand the innoficiency problem, shortened iterations, and the importance of generating a volume of ideas (bad and good) before committing to a final direction. (23:08) Co-design solutions with [not for!] end users in low, throw-away fidelity, refining success criteria for usability and utility as the solution evolves. Embrace the idea that research/design/build/test is not a linear process. (28:13) Test (validate) solutions with users early, before committing to releasing them, but with a pre-commitment to react to the insights you get back from the test. (31:50)

Links:

15 Ways to Increase Adoption of Data Products: https://designingforanalytics.com/resources/15-ways-to-increase-adoption-of-data-products-using-techniques-from-ux-design-product-management-and-beyond/ Company website: https://designingforanalytics.com Episode 54: https://designingforanalytics.com/resources/episodes/054-jared-spool-on-designing-innovative-ml-ai-and-analytics-user-experiences/ Episode 106: https://designingforanalytics.com/resources/episodes/106-ideaflow-applying-the-practice-of-design-and-innovation-to-internal-data-products-w-jeremy-utley/ Ideaflow: https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586/ Podcast website: https://designingforanalytics.com/podcast

129 - Why We Stopped, Deleted 18 Months of ML Work, and Shifted to a Data Product Mindset at Coolblue

2023-10-31 Listen
podcast_episode

Today I’m joined by Marnix van de Stolpe, Product Owner at Coolblue in the area of data science. Throughout our conversation, Marnix shares the story of how he joined a data science team developing a solution too focused on delivering a data science metric, one that was not on track to solve a clear customer problem. We discuss how Marnix came to the difficult decision to throw out 18 months of data science work, what it was like to switch to a human-centered, product approach, and the challenges that came with it. Marnix shares the impact this decision had on his team and the stakeholders involved, as well as the impact on his personal career and the advice he would give to others who find themselves in the same position. Marnix is also a Founding Member of the Data Product Leadership Community and will be going much more into the details and his experience live on Zoom on November 16 @ 2pm ET for members.

Highlights/ Skip to:

I introduce Marnix, Product Owner at Coolblue and one of the original members of the Data Product Leadership Community (00:35) Marnix describes what Coolblue does and his role there (01:20) Why and how Marnix decided to throw away 18 months of machine learning work (02:51) How Marnix determined that the KPI (metric) being created wasn’t enough to deliver a valuable product (07:56) Marnix describes the conversation with his data science team on mapping the solution back to the desired outcome (11:57) What the culture is like at Coolblue now when developing data products (17:17) Marnix’s advice for data product managers who are coming into an environment where existing work is not tied to a desired outcome (18:43) Marnix and I discuss why data literacy is not the solution to making more impactful data products (21:00) The impact that Marnix’s human-centered approach to data product development has had on the stakeholders at Coolblue (24:54) Marnix shares the ultimate outcome of the product his team was developing to measure product returns (31:05) How you can get in touch with Marnix (33:45)

Links Coolblue: https://www.coolblue.nl LinkedIn: https://www.linkedin.com/in/marnixvdstolpe/

125 - Human-Centered XAI: Moving from Algorithms to Explainable ML UX with Microsoft Researcher Vera Liao

2023-09-05 Listen
podcast_episode

Today I’m joined by Vera Liao, Principal Researcher at Microsoft. Vera is a part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group, and her research centers around the ethics, explainability, and interpretability of AI products. She is particularly focused on how designers design for explainability. Throughout our conversation, we focus on the importance of taking a human-centered approach to rendering model explainability within a UI, and why incorporating users during the design process informs the data science work and leads to better outcomes. Vera also shares some research on why example-based explanations tend to outperform [model] feature-based explanations, and why traditional XAI methods like LIME and SHAP aren’t the solution to every explainability problem a user may have.

Highlights/ Skip to:

I introduce Vera, who is Principal Researcher at Microsoft and whose research mainly focuses on the ethics, explainability, and interpretability of AI (00:35) Vera expands on her view that explainability should be at the core of ML applications (02:36) An example of the non-human approach to explainability that Vera is advocating against (05:35) Vera shares where practitioners can start the process of responsible AI (09:32) Why Vera advocates for doing qualitative research in tandem with model work in order to improve outcomes (13:51) I summarize the slides I saw in Vera’s deck on Human-Centered XAI and Vera expands on my understanding (16:06) Vera’s success criteria for explainability (19:45) The various applications of AI explainability that Vera has seen evolve over the years (21:52) Why Vera is a proponent of example-based explanations over model feature ones (26:15) Strategies Vera recommends for getting feedback from users to determine what the right explainability experience might be (32:07) The research trends Vera would most like to see technical practitioners apply to their work (36:47) Summary of the four-step process Vera outlines for Question-Driven XAI design (39:14)

Links “Human-Centered XAI: From Algorithms to User Experiences” Presentation “Human-Centered XAI: From Algorithms to User Experiences” Slide Deck  “Human-Centered AI Transparency in the Age of Large Language Models” MSR Microsoft Research Vera's Personal Website

124 - The PiCAA Framework: My Method to Generate ML/AI Use Cases from a UX Perspective

2023-08-22 Listen
podcast_episode

In this episode, I give an overview of my PiCAA Framework, which I shared at my keynote talk for Netguru’s annual conference, Burning Minds. This framework helps with brainstorming machine learning use cases, or reverse engineering them starting from the tactic. Throughout the episode, I give context on the preliminary types of work and preparation you and your team would want to do before implementing PiCAA, as well as the process and potential pitfalls you may run into, and the end results that make it a beneficial tool to experiment with.

Highlights/ Skip to:

Where/ how you might implement the PiCAA Framework (1:22) Focusing on the human part of your ideas (5:04) Keynote excerpt outlining the PiCAA Framework (7:28) Closing a PiCAA workshop by exploring what could go wrong (18:03)

Links Experiencing Data Episode 106 with Jeremy Utley The Data Product Leadership Community Ask me a question (below the recent episodes)

121 - How Sainsbury’s Head of Data Products for Analytics and ML Designs for User Adoption with Peter Everill

2023-07-11 Listen
podcast_episode
Brian T. O’Neill, Peter Everill (Sainsbury’s)

Today I’m chatting with Peter Everill, Head of Data Products for Analytics and ML at the UK grocery brand Sainsbury’s. Peter is also a founding member of the Data Product Leadership Community. Peter shares insights on why his team spends so much time conducting discovery work with users, and how that leads to higher adoption and, in turn, business value. Peter also gives us his in-depth definition of a data product, including the three components of a data product and the four types of data products he’s encountered. He also shares the 8-step product management methodology that his team uses to develop data products that truly deliver value to end users. Peter also shares the #1 resource he would invest in right now to make things better for his team and their work.

Highlights/ Skip to:

I introduce Peter, who I met through the Data Product Leadership Community (00:37) What the data team structure at Sainsbury’s looks like and how Peter wound up working there (01:54) Peter shares the 8-step product management methodology that has been developed by his team and where in that process he spends most of his time (04:54) How involved the users are in Peter’s process when it comes to developing data products (06:13) How Peter was able to ensure that enough time is taken on discovery throughout the design process (10:03) Who on Peter’s team is doing the core user research for product development (14:52) Peter shares the three things that he feels make data product teams successful (17:09) How Peter defines a data product, including the three components of a data product and the four types of data products (18:34) Peter and I discuss the importance of spending time in discovery (24:25) Peter explains why he measures reach and impact as metrics of success when looking at implementation (26:18) How Peter solves for the gap when handing off a product to the end users to implement and adopt (29:20) How Peter hires for data product management roles and what he looks for in a candidate (33:31) Peter talks about what roles or skills he’d be looking for if he was to add a new person to his team (37:26)

Quotes from Today’s Episode “I’m a big believer that the majority of analytics in its simplest form is improving business processes and decisions. A big part of our discovery work is that we align to business areas, business divisions, or business processes, and we spend time in that discovery space actually mapping the business process. What is the goal of this process? Ultimately, how does it support the P&L?” — Peter Everill (12:29)

“There’s three things that are successful for any organization that will make this work and make it stick. The first is defining what you mean by a data product. The second is the role of a data product manager in the organization and really being clear what it is that they do and what they don’t do. … And the third thing is their methodology, from discovery through to delivery. The more work you put upfront defining those and getting everyone trained and clear on that, I think the quicker you’ll get to an organization that’s really clear about what it’s delivering, how it delivers, and who does what.” – Peter Everill (17:31)

“The important way that data and analytics can help an organization firstly is, understanding how that organization is performing. And essentially, performance is how well processes and decisions within the organization are being executed, and the impact that has on the P&L.” – Peter Everill (20:24)

“The great majority of organizations don’t allocate that percentage [20-25%] of time to discovery; they are jumping straight into solution. And also, this is where organizations typically then actually just migrate what already exists from, maybe, legacy service into a shiny new cloud platform, which might be good from a defensive data strategy point of view, but doesn’t offer new net value—apart from speed, security and et cetera of the cloud. Ultimately, this is why analytics organizations aren’t generally delivering value to organizations.” – Peter Everill (25:37)

“The only time that value is delivered, is from a user taking action. So, the two metrics that we really focus on with all four data products [are] reach [and impact].” – Peter Everill (27:44)

“In terms of benefits realization, that is owned by the business unit. Because ultimately, you’re asking them to take the action. And if they do, it’s their part of the P&L that’s improving because they own the business, they own the performance. So, you really need to get them engaged on the release, and for them to have the superusers, the champions of the product, and be driving voice of the release just as much as the product team.” – Peter Everill (30:30)

On hiring DPMs: “Are [candidates] showing the aptitude, do they understand what the role is, rather than the experience? I think data and analytics and machine learning product management is a relatively new role. You can’t go on LinkedIn necessarily, and be exhausted with a number of candidates that have got years and years of data and analytics product management.” – Peter Everill (36:40)

Links LinkedIn: https://www.linkedin.com/in/petereverill/

120 - The Portfolio Mindset: Data Product Management and Design with Nadiem von Heydebrand (Part 2)

2023-06-27 Listen
podcast_episode

Today I’m continuing my conversation with Nadiem von Heydebrand, CEO of Mindfuel. In the conclusion of this special 2-part episode, Nadiem and I discuss the role of a Data Product Manager in depth. Nadiem reveals which fields data product managers are currently coming from, and how a new data product manager with a non-technical background can set themselves up for success in this new role. He also walks through his portfolio approach to data product management, and how to prioritize use cases when taking on a data product management role. Toward the end, Nadiem also shares personal examples of how he’s employed these strategies, why he feels it’s so important for engineers to be able to see and understand the impact of their work, and best practices around developing a data product team. 

Highlights / Skip to:

Brian introduces Nadiem and gives context for why the conversation with Nadiem led to a two-part episode (00:35) Nadiem summarizes his thoughts on data product management and adds context on which fields he sees data product managers currently coming from (01:46) Nadiem’s take on whether job listings for data product manager roles still have too many technical requirements (04:27) Why some non-technical people fail when they transition to a data product manager role and the ways Nadiem feels they can bolster their chances of success (07:09) Brian and Nadiem talk about their views on functional data product team models and the process for developing a data product as a team (10:11) When Nadiem feels it makes sense to hire a data product manager and adopt a portfolio view of your data products (16:22) Nadiem’s view on how to prioritize projects as a new data product manager (19:48) Nadiem shares a story of when he took on an interim role as a head of data and how he employed the portfolio strategies he recommends (24:54) How Nadiem evaluates perceived usability of a data product when picking use cases (27:28) Nadiem explains why understanding go-to-market strategy is so critical as a data product manager (30:00) Brian and Nadiem discuss the importance of today’s engineering teams understanding the value and impact of their work (32:09) How Nadiem and his team came up with the idea to develop a SaaS product for data product managers (34:40)

Quotes from Today’s Episode “So, data product management [...] is a combination of different capabilities [...]  [including] product management, design, data science, and machine learning. We covered this in viability, desirability, feasibility, and datability. So, these are four dimensions [that] you combine [...] together to become a data product manager.” — Nadiem von Heydebrand (02:34)

“There is no education for data product management today, there’s no university degree. ... So, there’s nobody out there—from my perspective—who really has all the four dimensions from day one. It’s more like an evolution: you’re coming from one of the [parallel business] domains or from one of the [parallel business] fields and then you extend your skill set over time.” — Nadiem von Heydebrand (03:04)

“If a product manager has very good communication skills and is able to break down the needs in a proper way or in a good understandable way to its tech lead, or its engineering lead or data science lead, then I think it works out super well. If this bridge is missing, then it becomes a little bit tricky because then the distance between the product manager and the development team is too far.” – Nadiem von Heydebrand (09:10)

“I think every data leader out there has an Excel spreadsheet or a list of prioritized use cases or the most relevant use cases for the business strategy… You can think about this list as a portfolio. You know, some of these use cases are super valuable; some of these use cases maybe will not work out, and you have to identify those which are bringing real return on investment when you put effort in there.” – Nadiem von Heydebrand (19:01)

“I’m not a magician for data product management. I just focused on a very strategic view on my portfolio and tried to identify those cases and those data products where I can believe I can easily develop them, I have a high degree of adoption with my lines of business, and I can truly measure the added revenue and the impact.” – Nadiem von Heydebrand (26:31)

“As a true data product manager, from my point of view, you are someone who is empathetic for the lines of businesses, to understand what their underlying needs and what the problems are. At the same time, you are a business person. You try to optimize the portfolio for your own needs, because you have business goals coming from your leadership team, from your head of data, or even from the person above, the CTO, CIO, even CEO. So, you want to make sure that your value contribution is always transparent, and visible, measurable, tangible.” – Nadiem von Heydebrand (29:20)

“If we look into classical product management, I mean, the product manager has to understand how to market and how to go to the market. And it’s this exactly the same situation with data product managers within your organization. You are as successful as your product performs in the market. This is how you measure yourself as a data product manager. This is how you define success for yourself.” – Nadiem von Heydebrand (30:58)

Links Mindfuel: https://mindfuel.ai/ LinkedIn: https://www.linkedin.com/in/nadiemvh/ Delight Software - the SAAS tool for data product managers to manage their portfolio of data products: https://delight.mindfuel.ai

119 - Skills vs. Roles: Data Product Management and Design with Nadiem von Heydebrand (Part 1)

2023-06-13 Listen
podcast_episode

The conversation with my next guest was going so deep and so well…it became a two-part episode! Today I’m chatting with Nadiem von Heydebrand, CEO of Mindfuel. Nadiem’s career journey led him from data science to data product management, and in this first part, we focus on the skills of data product management (DPM), including design. In part 2, we jump more into Nadiem’s take on the role of the DPM. Nadiem gives actionable insights into the realities of data product management, from the challenges of actually being able to talk to your end users, to focusing on the problems and unarticulated needs of your users rather than solutions. Nadiem and I also discuss how data product managers oversee a portfolio of initiatives, and why it’s important to view that portfolio as a series of investments. Nadiem also emphasizes the value of having designers on a data team, and why he hopes we see more designers in the industry.

Highlights/ Skip to:

Brian introduces Nadiem and his background going from data science to data product management (00:36) Nadiem gives not only his definition of a data product, but also his related definitions of ‘data as product,’ ‘data as information,’ and ‘data as a model’ products (02:19) Nadiem outlines the skill set and activities he finds most valuable in a data product manager (05:15) How a data organization typically functions and the challenges a data team faces to prove their value (11:20) Brian and Nadiem discuss the challenges and realities of being able to do discovery with the end users of data products (17:42) Nadiem outlines how a portfolio of data initiatives has a certain investment attached to it and why it’s important to generate a good result from those investments (21:30) Why Nadiem wants to see more designers in the data product space and the problems designers solve for data teams (25:37) Nadiem shares a story about a time when he wished he had a designer to convert the expressed needs of the business into the true need of the customer (30:10) The value of solving for the unarticulated needs of your product users, and Nadiem shares how focusing on problems rather than solutions helped him (32:32) Nadiem shares how you can connect with him and find out more about his company, Mindfuel (36:07)

Quotes from Today’s Episode “The product mindset already says it quite well. When you look into classical product management, you have something called the viability, the desirability, the feasibility—so these are three very classic dimensions of product management—and the fourth dimension, we at Mindfuel define for ourselves and for applications are, is the datability.” — Nadiem von Heydebrand (06:51)

“We can only prove our [data team’s] value if we unlock business opportunities in their [clients’] lines of businesses. So, our value contribution is indirect. And measuring indirect value contribution is very difficult in organizations.” — Nadiem von Heydebrand (11:57)

“Whenever we think about data and analytics, we put a lot of investment and efforts in the delivery piece. I saw a study once where it said 3% of investments go into discovery and 90% of investments go into delivery and the rest is operations and a little bit overhead and all around. So, we have to balance and we have to do proper discovery to understand what problem do we want to solve.” — Nadiem von Heydebrand (13:59)

“The best initiatives I delivered in my career, and also now within Mindfuel, are the ones where we try to build an end responsibility from the lines of businesses, among the product managers, to PO, the product owner, and then the delivery team.” – Nadiem von Heydebrand (17:00)

“As a consultant, I typically think in solutions. And when we founded Mindfuel, my co-founder forced me to avoid talking about the solution for an entire ten months. So, in whatever meeting we were sitting, I was not allowed to talk about the solution, but only about the problem space.”  – Nadiem von Heydebrand (34:12)

“In scaled organizations, data product managers, they typically run a portfolio of data products, and each single product can be seen a little bit like from an investment point of view, this is where we putting our money in, so that’s the reason why we also have to prioritize the right use cases or product initiatives because typically we have limited resources, either it is investment money, people, resources or our time.” – Nadiem von Heydebrand (24:02)

“Unfortunately, we don’t see enough designers in data organizations yet. So, I would love to have more design people around me in the data organizations, not only from a delivery perspective, having people building amazing dashboards, but also, like, truly helping me in this kind of discovery space.” – Nadiem von Heydebrand (26:28)

Links Mindfuel: https://mindfuel.ai/ Personal LinkedIn: https://www.linkedin.com/in/nadiemvh/ Mindfuel LinkedIn: https://www.linkedin.com/company/mindfuelai/

117 - Phil Harvey, Co-Author of “Data: A Guide to Humans,” on the Non-Technical Skills Needed to Produce Valuable AI Solutions

2023-05-16 Listen
podcast_episode

Today I’m chatting with Phil Harvey, co-author of Data: A Guide to Humans and a technology professional with 23 years of experience working with AI and startups. In his book, Phil describes his philosophy of how empathy leads to more successful outcomes in data product development and the journey he took to arrive at this perspective. But what does empathy mean, and how do you measure its success? Phil and I dig into those questions, and Phil explains why he feels cognitive empathy is a learnable skill that one can develop and apply. Phil describes some leading indicators that empathy is needed on a data team, as well as leading indicators that a more empathetic approach to product development is working. While I use the term “design” or “UX” to describe a lot of what Phil is talking about, Phil actually has some strong opinions about UX and shares those on this episode. Phil also reveals why he decided to write Data: A Guide to Humans and some of the experiences that helped shape the book’s philosophy.

Highlights/ Skip to:

Phil introduces himself and explains how he landed on the name for his book (00:54) How Phil met his co-author, Noelia Jimenez Martinez, and the reason they started writing Data: A Guide to Humans (02:31) Phil unpacks how he defines empathy, why it leads to success on AI projects, and what success means to him (03:54) Phil walks through a couple scenarios where empathy for users and stakeholders was lacking and the impacts it had (07:53) The work Phil has done internally to get comfortable doing the non-technical work required to make ML/AI/data products successful (13:45) Phil describes some indicators that data teams can look for to know their design strategy is working (17:10) How Phil sees the methodology in his book relating to the world of UX (user experience) design (21:49) Phil walks through what an abstract concept like “empathy” means to him in his work and how it can be learned and applied as a practical skill (29:00)

Quotes from Today’s Episode “If you take success in itself, this is about achieving your intended outcomes. And if you do that with empathy, your outcomes will be aligned to the needs of the people the outcomes are for. Your outcomes will be accepted by stakeholders because they’ll understand them.” — Phil Harvey (05:05)

“Where there’s people not discussing and not considering the needs and feelings of others, you start to get this breakdown, data quality issues, all that.” – Phil Harvey (11:10)

“I wanted to write code; I didn’t want to deal with people. And you feel when you can do technical things, whether it’s machine-learning or these things, you end up with the ‘I’ve got a hammer and now everything looks like a nail problem.’ But you also have the [attitude] that my programming will solve everything.” – Phil Harvey (14:48)

“This is what startup-land really taught me—you can’t do everything. It’s very easy to think that you can and then burn yourself out. You need a team of people.” – Phil Harvey (15:09)

“Let’s listen to the users. Let’s bring that perspective in as opposed to thinking about aligning the two perspectives. Because any product is a change. You don’t ride a horse then jump in a car and expect the car to work like the horse.” – Phil Harvey (22:41)

“Let’s say you’re a leader in this space. … Listen out carefully for who’s complaining about who’s not listening to them. That’s a first early signal that there’s work to be done from an empathy perspective.” – Phil Harvey (25:00)

“The perspective of the book that Noelia and I have written is that empathy—and cognitive empathy particularly—is also a learnable skill. There are concrete and real things you can practice and do to improve in those skills.” – Phil Harvey (29:09)

Links Data: A Guide to Humans: https://www.amazon.com/Data-A-Guide-to-Humans/dp/1783528648 Twitter: https://twitter.com/codebeard LinkedIn: https://www.linkedin.com/in/philipdavidharvey/ Mastodon: https://mastodonapp.uk/@codebeard

114 - Designing Anti-Biasing and Explainability Tools for Data Scientists Creating ML Models with Josh Noble

2023-04-04 Listen
podcast_episode

Today I’m chatting with Josh Noble, Principal User Researcher at TruEra. TruEra works to improve AI quality by developing products that help data scientists and machine learning engineers combat issues like bias and improve the explainability of their AI/ML models. Throughout our conversation, Josh—who also used to work as a Design Lead at IDEO.org—explains the unique challenges and importance of doing design and user research, even for technical users such as data scientists. He also shares tangible insights on what informs his product design strategy, the importance of measuring product success accurately, and the value of understanding the current state of a solution when trying to improve it.

Highlights/ Skip to:

Josh introduces himself and explains why it’s important to do design and user research work for technical tools used by data scientists (00:43) The work that TruEra does to mitigate bias in AI as well as their broader focus on AI quality management (05:10) Josh describes how user roles informed the design of TruEra’s upcoming monitoring product, and the emphasis he places on iterating with users (10:24) How Josh approaches striking a balance between displaying extraneous information in the tools he designs and removing explainability (14:28) Josh explains how TruEra measures product success now and how they envision that changing in the future (17:59) The difference Josh sees between explainability and interpretability (26:56) How Josh decided to go from being a designer to getting a data science degree (31:08) Josh gives his take on what skills are most valuable as a designer and how to develop them (36:12)

Quotes from Today’s Episode “We want to make machine learning better by testing it, helping people analyze it, helping people monitor models. Bias and fairness is an important part of that, as is accuracy, as is explainability, and as is more broadly AI quality.” — Josh Noble (05:13)

“These two groups, the data scientists and the machine-learning engineer, they think quite differently about the problems that they need to solve. And they have very different toolsets. … Looking at how we can think about making a product and building tools that make sense to both of those different groups is a really important part of user experience.” – Josh Noble (09:04)

“I’m a big advocate for iterating with users. To the degree possible, get things in front of people so they can tell you whether it works for them or not, whether it fits their expectations or not.” – Josh Noble (12:15)

“Our goal is to get people to think about AI quality differently, not to necessarily change. We don’t want to change their performance metrics. We don’t want to make them change how they calculate something or change a workflow that works for them. We just want to get them to a place where they can bring together our four pillars and build better models and build better AI.” – Josh Noble (17:38)

“I’ve always wanted to know what was going on underneath the design. I think it’s an important part of designing anything to understand how the thing that you are making is actually built.” – Josh Noble (31:56)

“There’s an empathy-building exercise that comes from using these tools and understanding where they come from. I do understand the argument that some designers make. If you want to find a better way to do something, spending a ton of time in the trenches of the current way that it’s done is not always the solution, right?” – Josh Noble (36:12)

“There’s a real empathy that you build and understanding that you build from seeing how your designs are actually implemented that makes you a better teammate. It makes you a better collaborator and ultimately, I think, makes you a better designer because of that.” – Josh Noble (36:46)

“I would say to the non-designers who work with designers, measuring designs is not invalidating the designer. It doesn’t invalidate the craft of design. It shouldn’t be something that designers are hesitant to do. I think it’s really important to understand in a qualitative way what your design is doing and understand in a quantitative way what your design is doing.” – Josh Noble (38:18)

Links TruEra: https://truera.com/ Medium: https://medium.com/@fctry2