Brian T. O’Neill – host, Kate O’Neill – author and speaker

In this episode, I sat down with tech humanist Kate O’Neill to explore how organizations can stay human-centered at a time when everyone is racing to find ways to leverage AI in their businesses. Kate introduced her “Now–Next Continuum,” a framework that distinguishes digital transformation (catching up) from true innovation (looking ahead). We dug into the real-world challenges and tensions of moving fast vs. creating impact with AI, how ethics fits into decision making, and the role of data in making informed decisions.

Kate stressed the importance of organizations having clear purpose statements and values from the outset, described the proxy metrics she uses to gauge human-friendliness, and recommended applying a “harms of action vs. harms of inaction” lens to ethical decisions. Her key point: human-centered approaches to AI and technology creation aren’t slow; they create intentional structures that speed up smart choices while avoiding costly missteps.

Highlights/ Skip to:

How Kate approaches discussions with executives about moving fast, but also moving in a human-centered way, when building out AI solutions (1:03)
Exploring the lack of technical backgrounds among many CEOs and how this shapes the way organizations make big decisions around technical solutions (3:58)
FOMO and the “Solution in Search of a Problem” problem in data (5:18)
Why ongoing ethnographic research and direct exposure to users are essential for true innovation (11:21)
Balancing organizational purpose and human-centered tech decisions, and why a defined purpose must precede these decisions (18:09)
How organizations can define, measure, operationalize, and act on ethical considerations in AI and data products (35:57)
Risk management vs. strategic optimism: balancing risk reduction with embracing the art of the possible when building AI solutions (43:54)

Quotes from Today’s Episode

“I think the ethics and the governance and all those kinds of discussions [about the implications of digital transformation] are all very big-word, kind of jargon-y discussions that are easy to think aren’t important, but what they all tend to come down to is that alignment between what the business is trying to do and what the person on the other side of the business is trying to do.” – Kate O’Neill

" I've often heard the term digital transformation used almost interchangeably with the term innovation. And I think that that's a grave disservice that we do to those two concepts because they're very different. Digital transformation, to me, seems as if it sits much more comfortably on the earlier side of the Now-Next Continuum. So, it's about moving the past to the present… Innovation is about standing in the present and looking to the future and thinking about the art of the possible, like you said. What could we do? What could we extract from this unstructured data (this mess of stuff that’s something new and different) that could actually move us into green space, into territory that no one’s doing yet? And those are two very different sets of questions. And in most organizations, they need to be happening simultaneously." –Kate O’Neill

"The reason I chose human-friendly [as a term] over human-centered partly because I wanted to be very honest about the goal and not fall back into, you know, jargony kinds of language that, you know, you and I and the folks listening probably all understand in a certain way, but the CEOs and the folks that I'm necessarily trying to get reading this book and make their decisions in a different way based on it." –Kate O’Neill

“We love coming up with new names for different things. Like whether something is “cloud,” or whether it’s like, you know, “SaaS,” or all these different terms that we’ve come up with over the years… After spending so long working in tech, it is kind of fun to laugh at it. But it’s nice that there’s a real earnestness [to it]. That’s sort of evergreen [laugh]. People are always trying to genuinely solve human problems, which is what I try to tap into these days, with the work that I do, is really trying to help businesses—business leaders, mostly, but a lot of those are non-tech leaders, and I think that’s where this really sticks is that you get a lot of people who have ascended into CEO or other C-suite roles who don’t come from a technology background.” 

– Kate O’Neill

"My feeling is that if you're not regularly doing ethnographic research and having a lot of exposure time directly to customers, you’re doomed. The people—the makers—have to be exposed to the users and stakeholders.  There has to be ongoing work in this space; it can't just be about defining project requirements and then disappearing. However, I don't see a lot of data teams and AI teams that have non-technical research going on where they're regularly spending time with end users or customers such that they could even imagine what the art of the possible could be.”

– Brian T. O’Neill

Links

KO Insights: https://www.koinsights.com/
LinkedIn for Kate O’Neill: https://www.linkedin.com/in/kateoneill/
Kate O’Neill’s book: What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast

AI/ML Cloud Computing SaaS
Mark Ramsey – guest @ Ramsey International, Brian O’Neill – Podcast host @ Designing for Analytics

“Last week was a great year in GenAI,” jokes Mark Ramsey—and it’s a great philosophy to have as LLM tools in particular continue to evolve at such a rapid rate. This week, you’ll get to hear my fun and insightful chat with Mark from Ramsey International about the world of large language models (LLMs) and how we make useful UXs out of them in the enterprise.

Mark shared some fascinating insights about using a company’s website information (data) as a place to pilot a LLM project, avoiding privacy landmines, and how re-ranking of models leads to better LLM response accuracy. We also talked about the importance of real human testing to ensure LLM chatbots and AI tools truly delight users. From amusing anecdotes about the spinning beach ball on macOS to envisioning a future where AI-driven chat interfaces outshine traditional BI tools, this episode is packed with forward-looking ideas and a touch of humor.

Highlights/ Skip to:

(0:50) Why is the world of GenAI evolving so fast?
(4:20) How Mark thinks about UX in an LLM application
(8:11) How Mark defines “specialized GenAI”
(12:42) Mark’s consulting work with GenAI / LLMs these days
(17:29) How GenAI can help the healthcare industry
(30:23) Uncovering users’ true feelings about LLM applications
(35:02) Are UIs moving backwards as models progress forward?
(40:53) How will GenAI impact data and analytics teams?
(44:51) Will LLMs be able to consistently leverage RAG and produce proper SQL?
(51:04) Where you can find more from Mark and Ramsey International

Quotes from Today’s Episode

“With [GenAI], we have a solution that we’ve built to try to help organizations, and build workflows. We have a workflow that we can run and ask the same question [to a variety of GenAI models] and see how similar the answers are. Depending on the complexity of the question, you can see a lot of variability between the models… [and] we can also run the same question against the different versions of the model and see how it’s improved. Folks want a human-like experience interacting with these models… [and] if the model can start responding in just a few seconds, that gives you much more of a conversational type of experience.” - Mark Ramsey (2:38)

“[People] don’t understand when you interact [with GenAI tools] and it brings tokens back in that streaming fashion, you’re actually seeing inside the brain of the model. Every token it produces is then displayed on the screen, and it gives you that typewriter experience [from] back in the day. If someone has to wait, and all you’re seeing is a logo spinning, from a UX experience standpoint… people feel like the model is much faster if it just starts to produce those results in that streaming fashion. I think in a design, it’s extremely important to take advantage of that [...] as opposed to waiting to the end and delivering the results. Some models support that, and other models don’t.” - Mark Ramsey (4:35)

“All of the data that’s on the website is public information. We’ve done work with several organizations on quickly taking the data that’s on their website, packaging it up into a vector database, and making that be the source for questions that their customers can ask. [Organizations] publish a lot of information on their websites, but people really struggle to get to it. We’ve seen a lot of interest in vectorizing website data, making it available, and having a chat interface for the customer. The customer can ask questions, and it will take them directly to the answer, and then they can use the website as the source information.” - Mark Ramsey (14:04)

“I’m not skeptical at all. I’ve changed much of my [AI chatbot searches] to Perplexity, and I think it’s doing a pretty fantastic job overall in terms of quality. It’s returning an answer with citations, so you have a sense of where it’s sourcing the information from. I think it’s important from a user experience perspective. This is a replacement for broken search, as I really don’t want to read all the web pages and PDFs you have that might be about my chiropractic care query to answer my actual [healthcare] question.” - Brian O’Neill (19:22)
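Mark’s streaming point at (4:35) is easy to see in code. Below is a minimal sketch of the pattern he describes, rendering tokens as they arrive instead of blocking on the full response; it assumes the OpenAI Python SDK purely as an example provider, and the model name and prompt are placeholders rather than anything specified in the episode.

```python
# Sketch: stream tokens to the user as they arrive, instead of
# blocking on the full completion (the "typewriter" experience).
# Assumes the OpenAI Python SDK (openai>=1.0) with OPENAI_API_KEY
# set in the environment; any provider that supports streaming
# follows the same basic shape.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize our returns policy."}],
    stream=True,  # request incremental chunks rather than one final payload
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry only role/metadata, no text
        print(delta, end="", flush=True)  # paint each token immediately
```

The design point is the flushing print loop: if the first tokens appear within a second or two, users perceive the system as responsive even when the complete answer takes far longer to finish.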

“We’ve all had great experiences with customer service, and we’ve all had situations where the customer service was quite poor, and we’re going to have that same thing as we begin to [release more] chatbots. We need to make sure we try to alleviate having those bad experiences, and have an exit. If someone is running into a situation where they’d rather talk to a live person, have that ability to route them to someone else. That’s why the robustness of the model is extremely important in the implementation… and right now, organizations like OpenAI and Anthropic are significantly better at that [human-like] experience.” - Mark Ramsey (23:46)

“There’s two aspects of these models: the training aspect and then using the model to answer questions. I recommend organizations always augment their content and not just use the training data. You’ll still get that human-like experience that’s built into the model, but you’ll eliminate the hallucinations. If you have a model that has been set up correctly, you shouldn’t have to ask questions in a funky way to get answers.” - Mark Ramsey (39:11)

“People need to understand GenAI is not a predictive algorithm. It is not able to run predictions, it struggles with some math, so that is not the focus for these models. What’s interesting is that you can use the model as a step to get you [the answers]. A lot of the models now support functions… when you ask a question about something that is in a database, it actually uses its knowledge about the schema of the database. It can build the query, run the query to get the data back, and then once it has the data, it can reformat the data into something that is a good response back.” - Mark Ramsey (42:02)
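Mark’s description at (42:02) of a model that builds a query, runs it, and reformats the result maps onto a simple tool-calling loop. Here is a minimal sketch under stated assumptions: SQLite stands in for the real database, and generate_sql() and summarize_rows() are hypothetical placeholders for the model calls, since the episode doesn’t name a specific provider or API.

```python
# Sketch of the "model builds the query, app runs it, model reformats
# the result" loop described in the episode. SQLite stands in for a
# real warehouse; generate_sql() and summarize_rows() are hypothetical
# placeholders for function-calling requests to whatever model you use.
import sqlite3


def fetch_schema(conn: sqlite3.Connection) -> str:
    """Collect CREATE TABLE statements so the model can see the schema."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(r[0] for r in rows if r[0])


def generate_sql(schema: str, question: str) -> str:
    """Placeholder: a function-calling request that returns one SELECT."""
    raise NotImplementedError("wire this to your model provider")


def summarize_rows(question: str, rows: list) -> str:
    """Placeholder: a second model call that turns raw rows into prose."""
    raise NotImplementedError("wire this to your model provider")


def answer(conn: sqlite3.Connection, question: str) -> str:
    sql = generate_sql(fetch_schema(conn), question)  # model writes the query
    rows = conn.execute(sql).fetchall()               # app executes it
    return summarize_rows(question, rows)             # model formats the answer
```

A production version would also validate that the generated statement is a read-only SELECT before executing it.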

Links

Mark on LinkedIn
Ramsey International
Email: mark [at] ramsey.international
Ramsey International’s YouTube Channel

AI/ML Analytics BI GenAI LLM RAG SQL Data Streaming Vector DB

Ready for more ideas about UX for AI and LLM applications in enterprise environments? In part 2 of my topic on UX considerations for LLMs, I explore how an LLM might be used for a fictitious use case at an insurance company—specifically, to help internal tools teams get rapid access to primary qualitative user research. (Yes, it’s a little “meta,” and I’m also trying to nudge you with this hypothetical example—no secret!) ;-) My goal with these episodes is to share questions you might want to ask yourself so that any use of an LLM is actually contributing to a positive UX outcome.

Join me as I cover the implications for design, the importance of foundational data quality, the balance between creative inspiration and factual accuracy, and the never-ending discussion of how we might handle hallucinations and errors posing as “facts”—all with a UX angle. At the end, I also share a personal story where I used an LLM to help me do some shopping for my favorite product: TRIP INSURANCE! (NOT!)

Highlights/ Skip to:

(1:05) I introduce a hypothetical internal LLM tool and what the goal of the tool is for the team who would use it
(5:31) Improving access to primary research findings for better UX
(10:19) What “quality data” means in a UX context
(12:18) When LLM accuracy maybe doesn’t matter as much
(14:03) How AI and LLMs are opening the door for fresh visioning work
(15:38) My overall take on LLMs inside enterprise software as of right now
(18:56) Final thoughts on UX design for LLMs, particularly in the enterprise
(20:25) My inspiration for these two episodes—and how I had to use ChatGPT to help me complete a purchase on a website that could have integrated this capability right into the site

Quotes from Today’s Episode

“If we accept that the goal of most product and user experience research is to accelerate the production of quality services, products, and experiences, the question is whether or not using an LLM for these types of questions is moving the needle in that direction at all. And secondly, are the potential downsides, like hallucinations and occasional fabricated findings, worth it? So, this is a design for AI problem.” - Brian T. O’Neill (8:09)

“What’s in our data? Can the right people change it when the LLM is wrong? The data product managers and AI leaders reading this or listening know that the not-so-secret path to the best AI is in the foundational data that the models are trained on. But what does the word quality mean from a product standpoint and a risk-reduction one, as seen from an end-user’s perspective? Somebody who’s trying to get work done? This is a different type of quality measurement.” - Brian T. O’Neill (10:40)

“When we think about fact retrieval use cases in particular, how easily can product teams—internal or otherwise—and end-users understand the confidence of responses? When responses are wrong, how easily, if at all, can users and product teams update the model’s responses? Errors in large language models may be a significant design consideration when we design probabilistic solutions, and we no longer control what exactly our products and software are going to show to users. If bad UX can include leading people down the wrong path unknowingly, then AI is kind of like the team on the other side of the tug of war that we’re playing.” - Brian T. O’Neill (11:22)

“As somebody who writes a lot for my consulting business, and composes music in another, one of the hardest parts for creators can be the zero-to-one problem of getting started—the blank page—and this is a place where I think LLMs have great potential. But it also means we need to do the proper research to understand our audience, and when or where they’re doing truly generative or creative work—such that we can take a generative UX to the next level that goes beyond delivering banal and obviously derivative content.” - Brian T. O’Neill (13:31)

“One thing I actually like about the hype, investment, and excitement around GenAI and LLMs in the enterprise is that there is an opportunity for organizations here to do some fresh visioning work. And this is a place that designers and user experience professionals can help data teams as we bring design into the AI space.” - Brian T. O’Neill (14:04)

“If there was ever a time to do some new visioning work, I think now is one of those times. However, we need highly skilled design leaders to help facilitate this in order for this to be effective. Part of that skill is knowing who to include in exercises like this, and in my perspective, one of those people, for sure, should be somebody who understands the data science side as well, not just the engineering perspective. And as I posited in the seminar that I teach, the AI and analytical data product teams probably need a fourth member. It’s a quartet and not a trio. And that quartet includes a data expert, as well as that engineering lead.” - Brian T. O’Neill (14:38)

Links

Perplexity.ai: https://perplexity.ai
Ideaflow: https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586
My article that inspired this episode

AI/ML Data Quality Data Science GenAI LLM

Let’s talk about design for AI (which more and more, I’m agreeing means GenAI to those outside the data space). The hype around GenAI and LLMs—particularly as it relates to dropping these in as features into a software application or product—seems to me, at this time, to largely be driven by FOMO rather than real value. In this “part 1” episode, I look at the importance of solid user experience design and outcome-oriented thinking when deploying LLMs into enterprise products. Challenges with immature AI UIs, the role of context, the constant game of understanding what accuracy means (and how much this matters), and the potential impact on human workers are also examined. Through a hypothetical scenario, I illustrate the complexities of using LLMs in practical applications, stressing the need for careful consideration of benchmarks and the acceptance of GenAI's risks. 

I also want to note that LLMs are a very immature space in terms of UI/UX design—even if the foundation models continue to mature at a rapid pace. As such, this episode is more about the questions and mindset I would be considering when integrating LLMs into enterprise software more than a suggestion of “best practices.” 

Highlights/ Skip to:

(1:15) Currently, many LLM feature initiatives seem to be mostly driven by FOMO
(2:45) UX considerations for LLM-enhanced enterprise applications
(5:14) Challenges with LLM UIs / user interfaces
(7:24) Measuring improvement in UX outcomes with LLMs
(10:36) Accuracy in LLMs and its relevance in enterprise software
(11:28) Illustrating key considerations for implementing an LLM-based feature
(19:00) Leadership and context in AI deployment
(19:27) Determining UX benchmarks for using LLMs
(20:14) The dynamic nature of LLM hallucinations and how we design for the unknown
(21:16) Closing thoughts on Part 1 of designing for AI and LLMs

Quotes from Today’s Episode

“While many product teams continue to race to deploy some sort of GenAI and especially LLMs into their products—particularly this is in the tech sector for commercial software companies—the general sense I’m getting is that this is still more about FOMO than anything else.” - Brian T. O’Neill (2:07)

“No matter what the technology is, a good user experience design foundation starts with not doing any harm, and hopefully going beyond usable to be delightful. And adding LLM capabilities into a solution is really no different. So, we still need to have outcome-oriented thinking on both our product and design teams when deploying LLM capabilities into a solution. This is a cornerstone of good product work.” - Brian T. O’Neill (3:03)

“So, challenges with LLM UIs and UXs, right, user interfaces and experiences: the most obvious challenge to me right now with large language model interfaces is that while we’ve given users tremendous flexibility in the form of a Google search-like interface, we’ve also in many cases limited the UX of these interactions to a text conversation with a machine. We’re back to the CLI in some ways.” - Brian T. O’Neill (5:14)

“Before and after we insert an LLM into a user’s workflow, we need to know what an improvement in their life or work actually means.” - Brian T. O’Neill (7:24)

“If it would take the machine a few seconds to process a result versus what might take a day for a worker, what’s the role and purpose of that worker going forward? I think these are all considerations that need to be made, particularly if you’re concerned about adoption, which a lot of data product leaders are.” - Brian T. O’Neill (10:17)

“So, there’s no right or wrong answer here. These are all range questions, and they’re leadership questions, and context really matters. They are important to ask, particularly when we have this risk of reacting to incorrect information that looks plausible and believable because of how these LLMs tend to respond to us with a positive sheen much of the time.” - Brian T. O’Neill (19:00)

Links

View Part 1 of my article on UI/UX design considerations for LLMs in enterprise applications: https://designingforanalytics.com/resources/ui-ux-design-for-enterprise-llms-use-cases-and-considerations-for-data-and-product-leaders-in-2024-part-1/

AI/ML GenAI LLM Plausible

Wait, I’m talking to a head of data management at a tech company? Why!? Well, today I’m joined by Malcolm Hawker to get his perspective on data products and what he’s seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? Malcolm was a head of product in prior roles, and for several years, I’ve enjoyed his musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We had a chance to meet at CDOIQ in 2023 as well, and he went on my “need to do an episode” list!

According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they’re seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn from approaching problem-solving with a product orientation.

Highlights/ Skip to:

Malcolm’s definition of a data product (2:10)
Understanding your customers’ needs is the first step toward quantifying the benefits of your data product (6:34)
How product makers can gain access to users to build more successful products (11:36)
Answering the UX question to get past the adoption stage and provide business value (16:03)
Data experts must develop business expertise if they want to be seen as equals by potential customers (20:07)
What people really mean by “data culture” (23:02)
Malcolm’s data product journey and his changing perspective (32:05)
Using empathy to provide a better UX in design and data (39:24)
Avoiding the death of data science by becoming more product-driven (46:23)
Where the majority of data professionals currently land on their view of product management for data products (48:15)

Quotes from Today’s Episode

“My definition of a data product is something that is built by a data and analytics team that solves a specific customer problem that the customer would otherwise be willing to pay for. That’s it.” - Malcolm Hawker (3:42)

“You need to observe how your customer uses data to make better decisions, optimize a business process, or to mitigate business risk. You need to know how your customers operate at a very, very intimate level, arguably, as well as they know how their business processes operate.” - Malcolm Hawker (7:36)

“So, be a problem solver. Be collaborative. Be somebody who is eager to help make your customers’ lives easier. You hear ‘no’ when people think that you’re a burden. You start to hear more ‘yeses’ when people think that you are actually invested in helping make their lives easier.” - Malcolm Hawker (12:42)

“We [data professionals] put data on a pedestal. We develop this mindset that the data matters more—as much or maybe even more than the business processes, and that is not true. We would not exist if it were not for the business. Hard stop.” - Malcolm Hawker (17:07)

“I hate to say it, I think a lot of this data stuff should kind of feel invisible in that way, too. It’s like this invisible ally: you’re not thinking about the dashboard; you just access the information as part of your natural workflow when you need insights on making a decision, or a status check that you’re on track with whatever your goal was. You’re not really going out of mode.” - Brian O’Neill (24:59)

“But you know, data people are basically librarians. We want to put things into classifications that are logical and work forwards and backwards, right? And in the product world, sometimes they just don’t, where you can have something be a product and be a material to a subsequent product.” - Malcolm Hawker (37:57)

“So, the broader point here is just more of a mindset shift. And you know, maybe these things aren’t necessarily a bad thing, but how do we become a little more product- and customer-driven so that we avoid situations where everybody thinks what we’re doing is a time waster?” - Malcolm Hawker (48:00)

Links

Profisee: https://profisee.com/
LinkedIn: https://www.linkedin.com/in/malhawker/
CDO Matters: https://profisee.com/cdo-matters-live-with-malcolm-hawker/

AI/ML Analytics Dashboard Data Management Data Science
Shashank Garg – Co-Founder and CEO @ Infocepts, Brian O’Neill – Podcast host @ Designing for Analytics

Welcome to another curated, Promoted Episode of Experiencing Data! 

In episode 144, Shashank Garg, Co-Founder and CEO of Infocepts, joins me to explore whether all this discussion of data products out on the web actually has substance and is worth the perceived extra effort. Do we always need to take a product approach for ML and analytics initiatives? Shashank dives into how Infocepts approaches the creation of data solutions that are designed to be actionable within specific business workflows—and as I often do, I started out by asking Shashank how he and Infocepts define the term “data product.” We discuss a few real-world applications Infocepts has built, the measurable impact of these data products, and some of the challenges they’ve faced that your team might face as well. Skill sets also came up: who does design? Who takes ownership of the product/value side? And of course, we touch a bit on GenAI.

Highlights/ Skip to

Shashank gives his definition of data products (01:24)
We tackle the challenges of user adoption in data products (04:29)
We discuss the crucial role of integrating actionable insights into data products for enhanced decision-making (05:47)
Shashank shares insights on the evolution of data products from concept to practical integration (10:35)
We explore the challenges and strategies in designing user-centric data products (12:30)
I ask Shashank about typical environments and challenges when starting new data product consultations (15:57)
Shashank explains how Infocepts incorporates AI into their data solutions (18:55)
We discuss the importance of understanding user personas and engaging with actual users (25:06)
Shashank describes the roles involved in data product development’s ideation and brainstorming stages (32:20)
The issue of proxy users not truly representing end-users in data product design is examined (35:47)
We consider how organizations are adopting a product-oriented approach to their data strategies (39:48)
Shashank and I delve into the implications of GenAI and other AI technologies on product orientation and user adoption (43:47)
Closing thoughts (51:00)

Quotes from Today’s Episode

“Data products, at least to us at Infocepts, refers to a way of thinking about and organizing your data in a way so that it drives consumption, and most importantly, actions.” - Shashank Garg (1:44)

“The way I see it is [that] the role of a DPM (data product manager)—whether they have the title or not—is benefits creation. You need to be responsible for benefits, not for outputs. The outputs have to create benefits or it doesn’t count. Game over.” - Brian O’Neill (10:07)

“We talk about bridging the gap between the worlds of business and analytics... There’s a huge gap between the perception of users and the tech leaders who are producing it.” - Shashank Garg (17:37)

“IT leaders often limit their roles to provisioning their secure data, and then they rely on businesses to be able to generate insights and take actions. Sometimes this handoff works, and sometimes it doesn’t because of quality governance.” - Shashank Garg (23:02)

“Data is the kind of field where people can react very, very quickly to what’s wrong.” - Shashank Garg (29:44)

“It’s much easier to get to a good prototype if we know what the inputs to a prototype are, which include data about the people who are going to use the solution, their usage scenarios, use cases, attitudes, beliefs… all these kinds of things.” - Brian O’Neill (31:49)

“For data, you need a separate person, and then for designing, you need a separate person, and for analysis, you need a separate person—the more you can combine… I don’t think you can create super-humans who can do all three, four disciplines, but at least two disciplines and can appreciate the third one; that makes it easier.” - Shashank Garg (39:20)

“When we think of AI, we’re all talking about multiple different delivery methods here. I think AI is starting to become GenAI to a lot of non-data people. It’s like their—everything is GenAI.” - Brian O’Neill (43:48)

Links

Infocepts website: https://www.infocepts.ai/
Shashank Garg on LinkedIn: https://www.linkedin.com/in/shashankgarg/
Top 5 Data & AI initiatives for business success: https://www.infocepts.ai/downloads/top-5-data-and-ai-initiatives-to-drive-business-growth-in-2024-beyond/

AI/ML Analytics GenAI React
Brian O’Neill – Podcast host @ Designing for Analytics

Welcome back! In today’s solo episode, I share the top five struggles that enterprise SaaS leaders have in the analytics/insight/decision-support space that most frequently lead them to think they have a UI/UX design problem that has to be addressed. A lot of today’s episode will talk about “slow creep”: unaddressed design problems that gradually build up over time and begin to impact both UX and your revenue negatively. I will also share 20 UI and UX design problems I often see (even if clients do not!) that, when left unaddressed, may create sales friction, adoption problems, churn, or unhappy end users. If you work at a software company or are directly monetizing an ML or analytical data product, this episode is for you!

Highlights/ Skip to 

I discuss how specific UI/UX design problems can significantly impact business performance (02:51)
I discuss five common reasons why enterprise software leaders typically reach out for help (04:39)
The 20 common symptoms I’ve observed in client engagements that indicate the need for professional UI/UX intervention or training (13:22)
The dangers of adding too many features or customization options and how they can overwhelm users (16:00)
The issues of integrating AI into user interfaces and UXs without proper design thinking (30:08)
I encourage listeners to apply the insights shared to improve their data products (48:02)

Quotes from Today’s Episode

“One of the problems with bad design is that some of it we can see and some of it we can’t — unless you know what you’re looking for.” - Brian O’Neill (02:23)

“Design is usually not top of mind for an enterprise software product, especially one in the machine learning and analytics space. However, if you have human users, even enterprise ones, their tolerance for bad software is much lower today than in the past.” - Brian O’Neill (13:04)

“Early on when you’re trying to get product-market fit, you can’t be everything for everyone. You need to be an A+ experience for the person you’re trying to satisfy.” - Brian O’Neill (15:39)

“Often when I see customization, it is mostly used as a crutch for not making real product strategy and design decisions.” - Brian O’Neill (16:04)

“Customization of data and dashboard products may be more of a tax than a benefit. In the marketing copy, customization sounds like a benefit... until you actually go in and try to do it. It puts the mental effort to design a good solution on the user.” - Brian O’Neill (16:26)

“We need to think strategically when implementing GenAI, or just AI in general, into the product UX because it won’t automatically help drive sales or increase business value.” - Brian O’Neill (20:50)

“A lot of times our analytics and machine learning tools… are insight decision support products. They’re supposed to be rooted in facts and data, but when it comes to designing these products, there’s not a whole lot of data and facts that are actually informing the product design choices.” - Brian O’Neill (30:37)

“If your IP is that special, but also complex, it needs the proper UI/UX design treatment so that the value can be surfaced in such a way someone is willing to pay for it, if not also find it indispensable and delightful.” - Brian O’Neill (45:02)

Links

The (5) big reasons AI/ML and analytics product leaders invest in UI/UX design help: https://designingforanalytics.com/resources/the-5-big-reasons-ai-ml-and-analytics-product-leaders-invest-in-ui-ux-design-help/
Subscribe for free insights on designing useful, high-value enterprise ML and analytical data products: https://designingforanalytics.com/list
Access my free frameworks, guides, and additional reading for SaaS leaders on designing high-value ML and analytical data products: https://designingforanalytics.com/resources
Need help getting your product’s design/UX on track—so you can see more sales, less churn, and higher user adoption? Schedule a free 60-minute Discovery Call with me and I’ll give you my read on your situation and my recommendations to get ahead: https://designingforanalytics.com/services/

AI/ML Analytics Dashboard GenAI Marketing SaaS
Brian T. O’Neill – host, Chris Hill – CEO @ Humblepod

Welcome to a special edition of Experiencing Data. This episode is the audio capture from a live Crowdcast video webinar I gave on April 26th, 2024, where I conducted a mini UI/UX design audit of a new podcast analytics service that Chris Hill, CEO of Humblepod, is working on to help podcast hosts grow their shows. Humblepod is also the team behind the scenes of Experiencing Data, and Chris had asked me to take a look at his new “Listener Lifecycle” tool to see if we could find ways to improve the UX and visualizations in the tool, how we might productize this MVP in the future, and how improving the tool’s design might help Chris show his prospective podcast clients how their listener data could help them grow their listenership and “true fans.”

On a personal note, it was fun to talk to Chris on the show given that we speak every week: Humblepod has been my trusted resource for audio mixing, transcription, and show-note summarizing for probably over 100 of the most recent episodes of Experiencing Data. It was also fun to do a “live recording” with an audience—and we did answer questions in the full video version. (If you missed the invite, join my Insights mailing list to get notified of future free webinars.)

To watch the full audio and video recording on Crowdcast, free, head over to: https://www.crowdcast.io/c/podcast-analytics-ui-ux-design

Highlights/ Skip to:

Chris talks about using data to improve podcasts and his approach to podcast numbers (03:06)
Chris introduces the Listener Lifecycle model which informed the dashboard design (08:17)
Chris and I discuss the importance of labeling and terminology in analytics UIs (11:00)
We discuss designing for practical use of analytics dashboards to provide actionable insights (17:05)
We discuss the challenges podcast hosts face in understanding and utilizing data effectively and how design might help (21:44)
I discuss how my CED UX framework for advanced analytics applications helps to facilitate actionable insights (24:37)
I highlight the importance of presenting data effectively and in a way that centers on user needs (28:50)
I express challenges users may have with podcast rankings and the reliability of data sources (34:24)
Chris and I discuss tailoring data reports to meet the specific needs of clients (37:14)

Quotes from Today’s Episode

“The irony for me as someone who has a podcast about machine learning and analytics and design is that I basically never look at my analytics.” - Brian O’Neill (01:14)

“The problem that I have found in podcasting is that the number that everybody uses to gauge whether a podcast is good or not is the download number… But there’s a lot of other factors in a podcast that can tell you how successful it’s going to be… where you can pull levers to… grow your show, or engage more with an audience.” - Chris Hill (03:20)

“I have a framework for user experience design for analytics called CED, which stands for Conclusions, Evidence, Data… The basic idea is really simple: lead your analytics service with conclusions.” - Brian O’Neill (24:37)

“Where the eyes glaze over is when tools are mostly evidence generators, and we just give everybody the evidence, but there’s no actual analysis about how [this is] helping me improve my life or my business. It’s just evidence. I need someone to put that together.” - Brian O’Neill (25:23)

“Sometimes the data doesn’t provide enough of a conclusion about what to do… This is where your opinion starts to matter.” - Brian O’Neill (26:07)

“It sounds like a benefit, but drilling down for most people into analytics stuff is usually a tax unless you’re an analyst.” - Brian O’Neill (27:39)

“Where’s the source of this data, and who decided what these numbers are? Because so much of this stuff… is not shared. As someone who’s in this space, it’s not even that it’s confusing. It’s more like, you’ve got to distill this down for me.” - Brian O’Neill (34:57)

“Your clients are probably going to glaze over at this level of data because it’s not helping them make any decision about what to change.” - Brian O’Neill (37:53)

Links

Watch the original Crowdcast video recording of this episode
Brian’s CED UX Framework for Advanced Analytics Solutions
Join Brian’s Insights mailing list

AI/ML Analytics Dashboard
Duncan Milne – Director, Data Investment & Product Management @ Royal Bank of Canada (RBC), Brian T. O’Neill – host

In this week’s episode of Experiencing Data, I’m joined by Duncan Milne, Director of Data Investment & Product Management at the Royal Bank of Canada (RBC). Today, Duncan (who is also a member of the DPLC) gives a preview of his upcoming webinar on April 24, 2024, entitled “Is that Data Product Worth Building? Estimating Economic Value…Before You Build It!” Duncan shares his experience implementing a product mindset within RBC’s Chief Data Office, and he explains some of the challenges, successes, and insights gained along the way. He emphasizes the critical role of understanding user needs and evaluating the economic impact of data products—before they are built. Duncan was gracious enough to let us peek inside and see a transformation that is currently in progress, and I’m excited to check out his webinar this month!

Highlights/ Skip to:

I introduce Duncan Milne from RBC (00:00)
Duncan outlines the Chief Data Office’s function at RBC (01:01)
We discuss data products and how they are used to improve business processes (04:05)
The genesis behind RBC’s move towards a product-centric approach in handling data, highlighting initial challenges and strategies for fostering a product mindset (07:26)
Duncan discusses developing a framework to guide the lifecycle of data products at RBC (09:29)
Duncan addresses initial resistance and adaptation strategies for engaging teams in a new product-centric methodology (12:04)
The scaling challenges of applying a product mindset across a large organization like RBC (22:02)
Insights into the framework for evaluating and prioritizing data product ideas based on their desirability, usability, feasibility, and viability (26:30)
Measuring success and value in data product management (30:45)
Duncan explores process mapping challenges in banking (34:13)
Duncan shares how RBC created specialized training for data product management (36:39)
Duncan offers advice and closing thoughts on data product management (41:38)

Quotes from Today’s Episode

“We think about data products as anything that solves a problem using data... it’s helping someone do something they already do or want to do faster and better using data.” - Duncan Milne (04:29)

“The transition to data product management involves overcoming initial resistance by demonstrating the tangible value of this approach.” - Duncan Milne (08:38)

“You have to want to show up and do this kind of work [adopting a product mindset in data product management]… even if you do a product the right way, it doesn’t always work, right? The thing you make may not be desirable, it may not be as usable as it needs to be. It can be technically right and still fail. It’s not a guarantee, it’s just a better way of working.” - Brian T. O’Neill (15:03)

“[Product management]... it’s like baking versus cooking. Baking is a science... cooking is much more flexible. It’s about... did we produce a benefit for users? Did we produce an economic benefit? ... It’s a multivariate problem... a lot of it is experimentation and figuring out what works.” - Brian T. O’Neill (23:03)

“The easy thing to measure [in product management] is did you follow the process or not? That is not the point of product management at all. It’s about delivering benefits to the stakeholders and to the customer.” - Brian O’Neill (25:16)

“Data product is not something that is set in stone... You can leverage learnings from a more traditional product approach, but don’t be afraid to improvise.” - Duncan Milne (41:38)

“Data products are fundamentally different from digital products, so even the traditional approach to product management in that space doesn’t necessarily work within the data products construct.” - Duncan Milne (41:55)

“There is no textbook for data product management; the field is still being developed… don’t be afraid to create your own answer if what exists out there doesn’t necessarily work within your context.” - Duncan Milne (42:17)

Links

Duncan’s LinkedIn: https://www.linkedin.com/in/duncanwmilne/?originalSubdomain=ca

Thabata Romanowski – data visualization and information design consultant (and former data analyst) @ Data Rocks NZ, Brian T. O’Neill – host

This week on Experiencing Data, I chat with a new kindred spirit! Recently, I connected with Thabata Romanowski—better known as "T from Data Rocks NZ"—to discuss her experience applying UX design principles to modern analytical data products and dashboards. T walks us through her years as a data analyst in the mining sector, sharing how those experiences laid the foundation for her transition to data visualization. Now, she specializes in transforming complex, industry-specific data sets into intuitive, user-friendly visual representations, and addresses the challenges faced by the analytics teams she supports through her design business. T and I tackle common misconceptions about design in the analytics field, discuss how we communicate and educate non-designers on applying UX design principles to their dashboard and application design work, and address the problem with "pretty charts." We also explore some of the core ideas in T's Design Manifesto, including principles like being purposeful, context-sensitive, collaborative, and humanistic—all aimed at increasing user adoption and business value by improving UX.

Highlights/ Skip to:

I welcome T from Data Rocks NZ onto the show (00:00)
T’s transition from mining to leading an information design and data visualization consultancy (01:43)
T discusses the critical role of clear communication in data design solutions (03:39)
We address the misconceptions around the role of design in data analytics (06:54)
T explains the importance of journey mapping in understanding users’ needs (15:25)
We discuss the challenges of accurately capturing end-user needs (19:00)
T and I discuss the importance of talking directly to end-users when developing data products (25:56)
T shares her “I like, I wish, I wonder” method for eliciting genuine user feedback (33:03)
T discusses her Data Design Manifesto for creating purposeful, context-aware, collaborative, and human-centered design principles in data (36:37)
We wrap up the conversation and share ways to connect with T (40:49)

Quotes from Today’s Episode

“It’s not so much that people… don’t know what design is, it’s more that they understand it differently from what it can actually do...” - T from Data Rocks NZ (06:59)

“I think [the misconception about design in technology] is rooted mainly in the fact that data has been very tied to IT teams, to technology teams, and they’re not always up to what design actually does.” - T from Data Rocks NZ (07:42)

“If you strip design of function, it becomes art. So, it’s not art… it’s about being functional and being useful in helping people.” - T from Data Rocks NZ (09:06)

"It’s not that people don’t know, really, that the word design exists, or that design applies to analytics and whatnot; it’s more that they have this misunderstanding that it’s about making things look a certain way, when in fact... It’s about function. It’s about helping people do stuff better." - T from Data Rocks NZ (09:19) “Journey Mapping means that you have to talk to people...  Data is an inherently human thing. It is something that we create ourselves. So, it’s biased from the start. You can’t fully remove the human from the data" - T from Data Rocks NZ (15:36)  “The biggest part of your data product success…happens outside of your technology and outside of your actual analysis. It’s defining who your audience is, what the context of this audience is, and to which purpose do they need that product. - T from Data Rocks NZ (19:08) “[In UX research], a tight, empowered product team needs regular exposure to end customers; there’s nothing that can replace that." - Brian O'Neill (25:58)

“You have two sides [end-users and the data team] that are frustrated with the same thing. The side who asked wasn’t really sure what to ask. And then the data team gets frustrated because the users don’t know what they want… Nobody really understood what the problem is. There’s a lot of assumptions happening there. And this is one of the hardest things to let go.” - T from Data Rocks NZ (29:38)

“No piece of data product exists in isolation, so understanding what people do with it… is really important.” - T from Data Rocks NZ (38:51)

Links

Design Matters Newsletter: https://buttondown.email/datarocksnz
Website: https://www.datarocks.co.nz/
LinkedIn: https://www.linkedin.com/company/datarocksnz/
BlueSky: https://bsky.app/profile/datarocksnz.bsky.social
Mastodon: https://me.dm/@datarocksnz

Analytics Dashboard Data Analytics DataViz
Zalak Trivedi – Product Lead for embedded analytics and reporting @ Sigma Computing, Brian O’Neill – Podcast host @ Designing for Analytics

This week on Experiencing Data, something new, as promised at the beginning of the year. Today, I’m exploring the world of embedded analytics with Zalak Trivedi from Sigma Computing—and this is also the first approved Promoted Episode on the podcast. In today’s episode, Zalak shares his journey as the product lead for Sigma’s embedded analytics and reporting solution, which seeks to accelerate and simplify the deployment of decision-support dashboards to SaaS companies’ customers. Right there, we have the first challenge that Zalak was willing to dig into with me: designing a platform UX when we have multiple stakeholder and user types. In Sigma’s case, this means Sigma’s buyers, the developers who work at these SaaS companies to integrate Sigma into their products, and then the actual customers of these SaaS companies who will be the final end users of the resulting dashboards. We also discuss the challenges of creating products that serve both beginners and experts, and how AI is being used in the BI industry.

Highlights/ Skip to:

I introduce Zalak Trivedi from Sigma Computing onto the show (03:15)
Zalak shares his journey leading the vision for embedded analytics at Sigma and explains what Sigma looks like when implemented into a customer’s SaaS product (03:54)
Zalak and I discuss the challenge of integrating Sigma’s analytics into various companies’ software, since they need to account for a variety of stakeholders (09:53)
We explore Sigma’s team approach to user experience with product management, design, and technical writing (15:14)
Zalak reveals how Sigma leverages telemetry to understand and improve user interactions with their products (19:54)
Zalak outlines why Sigma is a faster and more supportive alternative to building your own analytics (27:21)
We cover data monetization, specifically looking at how SaaS companies can monetize analytics and insights (32:05)
Zalak highlights how Sigma is integrating AI into their BI solution (36:15)
Zalak shares his customers’ current pain points and interests (40:25)
We wrap up with final thoughts and ways to connect with Zalak and learn more about Sigma (49:41)

Quotes from Today’s Episode "Something I’m really excited about personally that we are working on is [moving] beyond analytics to help customers build entire data applications within Sigma. This is something we are really excited about as a company, and marching towards [achieving] this year." - Zalak Trivedi (04:04)

“The whole point of an embedded analytics application is that it should look and feel exactly like the application it’s embedded in, and the workflow should be seamless.” - Zalak Trivedi (09:29) 

“We [at Sigma] had to switch the way that we were thinking about personas. It was not just about the analysts or the data teams; it was more about how do we give the right tools to the [SaaS] product managers and developers to embed Sigma into their product.” - Zalak Trivedi (11:30)

“You can’t not have a design, and you can’t not have a user experience. There’s always an experience with every tool, solution, product that we use, whether it emerged organically as a byproduct, or it was intentionally created through knowledge data... it was intentional.” - Brian O’Neill (14:52)

“If we find that [in] certain user experiences, people are tripping up, and they’re not able to complete an entire workflow, we flag that, and then we work with the product managers, or [with] our customers essentially, and figure out how we can actually simplify these experiences.” - Zalak Trivedi (20:54)

“We were able to convince many small to medium businesses and startups to sign up with Sigma. The success they experienced after embedding Sigma was tremendous. Many of our customers managed to monetize their existing data within weeks, or at most, a couple of months, with lean development teams of two to three developers and a few business-side personnel, generating seven-figure income streams from that.” - Zalak Trivedi (32:05)

“At Sigma, our stance is, let’s not just add AI for the sake of adding AI. Let’s really identify [where] in the entire user journey does the intelligence really lie, and where are the different friction points, and let’s enhance those experiences.” - Zalak Trivedi (37:38)

“Every time [we at Sigma Computing] think about a new feature or functionality, we have to ensure it works for both the first-degree persona and the second-degree persona, and consider how it will be viewed by these different personas, because that is not the primary persona for which the foundation of the product was built.” - Zalak Trivedi (48:08)

Links Sigma Computing: https://sigmacomputing.com

Email: [email protected] 

LinkedIn: https://www.linkedin.com/in/trivedizalak/

Sigma Computing Embedded: https://sigmacomputing.com/embedded

About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted

AI/ML Analytics BI SaaS

This week I’m covering Part 1 of the 15 Ways to Increase User Adoption of Data Products, which is based on an article I wrote for subscribers of my mailing list. Throughout this episode, I describe why focusing on empathy, outcomes, and user experience leads to not only better data products, but also better business outcomes. The focus of this episode is to show you that it’s completely possible to take a human-centered approach to data product development without mandating behavioral changes, and to show how this approach benefits not just end users, but also the businesses and employees creating these data products. 

Highlights/ Skip to:

Design behavior change into the data product (05:34)
Establish a weekly habit of exposing technical and non-technical members of the data team directly to end users of solutions - no gatekeepers allowed (08:12)
Change funding models to fund problems, not specific solutions, so that your data product teams are invested in solving real problems (13:30)
Hold teams accountable for writing down and agreeing to the intended benefits and outcomes for both users and business stakeholders; reject projects that have vague outcomes defined (16:49)
Approach the creation of data products as “user experiences” instead of a “thing” that is being built that has different quality attributes (20:16)
If the team is tasked with being “innovative,” leaders need to understand the innoficiency problem, shortened iterations, and the importance of generating a volume of ideas (bad and good) before committing to a final direction (23:08)
Co-design solutions with [not for!] end users in low, throw-away fidelity, refining success criteria for usability and utility as the solution evolves; embrace the idea that research/design/build/test is not a linear process (28:13)
Test (validate) solutions with users early, before committing to releasing them, but with a pre-commitment to react to the insights you get back from the test (31:50)

Links:

15 Ways to Increase Adoption of Data Products: https://designingforanalytics.com/resources/15-ways-to-increase-adoption-of-data-products-using-techniques-from-ux-design-product-management-and-beyond/
Company website: https://designingforanalytics.com
Episode 54: https://designingforanalytics.com/resources/episodes/054-jared-spool-on-designing-innovative-ml-ai-and-analytics-user-experiences/
Episode 106: https://designingforanalytics.com/resources/episodes/106-ideaflow-applying-the-practice-of-design-and-innovation-to-internal-data-products-w-jeremy-utley/
Ideaflow: https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586/
Podcast website: https://designingforanalytics.com/podcast

AI/ML Analytics React

Today I’m wrapping up my observations from the CDOIQ Symposium and sharing what’s new in the world of data. I was only able to attend a handful of sessions, but they were primarily ones tied to the topic of data products, which, of course, brings us to “What’s a data product?” During this episode, I cover some of what I’ve been hearing about the definition of this word, and I also share my revised v2 definition. I also walk through some of the questions that CDOs and fellow attendees were asking at the sessions I went to and a few reactions to those questions. Finally, I announce an exciting development on the launch of the Data Product Leadership Community.

Highlights/ Skip to:

Brian introduces the topic for this episode, including his wrap-up of the CDOIQ Symposium (00:29)
The general impressions Brian heard at the Symposium, including a focus on people & culture and an emphasis on data products (01:51)
The three main areas the definition of a data product covers, according to Brian’s observations (04:43)
Brian describes how companies are looking for successful data product development models to follow and explores where new Data Product Managers are coming from (07:17)
A methodology that Brian feels leads to a successful data product team (10:14)
How Brian feels digital-native folks see the world of data products differently (11:29)
The topic of Data Mesh and Human-Centered Design and how it came up in two presentations at the CDOIQ Symposium (13:24)
The rarity of design and UX being talked about at data conferences, and why Brian feels that is the case (15:24)
Brian’s current definition of a data product and how it’s evolved from his v1 definition (18:43)
Brian lists the main questions that were being asked at CDOIQ sessions he attended around data products (22:19)
Where to find answers to many of the questions being asked about data products, and an update on the Data Product Leadership Community that he will launch in August 2023 (24:28)

Quotes from Today’s Episode

“I think generally what’s happening is the technology continues to evolve, I think it generally continues to get easier, and all of the people and cultural parts and the change management and all of that, that problem just persists no matter what. And so, I guess the question is, what are we going to do about it?” — Brian T. O’Neill (03:11)

“The feeling I got from the questions [at the CDOIQ Symposium], … and particularly the ones that were talking about the role of data product management and the value of these things was, it’s like they’re looking for a recipe to follow.” — Brian T. O’Neill (07:17)

“My guess is people are just kind of reading up about it, self-training a bit, and trying to learn how to do product on their own. I think that’s how you learn how to do stuff: largely through trial and error. You can read books, you can do all that stuff, but beginning to do it is part of it.” — Brian T. O’Neill (08:57)

“I think the most important thing is that data is a raw ingredient here; it’s a foundation piece for the solution that we’re going to make that’s so good, someone might pay to use it or trade something of value to use it. And as long as that’s intact, I think you’re kind of checking the box as to whether it’s a data product.” — Brian T. O’Neill (12:13)

“I also would say on the data mesh topic, the feeling I got from people who had been to this conference before was that [it] was quite a hyped thing the last couple of years. Now, it was not talked about as much, but I think now they’re actually seeing some examples of this working.” — Brian T. O’Neill (16:25)

“My current v2 definition right now is, ‘A data product is a managed, end-to-end software solution that organizes, refines, or transforms data to solve a problem that’s so important customers would pay for it or exchange something of value to use it.’” — Brian T. O’Neill (19:47)

“We know [the product is] of value because someone was willing to pay for it or exchange their time or switch from their old way of doing things to the new way because it has that inherent benefit baked in. That’s really the most important part here that I think any data product manager should fully be aligned with.” — Brian T. O’Neill (21:35)

Links

Episode 67
Episode 110
The Definition of Data Product
The Data Product Leadership Community
Ask me a question (below the recent episodes)

Today I’m answering a question submitted to the show by listener Will Angel, who asks how he can prioritize and scale effective discovery throughout the data product development process. In this episode, I explain why discovery is work that should take place throughout the lifecycle of a project rather than only in a defined period at the start. I also emphasize that the main goal is understanding the benefit users will get from the product, and I share ways to streamline the effectiveness of the discovery process.

Highlights/ Skip to:

Brian introduces today’s topic, discovery with data products, with a listener question (00:28)
Why Brian sees discovery work as something that is ongoing throughout the lifecycle of a project (01:53)
How to avoid getting killed by the process overhead of discovery and prioritization (03:38)
Brian’s take on the question, “What are the ultimate business and user benefits that the beneficiaries hope to get from the product?” (06:02)
The value Brian sees in stating anti-goals and anti-personas (07:47)
How creative work is valuable despite the discomfort of not being execution-oriented (09:35)
Why customer and stakeholder research activities need to be ongoing efforts (11:20)
The two modes of design that Brian uses and their distinct purposes (15:09)
Why a clear strategy is critical to proper prioritization (19:36)
Why doing a few things really well usually beats delivering a bunch of features and products that don’t get used (23:24)
Why saying “no” can be a gift when used correctly (27:18)
How to join the Data Product Leadership Community for more dialog like this, and how to submit your own questions to the show (32:25)

Quotes from Today’s Episode

“Discovery work, to me, is something that largely happens up front at the beginning of a project, but it doesn’t end at the beginning of the project or product initiative, or whatever it is that you’re working on. Instead, I think discovery is a continual thing that’s going on all the time.” — Brian T. O’Neill (01:57)

“As tooling gets easier and easier and we need to stand up less infrastructure and basic pipelining in order to get from nothing to something, I think more of the work simply does become the discovery part of the work. And that is always going to feel somewhat inefficient because by definition it is.” — Brian T. O’Neill (04:48)

“Measuring [project management metrics] does not tell us whether or not the product is going to be valuable. It just tells us how fast are we writing the code and doing execution against something that may or may not actually have any value to the business at all.” — Brian T. O’Neill (07:33)

“How would you measure an improvement in the beneficiaries’ lives? Because if you can improve their life in some way—and this often means [their life] at work—the business value is likely to follow there.” — Brian T. O’Neill (18:42)

“Without a clear strategy, you’re not going to be able to do prioritization work efficiently because you don’t know what success looks like.” — Brian T. O’Neill (19:49)

“Doing a few things really well probably beats delivering a lot of stuff that doesn’t get used. There’s little point in a portfolio of data products that is really wide, but it’s very shallow in terms of value.” — Brian T. O’Neill (23:27)

“Anytime you’re going to be changing behavior or major workflows, the non-technical costs and work increase. And we have to figure out, ‘How are we going to market this and evangelize it and make people see the value of it?’ These types of behavior changes are really hard to implement and they need to be figured out during the design of the solution — not afterwards.” — Brian T. O’Neill (26:25)

Links

designingforanalytics.com/podcast: https://designingforanalytics.com/podcast
designingforanalytics.com/community: https://designingforanalytics.com/community

Brian T. O’Neill – host, Michelle Carney – UX Researcher @ Google

Michelle Carney began her career in the worlds of neuroscience and machine learning, where she worked on the original Python Notebooks. As she fine-tuned ML models and started to notice discrepancies in the human experience of using those models, her interest turned toward UX. Michelle discusses how her work today as a UX researcher at Google impacts her work with teams leveraging ML in their applications. She explains how her interest in the crossover of ML and UX led her to start MLUX, a collection of meet-up events where professionals from both data science and design can connect and share methods and ideas. MLUX now hosts meet-ups in several locations as well as virtually.

Our conversation begins with Michelle’s explanation of how she teaches data scientists to integrate UX into the development of their products. Michelle uses the IDEO Design Kit with her students at the Stanford School of Design (d.school), and in her course, Designing Machine Learning, she covers some of the unlearning that data scientists need to do to approach their work from a UX perspective.

Finally, we discuss what UX designers need to know about designing for ML/AI. Michelle talks about how model interpretability is a facet of UX design and why model accuracy isn’t always the most important element of an ML application. She ends the conversation with an emphasis on the need for more interdisciplinary voices in the fields of ML and AI.

Skip to a topic here:

Michelle talks about what drove her career shift from machine learning and neuroscience to user experience (1:15)
Michelle explains what MLUX is (4:40)
How to get ML teams on board with the importance of user experience (6:54)
Michelle discusses the “unlearning” data scientists might have to do as they reconsider ML from a UX perspective (9:15)
Brian and Michelle talk about the importance of considering the UX from the beginning of model development (10:45)
Michelle expounds on different ways to measure the effectiveness of user experience (15:10)
Brian and Michelle talk about what is driving the increase in the need for designers on ML teams (19:59)
Michelle explains the role of design around model interpretability and explainability (24:44)

Quotes from Today’s Episode

“The first step to business value is the hurdle of adoption. A user has to be willing to try—and care—before you will ever get to business value.” - Brian O’Neill (13:01)

“There’s so much talk about business value and there’s very little talk about adoption. I think providing value to the end-user is the gateway to getting any business value. If you’re building anything that has a human in the loop that’s not fully automated, you can’t get to business value if you don’t get through the first gate of adoption.” - Brian O’Neill (13:17)

“I think that designers who are able to design for ambiguity are going to be the ones that tackle a lot of this AI and ML stuff.” - Michelle Carney (19:43)

“That’s something that we have to think about with our ML models. We’re coming into this user’s life where there’s a lot of other things going on and our model is not their top priority, so we should design it so that it fits into their ecosystem.” - Michelle Carney (3:27)

“If we aren’t thinking about privacy and ethics and explainability and usability from the beginning, then it’s not going to be embedded into our products. If we just treat usability of our ML models as a checkbox, then it just plays the role of a compliance function.” - Michelle Carney (11:52)

“I don’t think you need to know ML or machine learning in order to design for ML and machine learning. You don’t need to understand how to build a model, you need to understand what the model does. You need to understand what the inputs and the outputs are.” - Michelle Carney (18:45)

Links

Twitter @mluxmeetup: https://twitter.com/mluxmeetup
MLUX LinkedIn: https://www.linkedin.com/company/mlux/
MLUX YouTube channel: https://bit.ly/mluxyoutube
Twitter @michelleRcarney: https://twitter.com/michelleRcarney
IDEO Design Kit: https://tinyurl.com/2p984znh

AI/ML Data Science Python
Brian T. O’Neill – host, Jonathan Kay – CEO and Co-Founder @ Apptopia

Jonathan Kay, CEO and Co-Founder of Apptopia, frames his company’s work as building a SaaS business focused on a research tool more than on a data product. Jonathan and I worked together when Apptopia pivoted from its prior business into a mobile intelligence platform for brands. Part of the reason I wanted to have Jonathan talk to you all is that I knew he would strip away all the easy-to-see shine and varnish from their success and get really candid about what worked…and what hasn’t…during their journey to turn a data product into a successful SaaS business. So get ready: Jonathan is going to reveal the very curvy line that Apptopia has taken to get where they are today.

In this episode, Jonathan also describes one of the core product design frameworks that Apptopia is currently using to help deliver actionable insights to their customers. For Jonathan, Apptopia’s research-centric approach changes the ways in which their customers can interact with data and is helping eliminate the lull between “the why” and “the actioning” with data.

Here are some of the key parts of the interview:

An introduction to Apptopia and how they serve brands in the world of mobile app data (00:36)
The current UX gaps that Apptopia is working to fill (03:32)
How Apptopia balances flexibility with ease-of-use (06:22)
How Apptopia establishes the boundaries of its product when it’s just one part of a user’s overall workflow (10:06)
The challenge of “low use, low trust” and getting “non-data” people to act (13:45)
Developing strong conclusions and opinions and presenting them to customers (18:10)
How Apptopia’s product design process has evolved when working with data, particularly at the UI level (21:30)
The relationship between Apptopia’s buyers and the users of the product, and how they balance the two (24:45)
Jonathan’s advice for hiring good data product design and management staff (29:45)
How data fits into Jonathan’s own decision making as CEO of Apptopia (33:21)
Jonathan’s advice for emerging data product leaders (36:30)

Quotes from Today’s Episode  

“I want to just give you some props on the work that you guys have done and seeing where it's gone from when we worked together. The word grit, I think, is the word that I most associate with you and Eli [former CEO, co-founder] from those times. It felt very genuine that you believed in your mission and you had a long-term vision for it.” - Brian T. O’Neill (@rhythmspice) (02:08)

“A research tool gives you the ability to create an input, which might be, ‘I want to see how Netflix is performing.’ And then it gives you a bunch of data. And it gives you good user experience that allows you to look for the answer to the question that’s in your head, but you need to start with a question. You need to know how to manipulate the tool. It requires a huge amount of experience and understanding of the data consumer in order to actually get the answer to the question. For me, that feels like a miss because I think the amount of people who need and can benefit from data, and the amount of people who know how to instrument the tools to get the answers from the data—well, I think there’s a huge disconnect in those numbers. And just like when I take my car to get service, I expected the car mechanic knows exactly what the hell is going on in there, right? Like, our obligation as a data provider should be to help people get closer to the answer. And I think we still have some room to go in order to get there.” - Jonathan Kay (@JonathanCKay) (04:54)

“You need to present someone the what, the why, etc.—then the research component [of your data product] is valuable. And so it’s not that having a research tool isn’t valuable. It’s just, you can’t have the whole thing be that. You need to give them the what and the why first.” - Jonathan Kay (@JonathanCKay) (08:45)

“You can't put equal resources into everything. Knowing the boundaries of your data product is important, but it's a hard thing to know sometimes where to draw those. A leader has to ask, ‘am I getting outside of my sweet spot? Is this outside of the mission?’ Figuring [out] the right boundaries goes back to customer research.” - Brian T. O’Neill (@rhythmspice) (12:54)

“What would I have done differently if I was starting Apptopia today? I would have invested into the quality of the data earlier. I let the product design move me into the clouds a little bit, because sometimes you're designing a product and you're designing visuals, but we were doing it without real data. One of the biggest things that I've learned over a lot of mistakes over a long period of time, is that we've got to incorporate real data in the design process.” - Jonathan Kay (@JonathanCKay) (20:09)

“We work with one of the biggest food manufacturer distributors in the world, and they were choosing between us and our biggest competitor, and what they essentially did was [say], ‘I need to put this report together every two weeks. I used your competitor’s platform during a trial and your platform during the trial, and I was able to do it two hours faster in your platform, so I chose you—because all the other checkboxes were equal.’ However, at the end of the day, if we could get two hours a week back by using your tool, saving time and saving money and making better decisions, they’re all equal ROI contributors.” - Jonathan Kay on UX (@JonathanCKay) (27:23)

“In terms of our product design and management hires, we're typically looking for people who have not worked at one company for 10 years. We've actually found a couple phenomenal designers that went from running their own consulting company to wanting to join full time. That was kind of a big win because one of them had a huge breadth of experience working with a bunch of different products in a bunch of different spaces.”- Jonathan Kay (@JonathanCKay) (30:34)

“In terms of how I use data when making decisions for Apptopia, here’s an example. If you break our business down into different personas, my understanding at one time was that one of our personas was more stagnant. The data, however, did not support that. And so we're having a resource planning meeting, and I'm saying, ‘let's pull back resources a little bit,’ but [my team is] showing me data that says my assumption on that customer segment is actually incorrect. I think entrepreneurs and passionate people need data more because we have so much conviction in our decisions—and because of that, I'm more likely to make bad decisions. Theoretically, good entrepreneurs should have good instincts, and you need to trust those, but what I’m saying is, you also need to check those. It's okay to make sure that your instinct is correct, right? And one of the ways that I’ve gotten more mature is by forcing people to show me data to back up my decision in either direction and being comfortable being wrong. And I am wrong at least half of the time with those things!” - Jonathan Kay (@JonathanCKay) (34:09)

Analytics SaaS