talk-data.com
Activities & events
| Title & Speakers | Event |
|---|---|
|
183 - Part II: Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change
2025-11-27 · 02:00
Brian T. O’Neill
– host
In this second part of my three-part series (catch Part I via episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a "many-purpose" AI and analytics solution that "does more" but is misaligned with how users' most important work actually gets done. To unpack this, I explain the concept of user experience (UX) outcomes, and how building your solution to enable these outcomes may be a dependency for you to get sales traction, and for your customer to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users' current workflows (and future ones, when agentic AI changes their job). Finally, I talk about how approaching product development as small "bets" helps you build small and learn fast, so you can accelerate value creation.

Highlights/ Skip to: Continuing the journey: designing for users, workflows, and tasks (00:32) How UX impacts sales—not just usage and adoption (02:16) Understanding how you can leverage users' frustrations and perceived risks as fuel for building an indispensable data product (04:11) Definition of a UX outcome (07:30) Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04) Spotting friction and solving the right customer problems first (15:34) Collecting actionable user feedback (20:02) Moving users along the scale from frustration to satisfaction to delight (23:04) Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)

Quotes from Today's Episode

One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual 'it' is that we're trying to solve. People rush into building the fix, pitching the feature, or drafting the roadmap before they've taken even a moment to define what the user keeps tripping over in their day-to-day environment. And until you slow down and articulate that shared, observable frustration, you're basically operating on vibes and assumptions instead of behavior and reality. What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work. Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier, because you're no longer debating abstractions; you're working against the same measurable anchor. And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.

__

Want people to pay for your product? Solve an observable problem—not a vague information or data problem. What do I mean? "When you're trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones. Interpretive phrasing like 'they're overwhelmed' or 'they don't trust the data' feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem. If you can't film it happening, if you can't watch the behavior occur in real time, then you don't actually have a problem definition you can design around. Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can't articulate why. Those concrete behaviors are what allow teams to converge and say, 'Yes, that's the thing, that is the friction we agree must change,' and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort. And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that's grounded in reality instead of theory."

__

One of the reasons that measuring the usability/utility/satisfaction of your product's UX might seem hard is that you don't have a baseline definition of how satisfactory (or not) the product is right now. As such, it's very hard to tell if you're just making product changes—or making improvements that might make the product worth paying for at all, worth paying more for, or easier to buy. "It's surprisingly common for teams to claim they're improving something when they've never taken the time to document what the current state even looks like. If you want to create a meaningful improvement, something a user actually feels, you need to understand the baseline level of friction they tolerate today, not what you imagine that friction might be. Establishing a baseline is not glamorous work, but it's the work that prevents you from building changes that make sense on paper but do nothing to the real flow of work. When you diagram the existing workflow, when you map the sequence of steps the user actually takes, the mismatches between your mental model and their lived experience become crystal clear, and the design direction becomes far less ambiguous. That act of grounding yourself in the current state allows every subsequent decision (prioritizing fixes, determining scope, measuring progress) to be aligned with reality rather than assumptions. And without that baseline, you risk designing solutions that float in conceptual space, disconnected from the very pains you claim to be addressing."

__

Prototypes are a great way to learn—if you're actually treating them as a means to learn, and not a product you intend to deliver regardless of the feedback customers give you. "People often think prototyping is about validating whether their solution works, but the deeper purpose is to refine the problem itself. Once you put even a rough prototype in front of someone and watch what they do with it, you discover the edges of the problem more accurately than any conversation or meeting can reveal. Users will click in surprising places, ignore the part you thought mattered most, or reveal entirely different frictions just by trying to interact with the thing you placed in front of them. That process doesn't just improve the design, it improves the team's understanding of which parts of the problem are real and which parts were just guesses. Prototyping becomes a kind of externalization of assumptions, forcing you to confront whether you're solving the friction that actually holds back the flow of work or a friction you merely predicted. And every iteration becomes less about perfecting the interface and more about sharpening the clarity of the underlying problem, which is why the teams that prototype early tend to build faster, with better alignment, and far fewer detours."

__

Most founders and data people tend to measure UX quality by "counting usage" of their solution: tracking usage stats, analytics on sessions, etc. The problem with this is that it tells you nothing useful about whether people are satisfied ("meets spec") or delighted ("a product they can't live without"). These are product metrics—but they don't reflect how people feel. There are better measurements for evaluating users' experience that go beyond "willingness to pay." Payment is great, but in B2B products, buyers aren't always users—and we've all bought something based on the promise of what it would do for us, only to have the promise fall short. "In B2B analytics and AI products, the biggest challenge isn't complexity, it's ambiguity around what outcome the product is actually responsible for changing. Teams often define success in terms of internal goals like 'adoption,' 'usage,' or 'efficiency,' but those metrics don't tell you what the user's experience is supposed to look like once the product is working well. A product tied to vague business outcomes tends to drift because no one agrees on what the improvement should feel like in the user's real workflow. What you want are visible, measurable, user-centric outcomes: outcomes that describe how the user's behavior or experience will change once the solution is in place, down to the concrete actions they'll no longer need to take. When you articulate outcomes at that level, it forces the entire organization to align around a shared target, reduces the scope bloat that normally plagues enterprise products, and gives you a way to evaluate whether you're actually removing friction rather than just adding more layers of tooling. And ironically, the clearer the user outcome is, the easier it becomes to achieve the business outcome, because the product is no longer floating in abstraction, it's anchored in the lived reality of the people who use it."

Links: Listen to part one: Episode 182. Schedule a Design-Eyes Assessment with me and get clarity, now. |
|
|
182 - Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change
2025-11-10 · 08:00
Brian T. O’Neill
– host
Building B2B analytics and AI tools that people will actually pay for and use is hard. The reality is, your product won't deliver ROI if no one's using it. That's why first-principles thinking says you have to solve the usage problem first. In this episode, I'll explain why the key to user adoption is designing with the flow of work—building your solution around the natural workflows of your users to minimize the behavior changes you're asking them to make. When users clearly see the value in your product, it becomes easier to sell and removes many product-related blockers along the way. We'll explore how product design impacts sales, the difference between buyers and users in enterprise contexts, and why challenging the "data/AI-first" mindset is essential. I'll also share practical ways to align features with user needs, reduce friction, and drive long-term adoption and impact. If you're ready to move beyond the dashboard and start building products that truly fit the way people work, this episode is for you.

Highlights/Skip to: The core argument: why solving for user adoption first helps demonstrate ROI and facilitate sales in B2B analytics and AI products (1:34) How showing the value to actual end users—not just buyers—makes it easier to sell your product (2:33) Why designing for outcomes instead of outputs (dashboards, etc.) leads to better adoption and long-term product value (8:16) How to "see" beyond users' surface-level feature requests and solutions so you can solve for the actual, unspoken need—leading to an indispensable product (10:23) Reframing feature requests as design-actionable problems (12:07) Solving for unspoken needs vs. customer-requested features and functions (15:51) Why "disruption" is the wrong approach for product development (21:19)

Quotes:

"Customers' tolerance for poorly designed B2B software has decreased significantly over the last decade. People now expect enterprise tools to function as smoothly and intuitively as the consumer apps they use every day. Clunky software that slows down workflows is no longer acceptable, regardless of the data it provides. If your product frustrates users or requires extra effort to achieve results, adoption will suffer. Even the most powerful AI or analytics engine cannot compensate for a confusing or poorly structured interface. Enterprises now demand experiences that are seamless, efficient, and aligned with real workflows. This shift means that product design is no longer a secondary consideration; it is critical to commercial success. Founders and product leaders must prioritize usability, clarity, and delight in every interaction. Software that is difficult to use increases the risk of churn, lengthens sales cycles, and diminishes perceived value. Products must anticipate user needs and deliver solutions that integrate naturally into existing workflows. The companies that succeed are the ones that treat user experience as a strategic differentiator. Ignoring this trend creates friction, frustration, and missed opportunities for adoption and revenue growth. Design quality is now inseparable from product value and market competitiveness. The message is clear: if you want your product to be adopted, retain customers, and win in the market, UX must be central to your strategy."

—

"No user really wants to 'check a dashboard' or use a feature for its own sake. Dashboards, charts, and tables are outputs, not solutions. What users care about is completing their tasks, solving their problems, and achieving meaningful results. Designing around workflows rather than features ensures your product is indispensable. A workflow-first approach maps your solution to the actual tasks users perform in the real world. When we understand the jobs users need to accomplish, we can build products that deliver real value and remove friction. Focusing solely on features or data can create bloated products that users ignore or struggle to use. Outputs are meaningless if they do not fit into the context of a user's work. The key is to translate user needs into actionable workflows and design every element to support those flows. This approach reduces cognitive load, improves adoption, and ensures the product's ROI is realized. It also allows you to anticipate challenges and design solutions that make workflows smoother, faster, and more efficient. By centering design on actual tasks rather than arbitrary metrics, your product becomes a tool users can't imagine living without. Workflow-focused design directly ties to measurable outcomes for both end users and buyers. It shifts the conversation from features to value, making adoption, satisfaction, and revenue more predictable."

—

"Just because a product is built with AI or powerful data capabilities doesn't mean anyone will adopt it. Long-term value comes from designing solutions that users cannot live without. It's about creating experiences that take people from frustration to satisfaction to delight. Products must fit into users' natural workflows and improve their performance, efficiency, and outcomes. Buyers' perceived ROI is closely tied to meaningful adoption by end users. If users struggle, churn rises, and financial impact is diminished, regardless of technical sophistication. Designing for delight ensures that the product becomes a positive force in the user's daily work. It strengthens engagement, reduces friction, and builds customer loyalty. High-quality UX allows the product to demonstrate value automatically, without constant explanations or hand-holding. Delightful experiences encourage advocacy, referrals, and easier future sales. The real power of design lies in aligning technical capabilities with human behavior and workflow. When done correctly, this approach transforms a tool into an indispensable part of the user's job and a demonstrable asset for the business. Focusing on usability, satisfaction, and delight creates long-term adoption and retention, which is the ultimate measure of product success."

—

"Your product should enter the user's work stream like a raft on a river, moving in the same direction as their workflow. Users should not have to fight the current or stop their flow to use your tool. Introducing friction or requiring users to change their behavior increases risk, even if the product delivers ROI. The more naturally your product aligns with existing workflows, the easier it is to adopt and the more likely it is to be retained. Products that feel intuitive and effortless become indispensable, reducing conversations about usability during demos. By matching the flow of work, your solution improves satisfaction, accelerates adoption, and enhances perceived value. Disrupting workflows without careful observation can create new problems, frustrate users, and slow down sales. The goal is to move users from frustration to satisfaction to delight, all while achieving the intended outcomes. Designing with the flow of work ensures that every feature, interface element, and interaction fits seamlessly into the tasks users already perform. It allows users to focus on value instead of figuring out how to use the product. This alignment is key to unlocking adoption, retaining customers, and building long-term loyalty. Products that resist the natural workflow may demonstrate ROI on paper but fail in practice due to friction and low engagement. Success requires designing a product that supports the user's journey downstream without interruption or extra effort. When you achieve this, adoption becomes easier, sales conversations smoother, and long-term retention higher." |
|
|
Data Engineering Meetup | Berlin, Oct 30th
2025-10-30 · 17:30
Let's kick things off for another Meetup, this time focusing on the collaboration of data scientists and data engineers, as well as data streaming in the VW environment. Join us on October 30th in Berlin and bring all your questions!

Tom Kaltofen: "What Data Scientists Actually Need from Data Engineers: A 'Data Producer' Perspective" Tom Kaltofen is an Engineer at DHL Data & AI and a Creator at mloda.ai. In his keynote, he'll explore how data engineers can better support data scientists, BI, software engineers, analysts, and management by understanding their real needs and designing data products accordingly. He'll share practical lessons from his own industry experience: what worked, what didn't, and the trade-offs involved in real-world data workflows. Since data engineering often involves navigating competing approaches, we'll also look at some of the pros and cons of different methods, but always with the different data user groups in mind.

Alex Kalinnikov: "Event-driven data streaming platform at VW Group" Alex Kalinnikov is a Product Owner at CARIAD with over 10 years of experience in IT & Infrastructure. He will talk about how CARIAD handles 180M telemetry messages per day with a modern data streaming architecture, and how the CARIAD UDE solution leverages Confluent Kafka, Apache Flink, and Microsoft Azure to move terabytes of IoT data.

What to expect:
Timetable:
More on the applydata data engineering meetup page. Our goal is to form a local data-loving community, so join us and let's talk data together! --- At the event, sound, image, and video recordings are created and published for documentation purposes, as well as for presenting the event in publicly accessible media, on websites and blogs, and on social media. By participating in the event, participants implicitly consent to the aforementioned photo and/or video recordings. Find more information on data protection here. |
Data Engineering Meetup | Berlin, Oct 30th
|
|
181 - Lessons Learned Designing Orion, Gravity's AI Analyst Product, with CEO Lucas Thelosen (former Head of Product @ Google Data & AI Cloud)
2025-10-28 · 22:11
Brian T. O’Neill
– host
,
Lucas Thelosen
– guest
@ Gravity
On today's Promoted Episode of Experiencing Data, I'm talking with Lucas Thelosen, CEO of Gravity and creator of Orion, an AI analyst transforming how data teams work. Lucas was head of professional services (PS) for Looker, and eventually became Head of Product for Google's Data and AI Cloud prior to starting his own data product company. We dig into how his team built Orion, the challenge of keeping AI accurate and trustworthy when doing analytical work, and how they're thinking about balancing human control with automation when their product acts as a force multiplier for human analysts. In addition to talking about the product, we also talk about how Gravity arrived at use cases specific enough that a market would be willing to pay for this technology, and how they're thinking about pricing in today's more "outcomes-based" environment. Incidentally, one thing I didn't know when I first agreed to consider having Gravity and Lucas on my show was that Lucas has been a long-time proponent of data product management and operating with a product mindset. In this episode, he shares the "ah-hah" moment where things clicked for him around building data products in this manner, how pivotal that moment was, and how it helped accelerate his career from Looker to Google and now Gravity. If you're leading a data team, you're a forward-thinking CDO, or you're interested in commercializing your own analytics/AI product, my chat with Lucas should inspire you!

Highlights/ Skip to: Lucas's breakthrough came when he embraced a data product management mindset (02:43) How Lucas thinks about Gravity as being the instrumentalists in an orchestra, conducted by the user (4:31) Finding product-market fit by solving for a common analytics pain point (8:11) Analytics product and dashboard adoption challenges: why dashboards die, and thinking of analytics as changing the business gradually (22:25) What outcome-based pricing means for AI and analytics (32:08) The challenge of defining guardrails and ethics for AI-based analytics products [just in case somebody wants to "fudge the numbers"] (46:03) Lucas's closing thoughts about what AI is unlocking for analysts and how to position your career for the future (48:35)

Special Bonus for DPLC Community Members: Are you a member of the Data Product Leadership Community? After our chat, I invited Lucas to come give a talk about his journey of moving from "data" to "product" and adopting a producty mindset for analytics and AI work. He was more than happy to oblige. Watch for this in late 2025/early 2026 on our monthly webinar and group discussion calendar.

Note: today's episode is one of my rare Promoted Episodes. Please help support the show by visiting Gravity's links below.

Quotes from Today's Episode

"The whole point of data and analytics is to help the business evolve. When your reports make people ask new questions, that's a win. If the conversations today sound different than they did three months ago, it means you've done your job; you've helped move the business forward." — Lucas

"Accuracy is everything. The moment you lose trust, the business, the use case, it's all over. Earning that trust back takes a long time, so we made accuracy our number one design pillar from day one." — Lucas

"Language models have changed the game in terms of scale. Suddenly, we're facing all these new kinds of problems, not just in AI, but in the old-school software sense too. Things like privacy, scalability, and figuring out who's responsible." — Brian

"Most people building analytics products have never been analysts, and that's a huge disadvantage. If data doesn't drive action, you've missed the mark. That's why so many dashboards die quickly." — Lucas

"Re: collecting feedback so you know if your UX is good: I generally agree that qualitative feedback is the best place to start, not analytics [on your analytics!]. Especially in UX, analytics measure usage aspects of the product, not the subjective human experience. Experience is a collection of feelings and perceptions about how something went." — Brian

Links:
Gravity: https://www.bygravity.com
LinkedIn: https://www.linkedin.com/in/thelosen/
Email Lucas and team: [email protected] |
|
|
177 - Designing Effective Commercial AI Data Products for the Cold Chain with the CEO of Paxafe
2025-09-03 · 12:14
Brian T. O’Neill
– host
,
Ilya Preston
– co-founder and CEO
@ PAXAFE
In this episode, I talk with Ilya Preston, co-founder and CEO of PAXAFE, a logistics orchestration and decision intelligence platform for temperature-controlled supply chains (aka "cold chain"). Ilya explains how PAXAFE helps companies shipping sensitive products, like pharmaceuticals, vaccines, food, and produce, by delivering end-to-end visibility and actionable insights powered by analytics and AI that reduce product loss, improve efficiency, and support smarter real-time decisions. Ilya shares the challenges of building a configurable system that works for transportation, planning, and quality teams across industries. We also discuss their product development philosophy, team structure, and use of AI for document processing, diagnostics, and workflow automation.

Highlights/ Skip to: Intro to PAXAFE (2:13) How PAXAFE brings tons of cold chain data together in one user experience (2:33) Innovation in cold chain analytics is up, but so is cold chain product loss (4:42) The product challenge of getting sufficient telemetry data at the right level of specificity to derive useful analytical insights (7:14) Why and how PAXAFE pivoted away from providing IoT hardware to collect telemetry (10:23) How PAXAFE supports complex customer workflows, cold chain logistics, and complex supply chains (13:57) Who the end users of PAXAFE are, and how the product team designs for these users (20:00) Lessons learned when Ilya's team fell in love with its own product and didn't listen to the market (23:57) Pharma loses around $40 billion a year relying on 'Bob's intuition' in the warehouse: how PAXAFE balances institutional user knowledge with the cold hard facts of analytics (42:43)

Quotes from Today's Episode

"Our initial vision for what PAXAFE would become was 99.9% spot on. The only thing we misjudged was market readiness—we built a product that was a few years ahead of its time." – Ilya

"As an industry, pharma is losing $40 billion worth of product every year because decisions are still based on warehouse intuition about what works and what doesn't. In production, the problem is even more extreme, with roughly $800 billion lost annually due to temperature issues and excursions." – Ilya

"With our own design, our initial hypothesis and vision for what PAXAFE could be really shaped where we are today. Early on, we had a strong perspective on what our customers needed—and along the way, we fell in love with our own product and design." – Ilya

"We spent months perfecting risk scores… only to hear from customers, 'I don't care about a 71 versus a 62—just tell me what to do.' That single insight changed everything." – Ilya

"If you're not talking to customers or building a product that supports those conversations, you're literally wasting time. In the zero-to-product-market-fit phase, nothing else matters; you need to focus entirely on understanding your customers and iterating your product around their needs." – Ilya

"Don't build anything on day one, probably not on day two, three, or four either. Go out and talk to customers. Focus not on what they think they need, but on their real pain points. Understand their existing workflows and the constraints they face while trying to solve those problems." – Ilya

Links:
PAXAFE: https://www.paxafe.com/
LinkedIn for Ilya Preston: https://www.linkedin.com/in/ilyapreston/
LinkedIn for company: https://www.linkedin.com/company/paxafe/ |
|
|
171 - Who Can Succeed in a Data or AI Product Management Role?
2025-06-10 · 10:00
Brian T. O’Neill
– host
Today, I'm responding to a listener's question about what it takes to succeed as a data or AI product manager, especially if you're coming from roles like design/BI/data visualization, data science/engineering, or traditional software product management. This listener correctly observed that most of my content "seems more targeted at senior leadership" — and had asked if I could address this more IC-oriented topic on the show. I'll break down why technical chops alone aren't enough, and how user-centered thinking, business impact, and outcome-focused mindsets are key to real success — and where each of these prior roles brings strengths and/or weaknesses. I'll also get into the evolving nature of PM roles in the age of AI, and what I think the super-powered AI product manager will look like.

Highlights/ Skip to: Who can transition into an AI and data product management role? What does it take? (5:29) Software product managers moving into AI product management (10:05) Designers moving into data/AI product management (13:32) Moving into the AI PM role from the engineering side (21:47) Why the challenge of user adoption and trust is often the blocker to the business value (29:56) Designing change management into AI/data products as a skill (31:26) The challenge of value creation vs. delivery work — and how incentives are aligned for ICs (35:17) Quantifying the financial value of data and AI product work (40:23)

Quotes from Today's Episode

"Who can transition into this type of role, and what is this role? I'm combining these two things. AI product management often seems closely tied to software companies that are primarily leveraging AI, or trying to, and therefore, they tend to utilize this AI product management role. I'm seeing less of that in internal data teams, where you tend to see data product management more, which, for me, feels like an umbrella term that may include traditional analytics work, data platforms, and often AI and machine learning. I'm going to frame this more in the AI space, primarily because I think AI product management tends to capture the end-to-end product more frequently than data product management does." — Brian (2:55)

"There are three disciplines I'm going to talk about moving into this role: coming into AI and data PM from design and UX, coming into it from data engineering (or just broadly technical spaces), and coming into it from software product management. For software product managers moving into AI product management - as long as you're not someone that has two years of experience and then 18 years of repeating the second year of experience over and over again, and you've had a robust product management background across some different types of products - you can show that the domain doesn't necessarily stop you from producing value. I think you will have the easiest time moving into AI product management because you've shown that you can adapt across different industries." - Brian (9:45)

"Let's talk about designers next. I'm going to include data visualization, user experience research, user experience design, product design, all those types of broad design-category roles. Moving into data and/or AI product management—first of all, you don't hear about too many designers wanting to move into DPM roles, because oftentimes I don't think there's a lot of heavy UI and UX all the time in that space. Or at least the teams that are doing that work feel that's somebody else's job, because they're not doing end-to-end product thinking the way I talk about it. Therefore, a lot of times they don't see the application, the user experience, the human adoption, the change management; they're just not looking at the world that way, even though I think they should be." - Brian (13:32)

"Coming at this from the data and engineering side, this is the classic track for data product management. At least that is the way I tend to see it. I believe most companies prefer to develop this role in-house. My biggest concern is that you end up with job title changes, but not necessarily the benefits that are supposed to come with this. I do like learning by doing, but having a coach and someone senior who can coach your other PMs is important, because there's a lot of information that you won't necessarily get in a class or a course. It's going to come from experience doing the work." - Brian (22:26)

"This value piece is the most important thing, and I want to focus on that. This is something I frequently discuss in my training seminar: how do we attach financial value to the work we're doing? This is both art and science, but it's a language that anyone in a product management role needs to be comfortable with. Maybe you're finding it very hard to figure out how your data product contributes financial value because it's based on this waterfalling of "We own the model, and it's deployed on a platform. The platform then powers these other things, which in turn power an application. How do we determine the value of our tool?" These things are challenging, and if it's challenging for you, guess how hard it will be for stakeholders downstream if you haven't had the practice and the skills required to understand how to estimate value, both before we build something as well as after?" - Brian (31:51)

"If you don't want to spend your time getting to know how your business makes money or creates value, then [AI and data product management work] is not for you. It's just not. I would stay doing what you're doing already or find a different thing, because a lot of your time is going to be spent "managing up" for half the time, and then managing the product stuff "down." Then, sitting in this middle layer, trying to explain to the business what's going to come out and what the impact is going to be, in language that they care about and understand. You can't be talking about models, model accuracy, data pipelines, and all that stuff. They're not going to care about any of that." - Brian (34:08) |
|
|
167 - AI Product Management and Design: How Natalia Andreyeva and Team at Infor Nexus Create B2B Data Products that Customers Value
2025-04-16 · 11:59
Brian T. O’Neill
– host
,
Natalia Andreyeva
– Senior Director of Product Management
@ Infor
Today, I'm talking with Natalia Andreyeva from Infor about AI/ML product management and its application to supply chain software. Natalia is a Senior Director of Product Management for the Nexus AI/ML Solution Portfolio, and she walks us through what is new, and what is not, about designing AI capabilities in B2B software. We also got into why user experience is so critical in data-driven products, and the role of design in ensuring AI produces value. During our chat, Natalia hit on the importance of really nailing down customer needs through solid discovery, and the role of product leaders in this non-technical work. We also tackled some of the trickier aspects of designing for GenAI and digital assistants, the need to keep efforts strongly grounded in value creation for customers, and how even the best ML-based predictive analytics need to consider UX and the amount of evidence that customers need to believe the recommendations. During this episode, Natalia emphasizes a huge key to her work's success: keeping customers and users in the loop throughout the product development lifecycle.

Highlights/ Skip to: What Natalia does as a Senior Director of Product Management for Infor Nexus (1:13) Who the people using Infor Nexus products are, and what they accomplish when using them (2:51) Breaking down who makes up Natalia's team (4:05) What role does AI play in Natalia's work? (5:32) How do designers work with Natalia's team? (7:17) The problem that had Natalia rethink the discovery process when working with AI and machine learning applications (10:28) Why Natalia isn't worried about competitors catching up to her team's design work (14:24) How Natalia works with Infor Nexus customers to help them understand the solutions her team is building (23:07) The biggest challenges Natalia faces with building GenAI and machine learning products (27:25) Natalia's four steps to success in building AI products and capabilities (34:53) Where you can find more from Natalia (36:49)

Quotes from Today's Episode

"I always launch discovery with customers in the presence of the UX specialist [our designer]. We do the interviews together, and [regardless of who is facilitating] the goal is to understand the pain points of our customers by listening to how they do their jobs today. We do a series of these interviews, and we distill them into the customer needs; the problems we need to really address for the customers. And then we start thinking about how to [address these needs]. Data products are a particular challenge because it's not always the case that you can easily create a UX that allows users to realize the value they're searching for from the solution. And even if we can deliver it, consuming it is typically a challenge, too. So, this is where [design becomes really important]. [...] What I found through the years of experience is that it's very difficult to explain to people around you what it is that you're building when you're dealing with a data-driven product. Is it a dashboard? Is it a workboard? They understand the word data, but that's not what we are creating. We are creating the actual experience for the outcome that data will deliver to them indirectly, right? So, that's typically how we work." - Natalia Andreyeva (7:47)

"[When doing discovery for products without AI], we already have ideas for what we want to get out. We know that there is a space in the market for those solutions to come to life. We just have to understand where. For AI-driven products, it's not only about [the user's] understanding of the problem or the design; it is also about understanding whether the data exists and whether it's feasible to build the solution to address [the user's] problem. [Data] feasibility is an extremely important piece because it will drive the UX as well." - Natalia Andreyeva (10:50)

"When [the team] discussed the problem, it sounded like a simple calculation that needed to be created [for users]. In reality, it was an entire process of thinking of multiple people in the chain [of command] to understand whether or not a medical product was safe to be consumed. That's the outcome we needed to produce, and when we finally did, we actually celebrated with our customers and with our designers. It was one of the most difficult things that we had to design. So why did this problem actually get solved, and why were we the ones who solved it? It's because we took the time to understand the current user experience through [our customer] interviews. We connected the dots and translated it all into a visual solution. We would never be able to do that without the proper UX and design in place for the data." - Natalia Andreyeva (13:16)

"Everybody is pressured to come up with a strategy [for AI] or explain how AI is being incorporated into their solutions and platform, but it is still essential for all of my peers in product management to focus on the value [we're] creating for customers. You cannot bypass discovery. Discovery is the essential portion where you have to spend time with your customers, champions, advisors, and their leads, but especially users who are doing this [supply chain] job every single day—so we understand where the pain point really is for them, we solve that pain, and we solve it with our design team as a partner, so that the solution can surface value." - Natalia Andreyeva (22:08)

"GenAI is a new field and new technology. It's evolving quickly, and nobody really knows how to properly adapt or drive the adoption of AI solutions. The speed of innovation [in the AI field] is a challenge for everybody. People who work on the frontlines (i.e., product and engineering teams) have to stay way ahead of the market. Meanwhile, customers who are going to be using these [AI] solutions are not going to trust the [initial] outcomes. It's going to take some time for people to become comfortable with them. But it doesn't mean that your solution is bad or didn't find the market fit. It's just not time for your [solution] yet. Educating our users on the value of the solution is also part of that challenge, and [designers] have to be very careful that solutions are accessible. Users do not adopt intimidating solutions." - Natalia Andreyeva (27:41)

"First, discovery—where we search for the problems. From my experience, [discovery] works better if you're very structured. I always provide [a customer] with an outline of what needs to happen, so it's not a secret. Then, do the prototyping phase and keep the customer engaged so they can see the quick outcomes of those prototypes. This is where you also have to really include the feasibility of the data if you're building an AI solution, right? [Prototyping] can be short or long, but you need to keep the customer engaged throughout that phase so they see quick outcomes. Keep on validating it conceptually, you know, on the napkin, in Figma, it doesn't really matter; you have to keep them engaged. Then, once you validate it works and the customer likes it, then build. Don't really go into the deep development work until you know [all of this!] When you do build, create a beta solution. It only has to work so much to prove the value. Then, run the pilot, and if it's successful, build the MVP, then launch. It's simple, but it is a lot of work, and you have to keep your customers really engaged through all of those phases. If something doesn't work [along the way], try to pivot early enough so you still have a viable product at the end." - Natalia Andreyeva (34:53)

Links: Natalia's LinkedIn |
|
|
165 - How to Accommodate Multiple User Types and Needs in B2B Analytics and AI Products When You Lack UX Resources
2025-03-18 · 21:26
Brian T. O’Neill
– host
A challenge I frequently hear about from subscribers to my insights mailing list is how to design B2B data products for multiple user types with differing needs. From dashboards to custom apps and commercial analytics/AI products, data product teams often struggle to create a single solution that meets the diverse needs of technical and business users in B2B settings. If you're encountering this issue, you're not alone! In this episode, I share my advice for tackling this challenge, including the gift of saying "no." What are the patterns you should be looking out for in your customer research? How can you choose what to focus on with limited resources? What are the design choices you should avoid when trying to build these products? I'm hoping that by the end of this episode, you'll have some strategies to help reduce the size of this challenge—particularly if you lack a dedicated UX team to help you sort through your various user/stakeholder demands.

Highlights/ Skip to: The importance of proper user research and clustering "jobs to be done" around business importance vs. task frequency—ignoring the rest until your solution can show measurable value (4:29) What "level" of skill to design for, and why "as simple as possible" isn't what I generally recommend (13:44) When it may be advantageous to use role or feature-based permissions to hide/show/change certain aspects, UI elements, or features (19:50) Leveraging AI and LLMs in-product to allow learning about the user and progressive disclosure and customization of UIs (26:44) Leveraging the "old" solution of rapid prototyping—which is now faster than ever with AI, and can accelerate learning (capturing user feedback) (31:14) Five things I do not recommend doing when trying to satisfy multiple user types in your B2B AI or analytics product (34:14)

Quotes from Today's Episode

If you're not talking to your users and stakeholders sufficiently, you're going to have a really tough time building a successful data product for one user – let alone for multiple personas. Listen for repeating patterns in what your users are trying to achieve (tasks they are doing). Focus on the jobs and tasks they do most frequently or the ones that bring the most value to their business. Forget about the rest until you've proven that your solution delivers real value for those core needs. It's more about understanding the problems and needs, not just the solutions. The solutions tend to be easier to design when the problem space is well understood. Users often suggest solutions, but it's our job to focus on the core problem we're trying to solve; simply entering any inbound requests verbatim into JIRA and then "eating away" at the list is not usually a reliable strategy. (5:52)

I generally recommend not going for "easy as possible" at the cost of shallow value. Instead, you're going to want to design for some "mid-level" ability, understanding that this may make early user experiences with the product more difficult. Why? Oversimplification can mislead because data is complex, problems are multivariate, and data isn't always ideal. There are also "n" number of "not-first" impressions users will have with your product. This also means there is only one "first impression" they have. As such, the idea conceptually is to design an amazing experience for the "n" experiences, but not to the point that users never realize value and give up on the product. While I'd prefer no friction, technical products sometimes will have to have a little friction up front. However, don't use this as an excuse for poor design. This is hard to get right, even when you have design resources, and it's why UX design matters: thinking this through ends up determining, in part, whether users obtain the promise of value you made to them. (14:21)

As an alternative to rigid role and feature-based permissions in B2B data products, you might consider leveraging AI and/or LLMs in your UI as a means of simplifying and customizing the UI for particular users. This approach allows users to interrogate the product about the UI and customize it, and lets the product learn over time about the user's questions (jobs to be done) such that it becomes organically customized to their needs. This is in contrast to the rigid buckets that role and permission-based customization present. However, as discussed in my previous episode (164 - "The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge"), designing effective AI features and capabilities can also make things worse, due to the probabilistic nature of the responses GenAI produces. As such, this approach may benefit from a UX designer or researcher familiar with designing data products. Understanding what "quality" means to the user, and how to measure it, is especially critical if you're going to leverage AI and LLMs to make the product UX better. (20:13)

The old solution of rapid prototyping is even more valuable now—because it's possible to prototype even faster. However, prototyping is not just about learning whether your solution is on track. Whether you use AI or pencil and paper, prototyping early in the product development process should be framed as a "prop to get users talking." In other words, it is a prop to facilitate problem and need clarity—not solution clarity. Its purpose is to spark conversation and determine if you're solving the right problem. As you iterate, your need to continually validate the problem should shrink, which will present itself in the form of consistent feedback from end users. This is the point where you know you can focus on the design of the solution. Innovation happens when we learn, so the goal is to increase your learning velocity. (31:35)

Have you ever been caught in the trap of prioritizing feature requests based on volume? I get it. It's tempting to give the people what they think they want. For example, imagine ten users clamoring for control over specific parameters in your machine learning forecasting model. You could give them that control, thinking you're solving the problem because, hey, that's what they asked for! But did you stop to ask why they want that control? The reasons behind those requests could be wildly different. By simply handing over the keys to all the model parameters, you might be creating a whole new set of problems. Users now face a "usability tax," trying to figure out which parameters to lock and which to let float. The key takeaway? Focus on the frequency with which the same problems occur across your users, not just the frequency with which a given tactic or "solution" method (i.e., "model" or "dashboard" or "feature") appears in a stakeholder or user request. Remember, problems are often disguised as solutions. We've got to dig deeper and uncover the real needs, not just address the symptoms. (36:19) |
|
|
164 - The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge
2025-03-04 · 23:29
Brian T. O’Neill
– host
Are you prepared for the hidden UX taxes that AI and LLM features might be imposing on your B2B customers—without your knowledge? Are you certain that your AI product or features are truly delivering value, or are there unseen taxes working against your users and your product/business? In this episode, I'm delving into some of the UX challenges that I think need to be addressed when implementing LLM and AI features in B2B products. While AI seems to offer the chance of significantly enhanced productivity, it also introduces a new layer of complexity for UX design. This complexity is not limited to the challenges of designing in a probabilistic medium (i.e., ML/AI); it also lies in being able to define what "quality" means. When the product team does not have a shared understanding of what a measurably better UX outcome means, improved sales and user adoption are less likely to follow. I'll also discuss aspects of designing for AI that may be invisible on the surface. How might AI-powered products change the work of B2B users? What are some of the traps I see some startup clients and founders I advise in MIT's Sandbox venture fund fall into? If you're a product leader in B2B/enterprise software and want to make sure your AI capabilities don't end up creating more damage than value for users, this episode will help!

Highlights/ Skip to: Improving your AI model accuracy improves outputs—but customers only care about outcomes (4:02) AI-driven productivity gains also put the customer's "next problem" in their face sooner. Are you addressing the most urgent problem they now have—or the one they used to have? (7:35) Products that win will combine AI with tastefully designed deterministic software—because doing everything for everyone well is impossible, and most models alone aren't products (12:55) Just because your AI app or LLM feature can do "X" doesn't mean people will want it or change their behavior (16:26) AI agents sound great—but there is a human UX too, and it must enable trust and intervention at the right times (22:14) Not overheard from customers: "I would buy this/use this if it had AI" (26:52) Adaptive UIs sound like they'll solve everything—but to reduce friction, they need to adapt to the person, not just the format of model outputs (30:20) Introducing AI introduces more states and scenarios that your product may need to support, which may not be obvious right away (37:56)

Quotes from Today's Episode

Product leaders have to decide how much effort and resources to put into model improvements versus improving the user's experience. Obviously, model quality is important in certain contexts and regulated industries, but when GenAI errors and confabulations are lower risk to the user (i.e., they create minor friction or inconveniences), the broader user experience that you facilitate might be what actually determines the true value of your AI features or product. Model accuracy alone is not necessarily going to lead to happier users or increased adoption. ML models can be quantifiably tested for accuracy with structured tests, but the fact that they're easier to test for quality than something like UX doesn't mean users value those improvements more. The product will stand a better chance of creating business value when it clearly demonstrates that it is improving your users' lives. (5:25)

When designing AI agents, there is still a human UX - a beneficiary - in the loop. They have an experience, whether you designed it with intention or not. How much transparency needs to be given to users when an agent does work for them? Should users be able to intervene when the AI is doing this type of work? Handling errors is something we do in all software, but what about retraining and learning, so that future user experiences are better? Is the system learning anything while it's going through this—and can I tell if it's learning what I want/need it to learn? What about humans in the loop who might interact with or be affected by the work the agent is doing, even if they aren't the agent's owner or "user"? Whose outcomes matter here? At what cost? (22:51)

Customers primarily care about things like raising or changing their status, making more money, making their job easier, and saving time. In fact, I believe marketing a product around GenAI may eventually signal a burden to customers, thanks to the inflated and unmet expectations created by AI that is poorly implemented in the product UX. Don't assume it's going to be bought just because it uses AI in a novel way. Customers aren't sitting around wishing for "disruption" from your product; quite the opposite. AI or not, you need to make the customer the hero. Your AI will shine when it delivers an outsized UX outcome for your users. (27:49)

What kind of UX are you delivering right out of the box when a customer tries out your AI product or feature? Did you design it for tire kicking, playing around, and user stress testing? Or just an idealistic happy path? GenAI features inside B2B products should surface capabilities and constraints, particularly around where users can create value for themselves quickly. Natural hints and well-designed prompt nudges in LLMs, for example, are important to users and to your product team, because you're setting a more realistic expectation of what's possible with customers and helping them get to an outcome sooner. You're also teaching them how to use your solution to get the most value—without asking them to go read a manual. (38:21) |
|
|
Introducing Host Amritha Arun Babu
2025-02-21 · 12:00
Amritha Arun Babu
– Product Leader
@ Klaviyo
The Data Product Management In Action podcast, brought to you by executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In this episode of Data Product Management in Action, we introduce our new host, Amritha Arun Babu! With over eight years of experience, Amritha shares her transition from engineering to product management and her impressive journey. She discusses scaling Amazon Today's same-day delivery program, building code-to-cash products, and her current role at Klaviyo, where she's shaping AI-driven features and refining ML platforms. Amritha emphasizes the value of understanding user needs, designing secure, scalable systems, and overcoming cross-functional challenges. Her advice to fellow product managers: network, share your experiences, and enjoy the ride!

About our Host Amritha Arun Babu: Amritha is an accomplished Product Leader with over a decade of experience building and scaling products across AI platforms, supply chain systems, and enterprise workflows in industries such as e-commerce, AI/ML, and marketing automation. At Amazon, she led machine learning platform products powering recommendation and personalization engines, building tools for model experimentation, deployment, and monitoring that improved efficiency for 1,500+ ML scientists. At Wayfair, she managed international supply chain systems, overseeing contracts, billing, product catalogs, and vendor operations, delivering cost savings and optimizing large-scale workflows. At Klaviyo, she drives both AI infrastructure and customer-facing AI tools, including recommendation engines, content generation assistants, and workflow automation agents, enabling scalable and personalized marketing workflows. Earlier, she worked on enterprise systems and revenue operations workflows, focusing on cost optimization and process improvements in complex technical environments. Amritha excels at bridging technical depth with strategic clarity, leading cross-functional teams, and delivering measurable business outcomes across diverse domains.

Connect with Amritha on LinkedIn. All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights! |
Data Product Management in Action: The Practitioner's Podcast |
|
162 - Beyond UI: Designing User Experiences for LLM and GenAI-Based Products
2025-02-04 · 05:30
Simon Landry
– guest
@ Thomson Reuters
,
Brian T. O’Neill
– host
,
Paz Perez
– guest
@ Google
,
Greg Nudelman
– guest
@ Sumo Logic
I’m doing things a bit differently for this episode of Experiencing Data. For the first time on the show, I’m hosting a panel discussion. I’m joined by Thomson Reuters’s Simon Landry, Sumo Logic’s Greg Nudelman, and Google’s Paz Perez to chat about how we design user experiences that improve people’s lives and create business impact when we expose LLM capabilities to our users. With the rise of AI, there are a lot of opportunities for innovation, but there are also many challenges—and frankly, my feeling is that a lot of these capabilities right now are making things worse for users, not better. We’re looking at a range of topics such as the pros and cons of AI-first thinking, collaboration between UX designers and ML engineers, and the necessity of diversifying design teams when integrating AI and LLMs into B2B products. Highlights/ Skip to: Thoughts on the current state of LLM implementations and their impact on user experience (1:51) The problems that can come with the “AI-first” design philosophy (7:58) Should a company’s design resources go toward AI development? (17:20) How designers can navigate “fuzzy experiences” (21:28) Why you need to narrow and clearly define the problems you’re trying to solve when building LLM products (27:35) Why diversity matters in your design and research teams when building LLMs (31:56) Where you can find more from Paz, Greg, and Simon (40:43) Quotes from Today’s Episode “[AI] will connect the dots. It will argue pro, it will argue against, it will create evidence supporting and refuting, so it’s really up to us to kind of drive this. If we understand the capabilities, then it is an almost limitless field of possibility. And these things are taught, and it’s a fundamentally different approach to how we build user interfaces. They’re no longer completely deterministic. They’re also extremely personalized to the point where it’s ridiculous.” - Greg Nudelman (12:47) “To put an LLM into a product means that there’s a non-zero chance your user is going to have a [negative] experience and no longer be your customer. That is a giant reputational risk, and there’s also a financial cost associated with running these models. I think we need to take more of a service design lens when it comes to [designing our products with AI] and ask what is the thing somebody wants to do… not on my website, but in their lives? What brings them to my [product]? How can I imagine a different world that leverages these capabilities to help them do their job? Because what [designers] are competing against is [a customer workflow] that probably worked well enough.” - Simon Landry (15:41) “When we go general availability (GA) with a product, that traditionally means [designers] have done all the research, got everything perfect, and it’s all great, right? Today, GA is a starting gun. We don’t know [if the product is working] unless we [seek out user feedback]. A massive research method is needed. [We need qualitative research] like sitting down with the customer and watching them use the product to really understand what is happening[…] but you also need to collect data. What are they typing in? What are they getting back? Is somebody who’s typing in this type of question always having a short interaction? Let’s dig into it with rapid, iterative testing and evaluation, so that we can update our model and then move forward. Launching a product these days means the starting gun has been fired.
Put the research to work to figure out the next step.” - Greg Nudelman (23:29) “I think that having diversity on your design team (i.e. gender, level of experience, etc.) is critical. We’ve already seen some terrible outcomes. Multiple examples where an LLM is crafting horrendous emails, introductions, and so on. This is exactly why UXers need to get involved [with building LLMs]. This is why diversity in UX and on your tech team that deals with AI is so valuable. Number one piece of advice: get some researchers. Number two: make sure your team is diverse.” - Greg Nudelman (32:39) “It’s extremely important to have UX talks with researchers, content designers, and data teams. It’s important to understand what a user is trying to do, the context [of their decisions], and the intention. [Designers] need to help [the data team] understand the types of data and prompts being used to train models. Those things are better when they’re written and thought of by [designers] who understand where the user is coming from. [Design teams working with data teams] are getting much better results than the [teams] that are working in a vacuum.” - Paz Perez (35:19) Links: Milly Barker’s LinkedIn post; Greg Nudelman’s Value Matrix article; Greg Nudelman’s website; Paz Perez on Medium; Paz Perez on LinkedIn; Simon Landry on LinkedIn |
|
|
161 - Designing and Selling Enterprise AI Products [Worth Paying For]
2025-01-21 · 05:30
Brian T. O’Neill
– host
With GenAI and LLMs comes great potential to delight and damage customer relationships—both during the sale, and in the UI/UX. However, are B2B AI product teams actually producing real outcomes, on the business side and the UX side, such that customers find these products easy to buy, trustworthy, and indispensable? What is changing with customer problems as a result of LLM and GenAI technologies becoming more readily available to implement into B2B software? Anything? Is your current product or feature development being driven by the fact that you might now be able to solve it with AI? The “AI-first” team sounds like it’s cutting edge, but is that really determining what a customer will actually buy from you? Today I want to talk to you about the interplay of GenAI, customer trust (both user and buyer trust), and the role of UX in products using probabilistic technology. These thoughts are based on my own perceptions as a “user” of AI “solutions” (quotes intentional!), conversations with prospects and clients at my company (Designing for Analytics), as well as the bright minds I mentor over at the MIT Sandbox innovation fund. I also wrote an article about this subject if you’d rather read an abridged version of my thoughts. Highlights/ Skip to: AI and LLM-Powered Products Do Not Turn Customer Problems into “Now” and “Expensive” Problems (4:03) Trust and Transparency in the Sale and the Product UX: Handling LLM Hallucinations (Confabulations) and Designing for Model Interpretability (9:44) Selling AI Products to Customers Who Aren’t Users (13:28) How LLM Hallucinations and Model Interpretability Impact User Trust of Your Product (16:10) Probabilistic UIs and LLMs Don’t Negate the Need to Design for Outcomes (22:48) How AI Changes (or Doesn’t) Our Benchmark Use Cases and UX Outcomes (28:41) Closing Thoughts (32:36) Quotes from Today’s Episode “Putting AI or GenAI into a product does not change the urgency or the depth of a particular customer problem; it just changes the solution space. Technology shifts in the last ten years have enabled founders to come up with all sorts of novel ways to leverage traditional machine learning, symbolic AI, and LLMs to create new products and disrupt established products; it would be foolish to ignore these developments as a product leader. All this technology does is change the possible solutions you can create. It does not change your customer situation, problem, or pain in depth, severity, or frequency. In fact, it might actually cause some new problems. I feel like most teams spend a lot more time living in the solution space than they do in the problem space. Fall in love with the problem and love that problem regardless of how the solution space may continue to change.” (4:51) “Narrowly targeted, specialized AI products are going to beat solutions trying to solve problems for multiple buyers and customers. If you’re building a narrow, specific product for a narrow, specific audience, one of the things you have on your side is a solution focused on a specific domain used by people who have specific domain experience. You may not need a trillion-parameter LLM to provide significant value to your customer. AI products that have a more specific focus and address a very narrow ICP are, I believe, more likely to succeed than those trying to serve too many use cases—especially when GenAI is being leveraged to deliver the value. I think this can be true for platform products as well.
Narrowing the audience you want to serve also narrows the scope of the product, which in turn should increase the value that you bring to that audience—in part because you probably will have fewer trust, usability, and utility problems resulting from trying to leverage a model for a wide range of use cases.” (17:18) “Probabilistic UIs and LLMs are going to create big problems for product teams, particularly if they lack a set of guiding benchmark use cases. I talk a lot about benchmark use cases as a core design principle in data-rich enterprise products. Why? Because a lot of B2B and enterprise products fall into the game of ‘adding more stuff over time.’ ‘Add it so you can sell it.’ As products and software companies begin to mature, you start having product owners and PMs attached to specific technologies or parts of a product. Figuring out how to improve the customer’s experience over time against the most critical problems and needs they have is a harder game to play than simply adding more stuff—especially if you have no benchmark use cases to hold you accountable. It’s hard to make the product indispensable if it’s trying to do 100 things for 100 people.” (22:48) “Product is a hard game, and design and UX is by far not the only aspect of product that we need to get right. A lot of designers don’t understand this, and they think if they just nail design and UX, then everything else solves itself. The reason the design and experience part is hard is that it’s tied to behavior change, especially if you are ‘disrupting’ an industry, incumbent tool, application, or product. You are in the behavior-change game, and it’s really hard to get it right. But when you get it right, it can be really amazing and transformative.” (28:01) “If your AI product is trying to do a wide variety of things for a wide variety of personas, it’s going to be harder to determine appropriate benchmarks and UX outcomes to measure and design against. Given LLM hallucinations, the increased problem of trust, model drift problems, etc., your AI product has to actually innovate in a way that is both meaningful and observable to the customer. It doesn’t matter what your AI is trying to “fix.” If [users] can’t see what the benefit is to them personally, it doesn’t really matter if technically you’ve done something in a new and novel way. They’re just not going to care, because that question of ‘what’s in it for me?’ is always sitting in the back of their brain, whether it’s stated out loud or not.” (29:32) Links Designing for Analytics mailing list |
|
|
157 - How this materials science SAAS company brings PM+UX+data science together to help materials scientists accelerate R&D
2024-11-26 · 05:30
Brian T. O’Neill
– host
,
Ori Yudilevich
– Chief Product Officer
@ MaterialsZone
R&D for materials-based products can be expensive because improving a product’s materials takes a lot of experimentation that historically has been slow to execute. In traditional labs, you might change one variable, re-run your experiment, and see if the data shows improvements in your desired attributes (e.g. strength, shininess, texture/feel, power retention, temperature, stability, etc.). However, today, there is a way to leverage machine learning and AI to reduce the number of experiments a materials scientist needs to run to gain the improvements they seek. Materials scientists spend a lot of time in the lab—away from a computer screen—so how do you design a desirable informatics SAAS that actually works and fits into the workflow of these end users? As the Chief Product Officer at MaterialsZone, Ori Yudilevich came on Experiencing Data with me to talk about this challenge and how his PM, UX, and data science teams work together to produce a SAAS product that makes the benefits of materials informatics so valuable that materials scientists depend on their solution to be time and cost-efficient with their R&D efforts. We covered: (0:45) Explaining what Ori does at MaterialsZone and who their product serves (2:28) How Ori and his team help make materials science testing more efficient through their SAAS product (9:37) How they design a UX that can work across various scientific domains (14:08) How “doing product” at MaterialsZone matured over the past five years (17:01) Explaining the “Wizard of Oz” product development technique (21:09) The importance of integrating UX designers into the “Wizard of Oz” (23:52) The challenges MaterialsZone faces when trying to get users to adopt their product (32:42) Advice Ori would've given himself five years ago (33:53) Where you can find more from MaterialsZone and Ori Quotes from Today’s Episode “The fascinating thing about materials science is that you have this variety of domains, but all of these things follow the same process. One of the problems [consumer goods companies] face is that they have to do lengthy testing of their products. This is something you can use machine learning to shorten. [Product research] is an iterative process that typically takes a long time. Using your data effectively and using machine learning to predict what can happen, what’s better to try out, and what will reduce costs can accelerate time to market.” - Ori Yudilevich (3:47) “The difference [in time spent testing a product] can be up to 70% [i.e. you can run 70% fewer experiments using ML.] That [also] means 70% less resources you’re using. Under the ‘old system’ of trial and error, you were just trying out a lot of things. The human mind cannot process a large number of parameters at once, so [a materials scientist] would just start playing only with [one parameter at a time]. You’ll have many experiments where you just try to optimize [for] one parameter, but then you might have 20, 30, or 100 more [to test]. Using machine learning, you can change a lot of parameters at once. The model can learn what has the most effect, what has a positive effect, and what has a negative effect. The differences can be really huge.” - Ori Yudilevich (5:50) “Once you go deeper into a use case, you see that there are a lot of differences. The types of raw materials, the data structure, the quantity of data, etc. For example, with batteries, you have lots of data because you can test hundreds all at once. Whereas with something like ceramics, you don’t try so many [experiments].
You just can’t. It’s much slower. You can’t do so many [experiments] in parallel. You have much less data. Your models are different, and your data structure is different. But there’s also quite a lot of commonality because you’re storing the data. In the end, you have each domain, some raw materials, formulations, tests that you’re doing, and different statistical plots that are very common.” - Ori Yudilevich (11:24) “We’ll typically do what we call the ‘Wizard of Oz’ technique. You simulate as if you have a feature, but you’re actually working for your client behind the scenes. You tell them [the simulated feature] is what you’re doing, but then measure [the client’s response] to understand if there’s any point in further developing that feature. Once you validate it, have enough data, and know where the feature is going, then you’ll start designing it and releasing it in incremental stages. We’ve made a lot of progress in how we discover opportunities and how we build something iteratively to make sure that we’re always going in the right direction.” - Ori Yudilevich (15:56) “The main problem we’re encountering is changing the mindset of users. Our users are not people who sit in front of a computer. These are researchers who work in [a materials science] lab. The challenge [we have] is getting people to use the platform more. To see it’s worth [their time] to look at some insights, and run the machine learning models. We’re always looking for ways to make that transition faster… and I think the key is making [the user experience] just fun, easy, and intuitive.” - Ori Yudilevich (24:17) “Even if you make [the user experience] extremely smooth, if [users] don’t see what they get out of it, they’re still not going to [adopt your product] just for the sake of doing it. What we find is if this [product] can actually make them work faster or develop better products, that gets them interested. If you’re adopting these advanced tools, it makes you a better researcher and worker. People who [adopt those tools] grow faster. They become leaders in their team, and they slowly drag the others in.” - Ori Yudilevich (26:55) “Some of [MaterialsZone’s] most valuable employees are the people who have been users. Our product manager is a materials scientist. I’m not a materials scientist, and it’s hard to imagine being that person in the lab. What I think is correct turns out to be completely wrong because I just don’t know what it’s like. Having [materials scientists] who’ve made the transition to software and data science? You can’t replace that.” - Ori Yudilevich (31:32) Links Referenced Website: https://www.materials.zone LinkedIn: https://www.linkedin.com/in/oriyudilevich/ Email: [email protected] (For a rough illustration of the multi-parameter approach described at 5:50, see the sketch below.) |
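The multi-parameter experimentation Ori describes at (5:50) is, in spirit, surrogate modeling: fit a model on past experiments, rank which formulation parameters actually move the target property, and screen candidate formulations virtually before spending lab time. Below is a minimal, hypothetical sketch in Python using scikit-learn; the parameter names and data are invented stand-ins, not MaterialsZone’s actual models or product.

```python
# Hypothetical surrogate-modeling sketch: learn parameter effects from past
# experiments so future lab runs can vary many parameters at once.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Stand-in for a lab notebook: 200 past experiments over 6 formulation parameters.
params = ["binder_pct", "cure_temp", "cure_time", "filler_pct", "ph", "mix_speed"]
X = rng.uniform(0, 1, size=(200, len(params)))
# Stand-in target property (e.g., tensile strength); in reality, measured in the lab.
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Rank which parameters drive the property, instead of varying one at a time.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(params, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>12}: {score:.3f}")

# Screen candidate formulations virtually before committing lab time.
candidates = rng.uniform(0, 1, size=(10, len(params)))
best = candidates[np.argmax(model.predict(candidates))]
print("most promising candidate:", dict(zip(params, best.round(2))))
```

The specific model matters less than the workflow: any regressor that can rank parameter effects and score unseen candidates supports the “change a lot of parameters at once” approach described in the episode.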
|
|
155 - Understanding Human Engagement Risk When Designing AI and GenAI User Experiences
2024-10-29 · 04:30
Brian T. O’Neill
– host
,
Ovetta Sampson
– guest
@ Google
The relationship between AI and ethics is both developing and delicate. On one hand, the GenAI advancements to date are impressive. On the other, extreme care needs to be taken as this tech continues to quickly become more commonplace in our lives. In today’s episode, Ovetta Sampson and I examine the crossroads ahead for designing AI and GenAI user experiences. While professionals and the general public are eager to embrace new products, recent breakthroughs, etc., we still need to have some guardrails in place. If we don’t, data can easily get mishandled, and people could get hurt. Ovetta possesses firsthand experience working on these issues as they sprout up. We look at who should be on a team designing an AI UX, the risks associated with GenAI, ethics, and what we need to be thinking about going forward. Highlights/ Skip to: (1:48) Ovetta's background and what she brings to Google’s Core ML group (6:03) How Ovetta and her team work with data scientists and engineers deep in the stack (9:09) How AI is changing the front-end of applications (12:46) The type of people you should seek out to design your AI and LLM UXs (16:15) Explaining why we’re only at the very start of major GenAI breakthroughs (22:34) How GenAI tools will alter the roles and responsibilities of designers, developers, and product teams (31:11) The potential harms of carelessly deploying GenAI technology (42:09) Defining acceptable levels of risk when using GenAI in real-world applications (53:16) Closing thoughts from Ovetta and where you can find her Quotes from Today’s Episode “If artificial intelligence is just another technology, why would we build entire policies and frameworks around it? The reason why we do that is because we realize there are some real thorny ethical issues [surrounding AI]. Who owns that data? Where does it come from? Data is created by people, and all people create data. That’s why companies have strong legal, compliance, and regulatory policies around [AI], how it’s built, and how it engages with people. Think about having a toddler and then training the toddler on everything in the Library of Congress and on the internet. Do you release that toddler into the world without guardrails? Probably not.” - Ovetta Sampson (10:03) “[When building a team] you should look for a diverse thinker who focuses on the limitations of this technology, not its capability. You need someone who understands that the end destination of that technology is an engagement with a human being. You need somebody who understands how they engage with machines and digital products. You need that person to be passionate about testing various ways that relationships can evolve. When we go from execution on code to machine learning, we make a shift from [human] agency to a shared-agency relationship. The user and machine both have decision-making power. That’s the paradigm shift that [designers] need to understand. You want somebody who can keep that duality in their head as they’re testing product design.” - Ovetta Sampson (13:45) “We’re in for a huge taxonomy change. There are words that have very specific definitions today. Software engineer. Designer. Technically skilled. Digital. Art. Craft. AI is changing all that. It’s changing what it means to be a software engineer. Machine learning used to be the purview of data scientists only, but with GenAI, all of that is baked into Gemini. So, now you start at a checkpoint, and you’re like, all right, let’s go make an API, right?
So, the skills, the understanding, the knowledge, the taxonomy even: how we talk about these things, how do we talk about the machine who speaks to us, talks to us, who could create a podcast out of just voice memos?” - Ovetta Sampson (24:16) “We have to be very intentional [when building AI tools], and that’s the kind of folks you want on teams. [Designers] have to go and play [out] scary scenarios. We have to do that. No designer wants to be “Negative Nancy,” but this technology has huge potential to harm. It has harmed. If we don’t have the skill sets to recognize, document, and minimize harm, that needs to be part of our skill set. If we’re not looking out for the humans, then who actually is?” - Ovetta Sampson (32:10) “[Research shows] things happen to our brain when we’re exposed to artificial intelligence… there are real human engagement risks that are an opportunity for design. When you’re designing a self-driving car, you can’t just let the person go to sleep unless the car is fully [automated] and every other car on the road is self-driving. If there are humans behind the wheel, you need to have a feedback loop system—something that’s going to happen [in case] the algorithm is wrong. If you don’t have that designed, there’s going to be a large human engagement risk that a car is going to run over somebody who’s [for example] pushing a bike up a hill[...] Why? The car could not calculate the right speed and pace of a person pushing their bike. It had the speed and pace of a person walking, the speed and pace of a person on a bike, but not the two together. Algorithms will be wrong, right?” - Ovetta Sampson (39:42) “Model goodness used to be the purview of companies and the data scientists. Think about the first search engines. Their model goodness was [about] 77%. That’s good, right? And then people started seeing photos of apes when [they] typed in ‘black people.’ Companies have to get used to going to a wide spectrum of their customers and asking them when their [models or apps] are right and wrong. They can’t take on that burden themselves anymore. Having ethically sourced data input and variables is hard work. If you’re going to use this technology, you need to put into place the governance that needs to be there.” - Ovetta Sampson (44:08) |
|
|
152 - 10 Reasons Not to Get Professional UX Design Help for Your Enterprise AI or SAAS Analytics Product
2024-09-17 · 17:58
Brian T. O’Neill
– host
In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s ok! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically today, I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SAAS analytics or AI product(s), today I’m going to tell you when you should NOT get help! Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin. Finally, analytical thinkers, especially data product managers it seems, often want to quantify all forms of value they produce in hard monetary units—and so in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-) Ready? Highlights/ Skip to: (1:52) Going for short, easy wins (4:29) When you think you have good design sense/taste (7:09) The impending changes coming with GenAI (11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible (15:36) Agile and process FTW? (18:59) UX design for and with platform products (21:14) The risk of involving designers who don’t understand data, analytics, AI, or your complex domain considerations (30:09) Designing after the ML models have been trained—and it’s too late to go back (34:59) Not tapping professional design help when your user base is small, and you have routine access and exposure to them (40:01) Explaining the value of UX design investments to your stakeholders when you don’t 100% control the budget or decisions Quotes from Today’s Episode “It is true that most impactful design often creates more product and engineering work because humans are messy. While there sometimes are these magic, small GUI-type changes that have a big impact downstream, the big-picture value of UX can be lost if you’re simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you’re playing inside your team: are you working to produce UX and business outcomes or shipping outputs on time?” (3:18) “If you’re building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We’ve all seen the challenges of this with LLMs, [when] you’re unable to get it to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, ‘Is the answer not in there?’ or ‘Am I not prompting it correctly?’ If you think that most of this is just a technical data science problem, then don’t bother to invest in UX design work…” (9:52) “Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person’s life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power, especially in an enterprise context.
Working with a designer who solely insists on simplifying everything at all costs, regardless of your stated business outcome goals, is a red flag—and a reason not to invest in UX design—at least with them!” (12:28) “I think what an analytics product manager [or] an AI product manager needs to accept is there are other ways to measure the value of UX design’s contribution to your product and to your organization. Let’s say that you have a mission-critical internal data product that’s used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel like those things… What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there’s a lot of value in that.” (43:33) “Let’s think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We’re spending money on something that most likely will have zero economic value this year because we’re actually trying not to have to file claims. Yet this industry does very well because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is greater than whatever the insurance plan costs. If your solution can build feelings of confidence and security, what is that worth? Does ‘hard to measure precisely’ necessarily mean ‘low value’?” (47:26) |
|
|
LSUG Formal Meetup - 10 September 2024
2024-09-10 · 17:00
Meet us at Kubrick Group HQ for another formal user group! Tony Burton, Head of Data Engineering at Sporting Solutions, will take us through their Snowflake story so far: how they onboarded, what the main appeal was for their complex data, how they’ve adopted Snowflake features along the way, how they’ve managed costs, what they’ve done with the data once it’s there, and how it’s driven sales and revenue. Then, our event host, Euan Newlands, Data Engineer at Kubrick Group, will get us up to speed on Snowpark Pandas! Agenda 6:00 PM: Welcome 6:30 PM: Sporting Solutions 7:00 PM: Snowpark Pandas 7:30 PM: Closing Items Speakers Tony Burton - Sporting Solutions (Head of Data Engineering) Euan Newlands - Kubrick Group (Data Engineer) Hosts Peter Aubrey - Snowflake (Senior Sales Engineer) An experienced Director of Solutions Consulting with a demonstrated history of working in the computer software industry, and a strong consulting professional skilled in Business Process, Enterprise Software, PaaS, Partner Management, and Agile Methodologies. Piers Batchelor - Astrato Analytics (Sr. Product Manager) Piers Batchelor is an award-winning data expert and product strategist, designing innovative solutions for real-world businesses. Piers' experience covers a number of industries, and he has architected numerous cloud BI products, modernising traditional data visualisation and data storytelling concepts, and re-imagining exploratory analysis with AI. He is part of the 2023 Snowflake Data Superhero coh… Partner Kubrick Group (https://www.kubrickgroup.com/uk/) When it comes to realising value from tech, organisations have a problem: they either struggle to build a team from scarce talent or fork out on consultants that won’t work for them. Literally. But there is a better way. Kubrick introduces Next-Gen Consulting. We help organisations accelerate delivery and build their teams, driving product adoption that lasts. Integrate our consultants into your team - and keep them there. Snowflake User Groups. Complete your event RSVP here: https://usergroups.snowflake.com/events/details/snowflake-london-presents-lsug-formal-meetp-10-september-2024/. |
LSUG Formal Meetup - 10 September 2024
|
|
Brian T. O’Neill
– host
Let’s talk about design for AI (which, more and more, I agree means GenAI to those outside the data space). The hype around GenAI and LLMs—particularly as it relates to dropping these in as features into a software application or product—seems to me, at this time, to largely be driven by FOMO rather than real value. In this “part 1” episode, I look at the importance of solid user experience design and outcome-oriented thinking when deploying LLMs into enterprise products. Challenges with immature AI UIs, the role of context, the constant game of understanding what accuracy means (and how much this matters), and the potential impact on human workers are also examined. Through a hypothetical scenario, I illustrate the complexities of using LLMs in practical applications, stressing the need for careful consideration of benchmarks and the acceptance of GenAI's risks. I also want to note that LLMs are a very immature space in terms of UI/UX design—even if the foundation models continue to mature at a rapid pace. As such, this episode is more about the questions and mindset I would be considering when integrating LLMs into enterprise software than a suggestion of “best practices.” Highlights/ Skip to: (1:15) Currently, many LLM feature initiatives seem to be mostly driven by FOMO (2:45) UX Considerations for LLM-enhanced enterprise applications (5:14) Challenges with LLM UIs / user interfaces (7:24) Measuring improvement in UX outcomes with LLMs (10:36) Accuracy in LLMs and its relevance in enterprise software (11:28) Illustrating key considerations for implementing an LLM-based feature (19:00) Leadership and context in AI deployment (19:27) Determining UX benchmarks for using LLMs (20:14) The dynamic nature of LLM hallucinations and how we design for the unknown (21:16) Closing thoughts on Part 1 of designing for AI and LLMs Quotes from Today’s Episode “While many product teams continue to race to deploy some sort of GenAI and especially LLMs into their products—particularly in the tech sector, for commercial software companies—the general sense I’m getting is that this is still more about FOMO than anything else.” - Brian T. O’Neill (2:07) “No matter what the technology is, a good user experience design foundation starts with not doing any harm, and hopefully going beyond usable to be delightful. And adding LLM capabilities into a solution is really no different. So, we still need to have outcome-oriented thinking on both our product and design teams when deploying LLM capabilities into a solution. This is a cornerstone of good product work.” - Brian T. O’Neill (3:03) “So, challenges with LLM UIs and UXs, right, user interfaces and experiences: the most obvious challenge to me right now with large language model interfaces is that while we’ve given users tremendous flexibility in the form of a Google search-like interface, we’ve also in many cases limited the UX of these interactions to a text conversation with a machine. We’re back to the CLI in some ways.” - Brian T. O’Neill (5:14) “Before and after we insert an LLM into a user’s workflow, we need to know what an improvement in their life or work actually means.” - Brian T. O’Neill (7:24) "If it would take the machine a few seconds to process a result versus what might take a day for a worker, what’s the role and purpose of that worker going forward? I think these are all considerations that need to be made, particularly if you’re concerned about adoption, which a lot of data product leaders are." - Brian T.
O’Neill (10:17) “So, there’s no right or wrong answer here. These are all range questions, and they’re leadership questions, and context really matters. They are important to ask, particularly when we have this risk of reacting to incorrect information that looks plausible and believable because of how these LLMs tend to respond to us with a positive sheen much of the time.” - Brian T. O’Neill (19:00) Links View Part 1 of my article on UI/UX design considerations for LLMs in enterprise applications: https://designingforanalytics.com/resources/ui-ux-design-for-enterprise-llms-use-cases-and-considerations-for-data-and-product-leaders-in-2024-part-1/ (For one way to make the benchmark idea concrete in code, see the sketch below.) |
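One way to make the benchmark use cases discussed at (19:27) concrete is a small harness that re-runs a fixed set of representative user tasks against the LLM feature every time the prompt or model changes, so UX regressions become observable rather than anecdotal. The following is a minimal, library-free Python sketch; the task list, pass criterion, and `ask_llm` hook are hypothetical stand-ins, not anything prescribed in the episode.

```python
# Hypothetical benchmark-use-case harness: a fixed suite of representative user
# tasks, re-scored on every prompt or model change.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkCase:
    task: str                # what the user is trying to accomplish
    prompt: str              # the input the feature would send to the model
    must_contain: list[str]  # crude pass criterion; real evals need richer scoring

CASES = [
    BenchmarkCase("monthly spend summary",
                  "Summarize spend for March 2024 by vendor",
                  ["March", "vendor"]),
    BenchmarkCase("anomaly explanation",
                  "Why did support tickets spike last week?",
                  ["spike"]),
]

def run_benchmarks(ask_llm: Callable[[str], str]) -> float:
    """Run every case through the model hook and return the pass rate."""
    passed = 0
    for case in CASES:
        answer = ask_llm(case.prompt)
        ok = all(term.lower() in answer.lower() for term in case.must_contain)
        passed += ok
        print(f"[{'PASS' if ok else 'FAIL'}] {case.task}")
    return passed / len(CASES)

if __name__ == "__main__":
    # Stand-in model; in practice this would call the actual LLM feature.
    fake_llm = lambda p: "Spend by vendor rose in March; the ticket spike traces to a release."
    print(f"pass rate: {run_benchmarks(fake_llm):.0%}")
```

In practice the pass criterion would be richer (human review, model-graded rubrics, time-to-outcome), but even a crude fixed suite gives a team a shared, repeatable definition of “better” before and after an LLM enters the workflow.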
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design) |
|
Richie
– host
@ DataCamp
,
Robb Wilson
– Co-Founder & CEO
@ Onereach.ai
All the hype around generative AI means that every software maker seems to be stuffing chat interfaces into their products whenever they can. For the most part, the jury is still out on whether this is a good idea or not. However, design goes deeper than just the user interface, so it’s also useful to know how the design interacts with the rest of the software. Once you move beyond chatbots into things like agents, there are also thorny questions around which bits of your workflow should still be done by a human, and which bits can be completely automated. True insight in this context lies in a gray area across software, UX, and AI. Robb is an AI researcher, technologist, designer, innovator, serial entrepreneur, and author. He is a contributor to Harvard Business Review and the visionary behind OneReach.ai, the award-winning conversational artificial intelligence platform that ranked highest in Gartner's Critical Capabilities Report for Enterprise Conversational AI Platforms. He earned an Academy Award nomination for technical achievement as well as over 130 innovation, design, technology, and artificial intelligence awards, with five in 2019, including AI Company of the Year and Hot AI Technology of the Year. Robb is a pioneer in the user research and technology spaces. He founded EffectiveUI, a user experience and technology research consultancy for the Fortune 500, which was acquired by WPP and integrated into the core of Ogilvy’s digital experience practice. He also created UX Magazine, one of the first and largest XD (experience design) thought leadership communities. In the episode, Richie and Robb explore chat interfaces in software, the advantages of chat interfaces over other methods of interaction with data & AI products, geospatial vs language memory, good vs bad chat interfaces, the importance of a human in the loop, personality in chatbots, handling hallucinations and bad responses, scaling chatbots, agents vs chatbots, ethical considerations for AI and chatbots, and much more. Links Mentioned in the Show: Onereach.ai; Invisible Machines Podcast; Gartner: The Executive Guide to Hyperautomation; [Skill Track] Developing AI Applications; Related Episode: Building Human-Centered AI Experiences with Haris Butt, Head of Product Design at ClickUp; Sign up to RADAR: AI Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business |
DataFramed |
|
LSUG Formal
2024-06-11 · 17:00
Join us on Tuesday 11th June at Projective Group HQ for a formal user group meeting. We'll begin with a drinks reception on the balcony, overlooking the City of London and the Thames, followed by a presentation on how one financial services company made the wholehearted move onto Snowflake, and a catch-up on everything announced at Snowflake Summit. Agenda 6:00 PM: Balcony Drinks Reception Enjoy a free bar and catering on Projective Group HQ's balcony overlooking the Thames and the City of London 7:00 PM: How to Switch Your Bank to Snowflake Banks are notoriously big organisations, and setting up a new tech stack is no easy thing. Hear how Terry Catt implemented Snowflake at a large bank and began the move from their legacy data warehouse. 7:30 PM: Summit Catch-Up Will Riley from Snowflake Professional Services will be with us to tell us about all the excitement of Summit. Hopefully the sun will make us feel like we're still in California. Speakers Will Riley - Snowflake (EMEA Solutions Architect) Terry Catt - Projective Group (Senior Data Engineer) Hosts Christopher Marland Piers Batchelor - Astrato Analytics (Sr. Product Manager) Piers Batchelor is an award-winning data expert and product strategist, designing innovative solutions for real-world businesses. Piers' experience covers a number of industries, and he has architected numerous cloud BI products, modernising traditional data visualisation and data storytelling concepts, and re-imagining exploratory analysis with AI. He is part of the 2023 Snowflake Data Superhero coh… Partner PROJECTIVE GROUP UK (http://www.projectivegroup.com/) Established in 2006, Projective Group is a leading Financial Services change specialist, with expertise across dedicated divisions in Data (formerly DTSQUARED), Payments, Risk and Compliance, Transformation, Managed Services and Talent. Snowflake User Groups. Complete your event RSVP here: https://usergroups.snowflake.com/events/details/snowflake-london-presents-lsug-formal/. |
LSUG Formal
|
|
Taking Over Tech with Women+ in Data & AI
2024-06-06 · 16:30
Our Taking Over Tech event is back, and we are very excited to team up with the amazing Women+ in Data & AI community! At Taking Over Tech events, female tech experts take the stage to share their expertise in very specific tech areas. No fluff, pure tech and full female badassery! Agenda: 6:30pm - Arrival, drinks, food & chit-chat 7:00pm - Talk 1: Breaking Free from Stalkerware: Combating Abusive Tech and Protecting Your Data by Anna Lagutina (Senior Consultant at Thoughtworks) 7:35pm - 10 min break 7:45pm - Talk 2: Designing the user experience of AI products by Janna Lipenkova (CEO & Co-Founder, Equintel) 8:20pm - Networking Breaking Free from Stalkerware: Combating Abusive Tech and Protecting Your Data by Anna Lagutina In today's digital world, protecting your personal information and maintaining your privacy is more important than ever. Join us for a comprehensive talk on stalkerware, a growing threat that can compromise your safety and invade your privacy. Learn how to recognize signs of stalkerware on your devices, the potential risks to your data, and practical strategies to protect yourself. Discover best practices to keep your information secure and empower yourself with the knowledge to stay safe in the digital age. Designing the user experience of AI products by Dr. Janna Lipenkova Artificial Intelligence is transforming the way we develop and use software. It takes us from a world of deterministic interfaces toward more flexible, probabilistic interactions - and while these are often more powerful, they also require more judgement, critical thinking, and know-how from users. In this talk, I will cover state-of-the-art best practices and design patterns in AI UX design. We will cover topics such as how to manage failures, guide users through AI interfaces, and communicate the capabilities, but - crucially - also the limitations of an AI product or feature. The talk is based on learnings and insights that I synthesized while driving the discovery and development efforts of numerous AI projects. Sign up now! We are looking forward to seeing you :) |
Taking Over Tech with Women+ in Data & AI
|