talk-data.com

Topic: HyperText Markup Language (HTML)

Tags: web_development, markup_language, front_end

Activity Trend: 15 peak/qtr, 2020-Q1 to 2026-Q1

Activities

Filtering by: Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

Today on the podcast, I interview AI researcher Tony Zhang about some of his recent findings on the effects that fully automated AI has on user decision-making. Tony shares lessons from his recent research comparing typical recommendation AIs with a “forward-reasoning” approach that nudges users to contribute their own reasoning, with process-oriented support that may lead to better outcomes. We’ll look at the two scenarios in his study: one where an AI-enabled interface helped pilots decide, mid-flight, on the next-best alternate airport to land at, and another where investors were asked to rebalance an ETF portfolio. The takeaway, taken right from Tony’s research, is that “going forward, we suggest that process-oriented support can be an effective framework to inform the design of both 'traditional' AI-assisted decision-making tools but also GenAI-based tools for thought.”

Highlights / Skip to:

Tony Zhang’s background (0:46)
Context for the study (4:12)
Zhang’s metrics for measuring over-reliance on AI (5:06)
Understanding the differences between the two design options that study participants were given (15:39)
How AI-enabled hints appeared for pilots in each version of the UI (17:49)
Using AI to help pilots make good decisions faster (20:15)
We look at the ETF portfolio rebalancing use case in the study (27:46)
Strategic and tactical findings that Tony took away from his study (30:47)
The possibility of commercially viable recommendations based on Tony’s findings (35:40)
Closing thoughts (39:04)

Quotes from Today’s Episode

“I wanted to keep the difference between the [recommendation & forward reasoning versions] very minimal to isolate the effect of the recommendation coming in. So, if I showed you screenshots of those two versions, they would look very, very similar. The only difference that you would immediately see is that the recommendation version is showing numbers 1, 2, and 3 for the recommended airports. These [rankings] are not present in the forward-reasoning one [airports are default sorted nearest to furthest]. This actually is a pretty profound difference in terms of the interaction or the decision-making impact that the AI has. There is this normal flight mode and forward reasoning, so that pilots are already immersed in the system and thinking with the system during normal flight. It changes the process that they are going through while they are working with the AI.” Tony (18:50 - 19:42)

“You would imagine that giving the recommendation makes your decision faster, but actually, the recommendations were not faster than the forward-reasoning one. In the forward-reasoning one, during normal flight, pilots could already prepare and have a good overview of their surroundings, giving them time to adjust to the new situation. Now, in normal flight, they don’t know what might be happening, and then suddenly, a passenger emergency happens. While for the recommendation version, the AI just comes into the situation once you have the emergency, and then you need to do this backward reasoning that we talked about initially.” Tony (21:12 - 21:58)

“Imagine reviewing code written by other people. It’s always hard because you had no idea what was going on when it was written. That was the idea behind the forward reasoning. You need to look at how people are working and how you can insert AI in a way that it seamlessly fits and provides some benefit to you while keeping you in your usual thought process. So, the way that I see it is you need to identify where the key pain points actually are in your current decision-making process and try to address those instead of just trying to solve the task entirely for users.” Tony (25:40 - 26:19)

Links

LinkedIn: https://www.linkedin.com/in/zelun-tony-zhang/
Augmenting Human Cognition With Generative AI: Lessons From AI-Assisted Decision-Making: https://arxiv.org/html/2504.03207v1

Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.

I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.

In our chat, we covered:

Ben's career studying human-computer interaction and computer science. (0:30)
'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems and why XAI matters. (30:34)
Ben's upcoming book on human-centered AI. (35:55)

Resources and Links:

People-Centered Internet: https://peoplecentered.net/
Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
Partnership on AI: https://www.partnershiponai.org/
AI incident database: https://www.partnershiponai.org/aiincidentdatabase/
University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
Ben on Twitter: https://twitter.com/benbendc

Quotes from Today’s Episode

The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)

The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)

Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)

There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)

Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)

Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI — Explainable AI — project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps — this is like Amazon’s checkout, a seven-step process — and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
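For readers unfamiliar with the post-hoc explanation methods Ben mentions: a Shapley-value explanation attributes a model's single prediction to its input features by averaging each feature's marginal contribution over all subsets of the other features. The sketch below is purely illustrative (the toy model, inputs, and baseline are not from the episode) and computes exact Shapley values by brute force, which is only feasible for a handful of features; libraries like SHAP approximate this at scale.

```python
# Illustrative sketch: exact Shapley values for a toy model (not from the episode).
from itertools import combinations
from math import factorial

def model(x):
    # Toy linear "model": prediction = 2*x0 + 1*x1 - 3*x2
    return 2 * x[0] + 1 * x[1] - 3 * x[2]

def shapley_values(model, x, baseline):
    """Attribute model(x) - model(baseline) across features of x."""
    n = len(x)
    phis = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # v(S): predict with features in S at their actual values,
                # everything else held at the baseline.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (model(with_i) - model(without_i))
        phis.append(phi)
    return phis

x = [1.0, 2.0, 0.5]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# For a linear model with a zero baseline, each phi_i is (up to floating-point
# error) the feature's own term, here roughly [2.0, 2.0, -1.5], and the values
# sum to model(x) - model(baseline).
```

Note that this produces numbers, not an interface: Ben's point is precisely that turning such attributions into a comprehensible user experience is the unsolved part.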

Today I’m chatting with Iván Herrero Bartolomé, Chief Data Officer at Grupo Intercorp. Iván describes how he was prompted to write his new article in CDO Magazine, “CDOs, Let’s Get Out of Our Comfort Zone,” as he recognized the importance of driving cultural change within organizations in order to optimize the use of data. Listen in to find out how Iván is leveraging the role of the analytics translator to drive this cultural shift, as well as the challenges and benefits he sees data leaders encounter as they move from tactical to strategic objectives. Iván also reveals the number one piece of advice he’d give CDOs who are struggling with adoption.

Highlights / Skip to:

Iván explains what prompted him to write his new article, “CDOs, Let’s Get Out of Our Comfort Zone” (01:08)
What Iván feels is necessary for data leaders to close the gap between data and the rest of the business and why (03:44)
Iván dives into who he feels really owns delivery of value when taking on new data science and analytics projects (09:50)
How Iván’s team went from managing technical projects that often didn’t make it to production to working on strategic projects that almost always make it to production (13:06)
The framework Iván has developed to upskill technical and business roles to be effective data / analytics translators (16:32)
The challenge Iván sees data leaders face as they move from setting and measuring tactical goals to moving towards strategic goals and initiatives (24:12)
Iván explains how the C-Suite’s attitude impacts the cross-functional role of data & analytics leadership (28:55)
The number one piece of advice Iván would give new CDOs struggling with low adoption of their data products and solutions (31:45)

Quotes from Today’s Episode

“We’re going to do all our best to ensure that [...] everything that is expected from us is done in the best possible way. But that’s not going to be enough. We need a sponsorship and we need someone accountable for the project and someone who will be pushing and enabling the use of the solution once we are gone. Because we cannot stay forever in every company.” – Iván Herrero Bartolomé (10:52)

“We are trying to upskill people from the business to become data translators, but that’s going to take time. Especially what we try to do is to take product owners and give them a high-level immersion on the state-of-the-art and the possibilities that data analytics bring to the table. But as we can’t rely on our companies having this kind of talent and these data translators, they are one of the profiles that we bring in for every project that we work on.” – Iván Herrero Bartolomé (13:51)

“There’s a lot to do, not just between data and analytics and the other areas of the company, but aligning the incentives of all the organization towards the same goals in a way that there’s no friction between the goals of the different areas, the people, [...] and the final goals of the organization.” – Iván Herrero Bartolomé (23:13)

“Deciding which goals are you going to be co-responsible for, I think that is a sophisticated process that it’s not mastered by many companies nowadays. That probably is one of the main blockers keeping data analytics areas working far from their business counterparts.” – Iván Herrero Bartolomé (26:05)

“When the C-suite looks at data and analytics, if they think these are just technical skills, then the data analytics team are just going to behave as technical people. And many, many data analytics teams are set up as part of the IT organization. So, I think it all begins somehow with how the C-suite of our companies look at us.” – Iván Herrero Bartolomé (28:55)

“For me, [digital] means much more than the technical development of solutions; it should also be part of the transformation of the company, both in how companies develop relationships with their customers, but also inside how every process in the companies becomes more nimble and can react faster to the changes in the market.” – Iván Herrero Bartolomé (30:49)

“When you feel that everyone else [is] not doing what you think they should be doing, think twice about whether it is they who are not doing what they should be doing or if it’s something that you are not doing properly.” – Iván Herrero Bartolomé (31:45)

Links

“CDOs, Let’s Get Out of Our Comfort Zone”: https://www.cdomagazine.tech/cdo_magazine/topics/opinion/cdos-lets-get-out-of-our-comfort-zone/article_dce87fce-2479-11ed-a0f4-03b95765b4dc.html
LinkedIn: https://www.linkedin.com/in/ivan-herrero-bartolome/