talk-data.com

Event

Data Skeptic

2014-05-23 – 2025-11-23 · Podcasts

Activities tracked

394

The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.

Sessions & talks

Showing 76–100 of 394 · Newest first

Robust Fit to Nature

2020-06-12 Listen
podcast_episode

Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.

Black Boxes Are Not Required

2020-06-05 Listen
podcast_episode

Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as "black boxes". While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful. But does achieving "usefulness" require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)… Why Are We Using Black Box Models in AI When We Don't Need To? A Lesson From An Explainable AI Competition

Robustness to Unforeseen Adversarial Attacks

2020-05-30 Listen
podcast_episode

Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.

Estimating the Size of Language Acquisition

2020-05-22 Listen
podcast_episode

Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition.

Interpretable AI in Healthcare

2020-05-15 Listen
podcast_episode

Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

Understanding Neural Networks

2020-05-08 Listen
podcast_episode

What does it mean to understand a neural network? That's the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.

Self-Explaining AI

2020-05-02 Listen
podcast_episode

Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back and forth with the user? We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.

Plastic Bag Bans

2020-04-24 Listen
podcast_episode

Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!

Self Driving Cars and Pedestrians

2020-04-18 Listen
podcast_episode

We are joined by Arash Kalatian to discuss Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning.

Computer Vision is Not Perfect

2020-04-10 Listen
podcast_episode
Kyle Polich, Julia Evans (Wizard Zines)

Julia Evans joins us to help answer the question of why neural networks think a panda is a vulture. Kyle talks to Julia about her hands-on work fooling neural networks. Julia runs Wizard Zines, which publishes works such as Your Linux Toolbox. You can find her on Twitter @b0rk

Uncertainty Representations

2020-04-04 Listen
podcast_episode
Kyle Polich, Jessica Hullman (Northwestern University)

Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica's work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates. Homepage: http://users.eecs.northwestern.edu/~jhullman/ Lab: MU Collective

AlphaGo, COVID-19 Contact Tracing and New Data Set

2020-03-28 Listen
podcast_episode

Announcing Journal Club

I am pleased to announce Data Skeptic is launching a new spin-off show called "Journal Club" with similar themes but a very different format to the Data Skeptic everyone is used to. In Journal Club, we will have a regular panel and occasional guest panelists to discuss interesting news items and one featured journal article every week in a roundtable discussion.

Each week, I'll be joined by Lan Guo and George Kemp for a discussion of interesting data science related news articles and a featured journal or pre-print article. We hope that this podcast will give listeners an introduction to the works we cover and how people discuss these works. Our topics will often coincide with the original Data Skeptic podcast's current Interpretability theme, but we have few rules right now on what we pick. We enjoy discussing these items with each other, and we hope you will too.

In the coming weeks, we will start opening up the guest chair more often to bring new voices to our discussion. After that, we'll be looking for ways we can engage with our audience. Keep reading and thanks for listening!

Kyle

Interpretability Tooling

2020-03-13 Listen
podcast_episode

Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability.

Shapley Values

2020-03-06 Listen
podcast_episode

Kyle and Linhda discuss how Shapley Values might be a good tool for determining what makes the cut for a home renovation.
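
As a brief aside for readers new to the concept: the Shapley value of a player is its marginal contribution averaged over all orderings in which the players could be added. Below is a minimal Python sketch of that computation for a toy "home renovation" coalition game; the renovation items, dollar amounts, and synergy bonus are hypothetical, invented purely for illustration and not taken from the episode.

# Exact Shapley values for a toy coalition game: how much of the
# total home-value increase should each renovation be credited with?
# All numbers below are made up for illustration.
from itertools import permutations

players = ["kitchen", "bathroom", "paint"]

def value(coalition):
    # Assumed standalone contributions (in $1000s) plus a synergy
    # bonus when kitchen and bathroom are both renovated.
    base = {"kitchen": 25, "bathroom": 15, "paint": 5}
    total = sum(base[p] for p in coalition)
    if "kitchen" in coalition and "bathroom" in coalition:
        total += 10
    return total

def shapley_values(players, value):
    # Average each player's marginal contribution over every ordering.
    contrib = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(coalition)
            coalition.append(p)
            contrib[p] += value(coalition) - before
    return {p: c / len(orderings) for p, c in contrib.items()}

print(shapley_values(players, value))
# {'kitchen': 30.0, 'bathroom': 20.0, 'paint': 5.0} -- the credits sum
# to value(all players) = 55, the efficiency property of Shapley values.

The same averaging idea underlies SHAP-style feature attributions, where the "players" are a model's input features rather than renovations.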

Anchors as Explanations

2020-02-28 Listen
podcast_episode

We welcome back Marco Tulio Ribeiro to discuss research he has done since our original discussion on LIME. In particular, we ask the question Are Red Roses Red? and discuss how Anchors provide high precision model-agnostic explanations. Please take our listener survey.

Adversarial Explanations

2020-02-14 Listen
podcast_episode

Walt Woods joins us to discuss his paper Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness with co-authors Jack Chen and Christof Teuscher.

ObjectNet

2020-02-07 Listen
podcast_episode

Andrei Barbu joins us to discuss ObjectNet - a new kind of vision dataset. In contrast to ImageNet, ObjectNet seeks to provide images that are more representative of the types of images an autonomous machine is likely to encounter in the real world. Collecting a dataset in this way required careful use of Mechanical Turk to get Turkers to provide a corpus of images that removes some of the bias found in ImageNet. http://0xab.com/

Visualization and Interpretability

2020-01-31 Listen
podcast_episode

Enrico Bertini joins us to discuss how data visualization can be used to help make machine learning more interpretable and explainable. Find out more about Enrico at http://enrico.bertini.io/. More from Enrico with co-host Moritz Stefaner on the Data Stories podcast!

Interpretable One Shot Learning

2020-01-26 Listen
podcast_episode

We welcome Su Wang back to Data Skeptic to discuss the paper Distributional modeling on a diet: One-shot word learning from text only.

Fooling Computer Vision

2020-01-22 Listen
podcast_episode

Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person. Their attack targets the popular YOLO2 pre-trained image recognition model and is thus likely to be widely applicable.

Algorithmic Fairness

2020-01-14 Listen
podcast_episode
Aaron Roth (University of Pennsylvania), Kyle Polich

This episode includes an interview with Aaron Roth, author of The Ethical Algorithm.

Interpretability

2020-01-07 Listen
podcast_episode

Machine learning has shown a rapid expansion into every sector and industry. With increasing reliance on models and increasing stakes for the decisions of models, questions of how models actually work are becoming increasingly important to ask. Welcome to Data Skeptic Interpretability.

In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning.

Thanks to our sponsor, the Gartner Data & Analytics Summit going on in Grapevine, TX on March 23–26, 2020. Use discount code: dataskeptic.

Music: Our new theme song is #5 by Big D and the Kids Table. Incidental music by Tanuki Suit Riot.