talk-data.com

Event

Data Skeptic

2014-05-23 – 2025-11-23 · Podcasts

Activities tracked

6

The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.

Filtering by: Analytics

Sessions & talks

Showing 1–6 of 6 · Newest first


Face Mask Sentiment Analysis

2020-11-27 · Podcast episode
Kyle Polich, Jonathan Lai (University of Rochester), Jiebo Luo (University of Rochester), Neil Yeung (University of Rochester)

As the COVID-19 pandemic continues, the public (or at least those with Twitter accounts) are sharing their personal opinions about mask-wearing via Twitter. What does this data tell us about public opinion? How does it vary by demographic? What, if anything, can make people change their minds? Today we speak to Neil Yeung and Jonathan Lai, undergraduate students in the Department of Computer Science at the University of Rochester, and Professor of Computer Science Jiebo Luo to discuss their recent paper, Face Off: Polarized Public Opinions on Personal Face Mask Usage during the COVID-19 Pandemic.

Works mentioned: https://arxiv.org/abs/2011.00336

Emails: Neil Yeung [email protected] · Jonathan Lai [email protected] · Jiebo Luo [email protected]

Thanks to our sponsors! Springboard School of Data offers a comprehensive career program encompassing data science, analytics, engineering, and machine learning. All courses are online and tailored to fit the lifestyle of working professionals. Up to 20 Data Skeptic listeners will receive $500 scholarships. Apply today at springboard.com/dataskeptic. Check out Brilliant's group theory course to learn about object-oriented design! Brilliant is great for learning something new or getting an easy-to-look-at review of something you already know. Check them out at Brilliant.org/dataskeptic to get 20% off of a year of Brilliant Premium!
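The episode centers on sentiment analysis of mask-related tweets. As a rough illustration of the general approach (not the authors' pipeline from the paper), the sketch below scores a few hypothetical tweets with NLTK's VADER lexicon; the example texts and the compound-score thresholds are assumptions for illustration only.

```python
# Minimal sketch of lexicon-based tweet sentiment scoring (illustrative only;
# not the method from the paper). Requires: pip install nltk
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

# Hypothetical example tweets about mask-wearing
tweets = [
    "Wearing a mask is a small price to pay to keep others safe.",
    "Masks are useless and I'm tired of being told to wear one.",
    "Picked up new masks today, the blue ones are comfy.",
]

sia = SentimentIntensityAnalyzer()
for text in tweets:
    scores = sia.polarity_scores(text)  # dict with neg / neu / pos / compound
    # Commonly used (assumed) cutoffs on the compound score:
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```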

Interpretability

2020-01-07 · Podcast episode

Machine learning has shown a rapid expansion into every sector and industry. With increasing reliance on models and increasing stakes for the decisions of models, questions of how models actually work are becoming increasingly important to ask. Welcome to Data Skeptic: Interpretability. In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning.

Thanks to our sponsor, the Gartner Data & Analytics Summit going on in Grapevine, TX on March 23 – 26, 2020. Use discount code: dataskeptic.

Music: Our new theme song is #5 by Big D and the Kids Table. Incidental music by Tanuki Suit Riot.
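Molnar's book covers model-agnostic interpretability methods such as permutation feature importance. As a hedged illustration of that idea (my own example, not something specific from the episode), the sketch below applies scikit-learn's permutation_importance to a toy model; the dataset and model choice are assumptions.

```python
# Illustrative sketch of one model-agnostic interpretability method:
# permutation feature importance. Dataset and model are arbitrary choices.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out performance drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, mean_drop in ranked:
    print(f"{name:10s} {mean_drop:.4f}")
```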

Data Ethics

2018-12-07 · Podcast episode

This week, Kyle interviews Scott Nestler on the topic of Data Ethics. Today, no ubiquitous, formal ethical protocol exists for data science, although some have been proposed. One example is the INFORMS Ethics Guidelines. Guidelines like this are rather informal compared to those of other professions, such as medicine's Hippocratic Oath. Yet not every profession requires such a formal commitment. In this episode, Scott shares his perspective on a variety of ethical questions specific to data and analytics.

The Computational Complexity of Machine Learning

2017-11-03 · Podcast episode
Kyle Polich, Michael Kearns (University of Pennsylvania)

In this episode, Professor Michael Kearns from the University of Pennsylvania joins host Kyle Polich to talk about the computational complexity of machine learning, complexity in game theory, and algorithmic fairness. Michael's doctoral thesis gave an early broad overview of computational learning theory, emphasizing the mathematical study of efficient learning algorithms by machines or computational systems.

Machine learning algorithms are almost like meta-algorithms in some sense: given some data, a learning algorithm builds a model, and it will presumably behave very differently under different inputs. Does that mean we need new analytical tools? Or is a machine learning algorithm just like any deterministic algorithm, only a bit trickier to analyze from a complexity standpoint? In other words, how much overlap is there between the good old-fashioned analysis of algorithms and the complexity analysis of machine learning algorithms? And what is the difference between strategies for determining complexity bounds on samples versus on algorithms?

A big area Michael and Kyle discuss, both in machine learning and in the analysis of learning algorithms in general, is complexity regularization. Complexity regularization asks: how should one measure the goodness of fit and the complexity of a given model, how should the two be balanced, and how can that balance be computed in a scalable, efficient way? From this, Michael and Kyle discuss the broader picture of why one should care whether a learning algorithm is efficiently learnable, that is, learnable in polynomial time. Another interesting topic of discussion is the difference between sample complexity and computational complexity. An active area of research is how one should regularize models so that complexity is balanced against goodness of fit on a large training sample.

As mentioned, a good resource for getting started with correlated equilibria is: https://www.cs.cornell.edu/courses/cs684/2004sp/feb20.pdf

Thanks to our sponsors: Mendoza College of Business - get your Master of Science in Business Analytics from Notre Dame. brilliant.org - a fun, affordable, online learning tool. Check out their Computer Science Algorithms course.
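Complexity regularization trades goodness of fit against a penalty on model complexity. As a rough sketch of that trade-off (my own illustration, not material from the episode), the example below fits ridge regression at several penalty strengths and compares training fit, held-out fit, and coefficient size; the synthetic dataset and the grid of alpha values are assumptions.

```python
# Illustrative sketch of complexity regularization: minimize a penalized objective,
#   (1/n) * sum_i (y_i - f(x_i))^2  +  alpha * ||w||^2,
# at several penalty strengths. Dataset and alpha grid are arbitrary.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in [0.001, 0.1, 1.0, 10.0, 100.0]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    fit = model.score(X_train, y_train)    # goodness of fit on training data
    gen = model.score(X_test, y_test)      # generalization on held-out data
    size = np.linalg.norm(model.coef_)     # crude proxy for model complexity
    print(f"alpha={alpha:7.3f}  train R^2={fit:.3f}  test R^2={gen:.3f}  ||w||={size:.1f}")
```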

Predictive Policing

2016-06-24 · Podcast episode
Kyle Polich, Kristian Lum (HRDAG, the Human Rights Data Analysis Group)

Kristian Lum (@KLdivergence) joins me this week to discuss her work at @hrdag on predictive policing. We also discuss Multiple Systems Estimation, a technique for inferring statistical information about a population from separate sources of observation. If you enjoy this discussion, check out the panel Tyranny of the Algorithm? Predictive Analytics & Human Rights, which was mentioned in the episode.
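Multiple Systems Estimation generalizes capture-recapture: the overlap between independently collected lists of the same population is used to estimate how many cases none of the lists observed. As a hedged, two-list illustration (the simplest case, not HRDAG's full methodology), the sketch below applies the Lincoln-Petersen estimator to made-up list sizes and overlap.

```python
# Two-list capture-recapture (Lincoln-Petersen), the simplest instance of
# Multiple Systems Estimation. All numbers below are made up for illustration.
def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimate total population size from two independent lists.

    n1, n2  : number of cases documented by each source
    overlap : cases appearing on both lists
    Assumes independent sources and equal capture probability per case.
    """
    if overlap == 0:
        raise ValueError("Estimator is undefined with no overlap between lists.")
    return n1 * n2 / overlap

# Hypothetical example: two organizations document 400 and 300 cases,
# with 60 cases appearing in both datasets.
estimate = lincoln_petersen(400, 300, 60)
documented = 400 + 300 - 60
print(f"Estimated total population:   {estimate:.0f}")       # 2000
print(f"Estimated undocumented cases: {estimate - documented:.0f}")
```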

Video Game Analytics

2015-06-19 · Podcast episode

This episode discusses video game analytics with guest Anders Drachen. The way in which people get access to games, and the opportunity for game designers to ask interesting questions with data, has changed quite a bit in the last two decades. Anders shares his insights about the past, present, and future of game analytics. We explore not only some of the innovations and interesting ways of examining user experience in the gaming industry, but also touch on some of the exciting opportunities for innovation that are right on the horizon. You can find more from Anders online at andersdrachen.com and follow him on Twitter @andersdrachen.