talk-data.com
Activities & events
Orders of Magnitude
2021-05-07 · 18:55
Today's show comes in two parts. First, Linhda joins us to review the episodes from Data Skeptic: Pilot Season and give her feedback on each of the topics. Second, we introduce our new segment "Orders of Magnitude". It's a statistical game show in which participants must identify the true statistic hidden in a list of statistics which are off by at least an order of magnitude. Claudia and Vanessa join as our first contestants. Below are the sources of our questions.
Heights: https://en.wikipedia.org/wiki/Willis_Tower · https://en.wikipedia.org/wiki/Eiffel_Tower · https://en.wikipedia.org/wiki/Great_Pyramid_of_Giza · https://en.wikipedia.org/wiki/International_Space_Station
Bird statistics: Birds in the US since 2000; Causes of Bird Mortality
Amounts of data: our statistics come from this post
ACID Compliance
2020-10-23 · 13:00
Kyle Polich – host, Linhda – guest
Linhda joins Kyle today to talk through A.C.I.D. compliance (atomicity, consistency, isolation, and durability). Together, these four properties ensure that a database's transactions are processed reliably. Kyle uses examples such as Google Sheets, bank transactions, and even the game Rummikub. Thanks to this week's sponsors: Monday.com – their Apps Challenge is underway and available at monday.com/dataskeptic. Brilliant – check out their Quantum Computing course; I highly recommend it! Other interesting topics I've seen are Neural Networks and Logic. Check them out at Brilliant.org/dataskeptic.
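The atomicity property can be sketched with Python's built-in sqlite3 module. The bank-transfer scenario below is a hypothetical illustration (account names and amounts are made up): either both halves of the transfer happen, or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        # Simulate a business-rule failure mid-transfer: balances may not go negative.
        (bal,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
except ValueError:
    pass  # the partial debit was rolled back automatically

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0} - both accounts unchanged
```

Without the rollback, Alice's debit would persist while Bob's credit never happened, which is exactly the half-finished state atomicity rules out.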
Listener Survey Review
2020-08-11 · 17:01
Kyle Polich – host, Linhda – guest
In this episode, Kyle and Linhda review the results of our recent listener survey. Hear all about the demographic details and how we interpret these results.
Shapley Values
2020-03-06 · 20:29
Kyle Polich – host, Linhda – guest
Kyle and Linhda discuss how Shapley values might be a good tool for determining what makes the cut for a home renovation.
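The renovation framing can be made concrete with exact Shapley values for a small cooperative game: each "player" is a renovation, and a player's Shapley value is its average marginal contribution over all orders in which the renovations could be added. The characteristic function below is a hypothetical example, not from the episode.

```python
from itertools import permutations
from math import factorial

players = ["kitchen", "bathroom", "paint"]
value = {  # value added to the home by each coalition of renovations
    frozenset(): 0,
    frozenset({"kitchen"}): 30,
    frozenset({"bathroom"}): 20,
    frozenset({"paint"}): 10,
    frozenset({"kitchen", "bathroom"}): 60,
    frozenset({"kitchen", "paint"}): 45,
    frozenset({"bathroom", "paint"}): 35,
    frozenset({"kitchen", "bathroom", "paint"}): 80,
}

def shapley(player):
    # Average the player's marginal contribution over all join orders.
    total = 0.0
    for order in permutations(players):
        before = frozenset(order[: order.index(player)])
        total += value[before | {player}] - value[before]
    return total / factorial(len(players))

values = {p: shapley(p) for p in players}
print(values)  # {'kitchen': 37.5, 'bathroom': 27.5, 'paint': 15.0}
```

Note the efficiency property: the three values sum to 80, the value of doing all renovations, so the total is divided up with nothing left over.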
Catastrophic Forgetting
2019-07-15 · 08:40
Kyle Polich – host, Linhda – guest
Kyle and Linhda discuss some high-level theory of mind and give an overview of the machine learning concept of catastrophic forgetting.
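A toy sketch of the phenomenon: a one-parameter model is trained with SGD on task A, then trained only on task B, and its error on task A jumps because the single weight is overwritten. The tasks and learning rate here are illustrative, not from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
task_a = (x, 2 * x)    # task A: y = +2x
task_b = (x, -2 * x)   # task B: y = -2x (conflicts with task A)

def sgd(w, xs, ys, lr=0.1, epochs=50):
    for _ in range(epochs):
        for xi, yi in zip(xs, ys):
            w -= lr * 2 * (w * xi - yi) * xi  # gradient step on squared error
    return w

def mse(w, xs, ys):
    return float(np.mean((w * xs - ys) ** 2))

w = sgd(0.0, *task_a)            # learn task A; w ends near +2
err_a_before = mse(w, *task_a)
w = sgd(w, *task_b)              # keep training, but only on task B
err_a_after = mse(w, *task_a)    # task A performance has been "forgotten"
print(err_a_before, err_a_after)
```

With no replay of task A's data, nothing in the update rule protects the old solution, which is the essence of catastrophic forgetting.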
The Transformer
2019-05-03 · 15:31
Kyle Polich – host, Linhda – guest
Kyle and Linhda discuss attention and the transformer: an encoder/decoder architecture that extends the basic ideas of vector embeddings like word2vec into a more contextual use case.
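The attention mechanism at the transformer's core can be sketched as scaled dot-product attention; the shapes and random values below are illustrative only.

```python
import numpy as np

def attention(Q, K, V):
    # Each query scores every key, and the softmax of those scores
    # weights a mixture of the values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 query positions, embedding dimension 4
K = rng.normal(size=(5, 4))  # 5 key/value positions
V = rng.normal(size=(5, 4))
out, w = attention(Q, K, V)
print(out.shape)  # (3, 4): each query position gets a weighted mix of values
```

This is what makes the embeddings contextual: the output for each position depends on every other position, weighted by learned relevance.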
word2vec
2019-02-01 · 16:00
Linh Da Tran – guest, Kyle Polich – host
Word2vec is an unsupervised machine learning model which is able to capture semantic information from the text it is trained on. The model is based on neural networks. Several large organizations like Google and Facebook have trained word embeddings (the result of word2vec) on large corpora and shared them for others to use. One of the key algorithmic ideas in word2vec is the continuous bag-of-words model (CBOW). In this episode, Kyle uses excerpts from the 1983 cinematic masterpiece WarGames and challenges Linhda to guess a word Kyle leaves out of the transcript. This is similar to how word2vec is trained: it trains a neural network to predict a hidden word based on the words that appear before and after the missing location.
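The guessing game maps directly onto how CBOW training pairs are built: for each position, the surrounding words form the context and the hidden center word is the prediction target. A minimal sketch, using a famous WarGames line:

```python
def cbow_pairs(tokens, window=2):
    # For each position, pair the surrounding context words with the
    # hidden center word the model must predict.
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]
        pairs.append((context, target))
    return pairs

sentence = "shall we play a game".split()
for context, target in cbow_pairs(sentence):
    print(context, "->", target)
# e.g. ['shall', 'we', 'a', 'game'] -> 'play'
```

The actual model feeds these pairs into a shallow neural network; the learned hidden-layer weights become the word embeddings.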
Transfer Learning
2018-06-15 · 15:00
Kyle Polich – host, Linhda – guest
On a long car ride, Linhda and Kyle record a short episode. This discussion is about transfer learning, a technique used in machine learning to leverage training from one domain to get a head start on learning in another domain. Transfer learning has some obviously appealing features. Take the example of an image recognition problem. There are now many widely available models that do general image recognition. Detecting that an image contains a "sofa" is an impressive feat. However, for a furniture company interested in more specific details, this classifier is absurdly general. Should the furniture company build a massive corpus of tagged photos, effectively starting from scratch? Or is there a way they can transfer the learnings from the general task to the specific one? A general definition of transfer learning in machine learning is taking some or all aspects of a pre-trained model as the basis to begin training a new model with a specific and potentially limited dataset.
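A toy sketch of the pattern: freeze a "pretrained" feature extractor and train only a small new head on the limited task-specific data. Here a fixed random projection stands in for the general image model, and the dataset is synthetic; both are placeholders for the furniture company's real model and labels.

```python
import numpy as np

rng = np.random.default_rng(0)
W_pretrained = rng.normal(size=(16, 8))  # frozen "pretrained" weights, never updated

def features(x):
    # The frozen extractor: in real transfer learning this would be the
    # body of a large pre-trained network.
    return np.maximum(x @ W_pretrained, 0)

# Tiny task-specific dataset (e.g. the furniture company's labeled photos).
X = rng.normal(size=(40, 16))
y = (X[:, 0] > 0).astype(float)

# Train only the new head, by least squares on the frozen features.
F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

preds = (features(X) @ head > 0.5).astype(float)
print("training accuracy of the new head:", (preds == y).mean())
```

Because only the 8-parameter head is trained, 40 examples can suffice where training the whole model from scratch could not.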
The Theory of Formal Languages
2018-04-06 · 15:00
Linh Da Tran – guest, Kyle Polich – host
In this episode, Kyle and Linhda discuss the theory of formal languages. Any language can (theoretically) be a formal language. The requirement is that the language can be rigorously described as a set of strings which are considered part of the language. Those strings are any combination of alphabet characters in the given language.
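A minimal sketch of the definition: a formal language is just a set of strings over an alphabet, and membership can be tested mechanically. The example language (binary strings with no two consecutive 1s) is my own illustration, described here by a regular expression.

```python
import re

# The infinite language over alphabet {0, 1} of strings containing
# no "11" substring, rigorously described by a regular expression.
language = re.compile(r"(0|10)*1?")

def in_language(s):
    return language.fullmatch(s) is not None

for s in ["", "0101", "10", "11", "0110"]:
    print(repr(s), in_language(s))
```

Because the description is rigorous, every string is unambiguously in or out of the language, which is exactly what makes the language formal.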
[MINI] One Shot Learning
2017-09-22 · 15:00
Kyle Polich – host, Linhda – guest
One-shot learning is the class of machine learning procedures that focuses on learning something from a small number of examples. This is in contrast to "traditional" machine learning, which typically requires a very large training set to build a reasonable model. In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the newly created symbols are likely to be the same symbol, despite having extremely few examples of each. Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large training data? We discuss some of the reasons why, and approaches to one-shot learning.
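One common baseline for one-shot classification is nearest-neighbor matching against a single stored example per class. The symbols and 2-d feature vectors below are hypothetical illustrations.

```python
import numpy as np

prototypes = {  # a single example ("shot") per symbol
    "circle": np.array([1.0, 0.0]),
    "square": np.array([0.0, 1.0]),
}

def classify(x):
    # A new item gets the label of the nearest stored example.
    return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

print(classify(np.array([0.9, 0.2])))  # -> 'circle'
```

The catch is that this only works when the feature space already places similar symbols close together, which is why much one-shot learning research focuses on learning good similarity metrics rather than classifiers.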
[MINI] k-d trees
2016-02-05 · 15:13
Kyle Polich – host, Linhda – guest
This episode reviews the concept of k-d trees: an efficient data structure for holding multidimensional objects. Kyle gives Linhda a dictionary and asks her to look up words as a way of introducing the concept of binary search. We actually spend most of the episode talking about binary search before getting into k-d trees, but this is a necessary prerequisite.
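The dictionary-lookup game is binary search: repeatedly halving a sorted list. A minimal sketch with Python's built-in bisect module (the word list is illustrative):

```python
import bisect

words = sorted(["apple", "banana", "cherry", "date", "elderberry", "fig"])

def contains(word):
    # bisect_left halves the search range each step, like flipping to the
    # middle of a dictionary and deciding which half to keep.
    i = bisect.bisect_left(words, word)
    return i < len(words) and words[i] == word

print(contains("cherry"), contains("grape"))  # True False
```

A k-d tree generalizes the same idea to multiple dimensions by cycling through the axes, splitting on a different coordinate at each level of the tree.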
[MINI] Sample Sizes
2015-09-18 · 06:47
Kyle Polich – host, Linhda – guest
There are several factors that are important to selecting an appropriate sample size and dealing with small samples. The most important questions are around representativeness: how well does your sample represent the total population and capture all its variance? Linhda and Kyle talk through a few examples, including elections, picking an Airbnb, produce selection, and home shopping, as cases in which the number of observations one has is more or less important depending on how complex the underlying system being observed is.
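One quantitative handle on sample size is the standard error of a sample mean, which shrinks with the square root of n; the standard deviation below is an illustrative placeholder.

```python
import math

def standard_error(sd, n):
    # Uncertainty in a sample mean: population sd over sqrt(sample size).
    return sd / math.sqrt(n)

for n in (10, 100, 10_000):
    print(n, standard_error(15.0, n))
```

The square root is why quadrupling a sample only halves the uncertainty, and why, for highly variable or complex systems, small samples can be badly unrepresentative.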
[MINI] The Curse of Dimensionality
2015-06-26 · 07:01
Kyle Polich – host, Linhda – guest
More features are not always better! With an increasing number of features to consider, machine learning algorithms suffer from the curse of dimensionality: they face a wider space and often sparser coverage of examples. This episode explores a real-life example of this as Kyle and Linhda discuss their thoughts on purchasing a home. The curse of dimensionality was defined by Richard Bellman and applies in several slightly nuanced cases. This mini-episode discusses how it applies to machine learning. This episode does not, however, discuss a slightly different version of the curse of dimensionality which appears in decision-theoretic situations. Consider the game of chess. One must think ahead several moves in order to execute a successful strategy. However, thinking ahead another move requires a consideration of every possible move of every piece controlled, and every possible response one's opponent may take. The space of possible future states of the board grows exponentially with the horizon one wants to look ahead to. This is present in the notably useful Bellman equation.
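The sparser-coverage effect can be seen directly in a small simulation: for points drawn uniformly from the unit hypercube, the fraction lying within distance 0.5 of the center collapses as the number of features grows (the dimensions and sample size are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
fracs = []
for d in (1, 2, 5, 10):
    points = rng.uniform(size=(100_000, d))
    # Fraction of points inside the ball of radius 0.5 around the center.
    near = np.linalg.norm(points - 0.5, axis=1) < 0.5
    fracs.append(float(near.mean()))
    print(d, round(fracs[-1], 4))
```

In 2 dimensions about 79% of points are "near" the center (the area ratio pi/4); by 10 dimensions almost none are, so any fixed dataset covers the feature space ever more sparsely.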
Kyle Polich – host, Linhda – guest
Linhda and Kyle review a New York Times article titled How Your Hometown Affects Your Chances of Marriage. This article explores research about what correlates with the likelihood of being married by age 26, by county. Kyle and Linhda discuss some of the fine points of this research and the process of identifying factors for consideration.
[MINI] Cornbread and Overdispersion
2015-04-24 · 07:19
Kyle Polich – host, Linhda – guest
For our 50th episode we indulge a bit by cooking Linhda's previously mentioned "healthy" cornbread. This leads to a discussion of the statistical topic of overdispersion, in which the variance of some distribution is larger than what one's underlying model will account for.
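Overdispersion can be simulated directly: a Poisson model forces variance equal to the mean, but count data generated by mixing Poisson rates (a negative-binomial-style draw) has variance well above its mean. The rates below are illustrative, not from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plain Poisson counts: variance should roughly equal the mean.
poisson = rng.poisson(lam=5.0, size=100_000)

# Mixing the rate itself (gamma-distributed lambda, same mean of 5)
# inflates the variance beyond what a Poisson model accounts for.
overdispersed = rng.poisson(rng.gamma(shape=2.0, scale=2.5, size=100_000))

for name, x in [("poisson", poisson), ("overdispersed", overdispersed)]:
    print(name, "mean:", round(x.mean(), 2), "var:", round(x.var(), 2))
```

The ratio of variance to mean (the dispersion index) is near 1 for the Poisson sample and well above 1 for the mixture, which is the signature of overdispersion.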
Economic Modeling and Prediction, Charitable Giving, and a Follow Up with Peter Backus
2014-12-19 · 08:01
Kyle Polich – host, Peter Backus – Economist
Economist Peter Backus joins me in this episode to discuss a few interesting topics. You may recall Linhda and I previously discussed his paper "The Girlfriend Equation" on a recent mini-episode. We start by touching base on this fun paper and get a follow-up on where Peter stands, years after writing it, w.r.t. a successful romantic union. Additionally, we delve into some fascinating economics topics. We touch on questions of the role models played, for better or for worse, in the ~2008 economic crash, statistics in economics and the difficulty of measurement, and some insightful discussion about the economics of charities. Peter encourages listeners to be open to giving money to charities that are good at fundraising, and his argument follows a (for me) surprisingly insightful logic. Lastly, we have a teaser of some of Peter's upcoming work using unconventional data sources. For his benevolent recommendation, Peter recommended the book The Conquest of Happiness by Bertrand Russell, and for his self-serving recommendation, follow Peter on Twitter at @Awesomnomics.
[MINI] The Girlfriend Equation
2014-11-28 · 08:03
Kyle Polich – host, Peter Backus – Economist
Economist Peter Backus put forward "The Girlfriend Equation" while working on his PhD: a probabilistic model attempting to estimate the likelihood of him finding a girlfriend. In this mini-episode we explore the soundness of his model and also share some stories about how Linhda and Kyle met.
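The structure of such a model is a Drake-equation-style chain of filtering fractions multiplied against a base population. All numbers below are hypothetical placeholders, not Backus's published figures.

```python
population = 60_000_000  # hypothetical national population
fractions = {            # hypothetical filtering fractions
    "women": 0.5,
    "living nearby": 0.13,
    "age-appropriate": 0.20,
    "university-educated": 0.26,
    "mutually attractive": 0.05,
}

# Each filter multiplies down the pool of candidates.
candidates = population
for name, f in fractions.items():
    candidates *= f
print(round(candidates))
```

The episode's soundness discussion hinges on exactly this structure: the fractions are treated as independent, and small errors in any of them multiply through the whole estimate.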
[MINI] Decision Tree Learning
2014-09-05 · 07:49
Kyle Polich – host, Linhda – guest
Linhda and Kyle talk about decision tree learning in this mini-episode. Decision tree learning is the algorithmic process of trying to generate an optimal decision tree to properly classify or forecast some future unlabeled element by following each step in the tree.
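The core of most tree-learning algorithms is the split criterion: at each node, pick the feature whose split yields the largest information gain. A minimal sketch on a hypothetical toy weather dataset:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label distribution, in bits.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    # How much splitting on this feature reduces label entropy.
    gain = entropy(labels)
    for value in set(r[feature] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[feature] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

rows = [
    {"outlook": "sunny", "windy": False}, {"outlook": "sunny", "windy": True},
    {"outlook": "rainy", "windy": False}, {"outlook": "rainy", "windy": True},
]
labels = ["yes", "no", "yes", "no"]  # whether we play outside
best = max(["outlook", "windy"], key=lambda f: information_gain(rows, labels, f))
print(best)  # -> 'windy': it perfectly separates the labels here
```

Applying this choice recursively to each resulting subset is, in essence, the ID3 family of decision tree learners.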
[MINI] Ant Colony Optimization
2014-08-08 · 13:00
Linh Da Tran – guest, Kyle Polich – host
In this week's mini-episode, Linhda and Kyle discuss Ant Colony Optimization: a numerical/stochastic optimization technique which models its search after the process ants employ, using random walks to find a goal (food) and then leaving a pheromone trail on their walk back to the nest. We even find a way of relating the city of San Francisco and running a restaurant into the discussion.
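The pheromone feedback loop can be caricatured with a deterministic "mean-field" sketch (the stochastic per-ant walks are averaged out): ants split between two routes in proportion to pheromone, shorter routes earn more pheromone per ant, and evaporation erodes the rest. Routes and constants are illustrative only.

```python
lengths = {"short": 1.0, "long": 3.0}      # route lengths to the food
pheromone = {"short": 1.0, "long": 1.0}    # start with no preference

for _ in range(100):
    total = sum(pheromone.values())
    # Ants choose routes in proportion to current pheromone levels.
    shares = {r: pheromone[r] / total for r in pheromone}
    for r in pheromone:
        deposit = shares[r] * (1.0 / lengths[r])  # shorter route => more deposit
        pheromone[r] = 0.9 * pheromone[r] + deposit  # 10% evaporation per step

print(max(pheromone, key=pheromone.get))  # -> 'short'
```

The positive feedback (more pheromone attracts more ants, which deposit more pheromone) plus evaporation is what lets the colony converge on the short route without any ant knowing the map.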