talk-data.com

Speaker: Linhda · 14 talks · guest


Talks & appearances

14 activities · Newest first


Today's show is in two parts. First, Linhda joins us to review the episodes from Data Skeptic: Pilot Season and give her feedback on each of the topics. Second, we introduce our new segment "Orders of Magnitude". It's a statistical game show in which participants must identify the true statistic hidden in a list of statistics which are off by at least an order of magnitude. Claudia and Vanessa join as our first contestants.

Below are the sources of our questions.

Heights
https://en.wikipedia.org/wiki/Willis_Tower
https://en.wikipedia.org/wiki/Eiffel_Tower
https://en.wikipedia.org/wiki/Great_Pyramid_of_Giza
https://en.wikipedia.org/wiki/International_Space_Station

Bird Statistics
Birds in the US since 2000
Causes of Bird Mortality

Amounts of Data
Our statistics come from this post

Linhda joins Kyle today to talk through A.C.I.D. compliance (atomicity, consistency, isolation, and durability). Together, these four properties ensure that a database transaction is processed reliably, even when errors or failures occur. Kyle uses examples such as Google Sheets, bank transactions, and even the game Rummikub.

Thanks to this week's sponsors: Monday.com - Their Apps Challenge is underway and available at monday.com/dataskeptic
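To make the atomicity part concrete, here is a minimal sketch (ours, not from the episode) using Python's built-in sqlite3 module; the account names and amounts are made-up placeholders.

```python
# Minimal sketch of atomicity with Python's built-in sqlite3 module.
# The accounts and amounts are hypothetical; the point is that either both
# UPDATE statements commit together or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                      (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")  # triggers the rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # the partial debit never becomes visible to other readers

transfer(conn, "alice", "bob", 30)   # succeeds atomically
transfer(conn, "alice", "bob", 500)  # fails and rolls back as a single unit
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 70), ('bob', 80)]
```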

Brilliant - Check out their Quantum Computing course; I highly recommend it! Other interesting topics I've seen are Neural Networks and Logic. Check them out at Brilliant.org/dataskeptic

On a long car ride, Linhda and Kyle record a short episode. This discussion is about transfer learning, a technique used in machine learning to leverage training from one domain to get a head start on learning in another domain. Transfer learning has some obvious appealing features. Take the example of an image recognition problem. There are now many widely available models that do general image recognition. Detecting that an image contains a "sofa" is an impressive feat. However, for a furniture company interested in more specific details, this classifier is absurdly general. Should the furniture company build a massive corpus of tagged photos, effectively starting from scratch? Or is there a way they can transfer the learnings from the general task to the specific one? A general definition of transfer learning in machine learning is taking some or all aspects of a pre-trained model as the basis for training a new model on a specific and potentially limited dataset.
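A minimal sketch of that furniture scenario, assuming TensorFlow/Keras is available; the five furniture categories and the furniture_images / furniture_labels arrays are hypothetical placeholders rather than anything from the episode.

```python
# Transfer-learning sketch with TensorFlow/Keras (assumed installed).
import tensorflow as tf

# Reuse a general-purpose image model trained on ImageNet, minus its classifier head.
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # freeze the general features; only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 furniture categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(furniture_images, furniture_labels, epochs=5)  # small, specific dataset
```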

One Shot Learning is the class of machine learning procedures that focuses on learning from a small number of examples.  This is in contrast to "traditional" machine learning, which typically requires a very large training set to build a reasonable model. In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the newly created symbols are likely to be the same symbol, despite having extremely few examples of each.  Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large amounts of training data?  We discuss some of the reasons why, along with approaches to One Shot Learning.
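One common approach is to compare a new example against a single stored example per class in some embedding space. The sketch below is our illustration, not the episode's: it fakes the embeddings with random NumPy vectors, whereas in practice they would come from a pre-trained network such as a Siamese network.

```python
# Toy nearest-neighbour flavour of one-shot learning, using NumPy.
import numpy as np

rng = np.random.default_rng(0)
# One labelled example ("shot") per symbol class; placeholders for real embeddings.
support = {"symbol_a": rng.normal(size=64),
           "symbol_b": rng.normal(size=64),
           "symbol_c": rng.normal(size=64)}

def classify(query_embedding, support):
    """Assign the query to the class whose single stored example is most similar."""
    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(support, key=lambda label: cosine(query_embedding, support[label]))

# A new drawing of symbol_b: its embedding should land near the one stored example.
query = support["symbol_b"] + 0.1 * rng.normal(size=64)
print(classify(query, support))  # symbol_b
```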

This episode reviews the concept of the k-d tree: an efficient data structure for holding multidimensional objects. Kyle gives Linhda a dictionary and asks her to look up words as a way of introducing the concept of binary search. We actually spend most of the episode talking about binary search before getting into k-d trees, but this is a necessary prerequisite.
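As a rough sketch of the dictionary lookup, using nothing beyond plain Python: binary search repeatedly halves a sorted list, and a k-d tree applies the same halving idea to multiple dimensions by cycling through the splitting axes. The word list below is just an example.

```python
# Binary search: repeatedly halve the sorted "dictionary" until the word is
# found or ruled out.
def binary_search(sorted_words, target):
    lo, hi = 0, len(sorted_words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_words[mid] == target:
            return mid
        elif sorted_words[mid] < target:
            lo = mid + 1        # target can only be in the later half
        else:
            hi = mid - 1        # target can only be in the earlier half
    return -1                   # not present

words = ["apple", "banana", "cherry", "durian", "elderberry", "fig", "grape"]
print(binary_search(words, "durian"))  # 3
```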

There are several factors that are important in selecting an appropriate sample size and dealing with small samples. The most important questions are around representativeness: how well does your sample represent the total population and capture all of its variance? Linhda and Kyle talk through a few examples, including elections, picking an Airbnb, produce selection, and home shopping, as cases in which the number of observations one has matters more or less depending on how complex the underlying system being observed is.
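As a back-of-the-envelope illustration (ours, not the episode's), the NumPy simulation below shows how estimates from small samples bounce around far more than estimates from larger ones, assuming the sampling itself is representative; the synthetic population is made up.

```python
# How the spread of the sample mean shrinks as sample size grows (NumPy assumed).
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=50, scale=15, size=100_000)  # synthetic population

for n in [5, 50, 500]:
    means = [rng.choice(population, size=n, replace=False).mean()
             for _ in range(1_000)]
    print(f"n={n:4d}  std of the sample mean = {np.std(means):.2f}")
# Larger samples give tighter estimates, but only if the sampling is representative.
```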

More features are not always better! With an increasing number of features to consider, machine learning algorithms suffer from the curse of dimensionality: the space of possible examples grows wider while the available examples cover it ever more sparsely. This episode explores a real-life example of this as Kyle and Linhda discuss their thoughts on purchasing a home. The curse of dimensionality was defined by Richard Bellman, and applies in several slightly nuanced cases. This mini-episode discusses how it applies to machine learning. This episode does not, however, discuss a slightly different version of the curse of dimensionality which appears in decision-theoretic situations. Consider the game of chess. One must think ahead several moves in order to execute a successful strategy. However, thinking ahead another move requires a consideration of every possible move of every piece controlled, and every possible response one's opponent may take. The space of possible future states of the board grows exponentially with the horizon one wants to look ahead to. This version is present in the notably useful Bellman equation.
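For a rough feel of the machine learning version (our illustration, assuming only NumPy), the sketch below holds the number of examples fixed while increasing the number of features, and watches how far away even the nearest neighbour drifts as coverage gets sparser.

```python
# Curse of dimensionality: with a fixed number of points, nearest-neighbour
# distances grow and "near" vs "far" loses contrast as features are added.
import numpy as np

rng = np.random.default_rng(0)
for d in [2, 10, 100, 1000]:
    points = rng.uniform(size=(500, d))   # 500 examples with d features each
    query = rng.uniform(size=d)           # a new example to compare against
    dists = np.linalg.norm(points - query, axis=1)
    print(f"d={d:5d}  nearest={dists.min():7.2f}  farthest={dists.max():7.2f}")
```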

For our 50th episode we indulge a bit by cooking Linhda's previously mentioned "healthy" cornbread.  This leads to a discussion of the statistical topic of overdispersion, in which the variance of some distribution is larger than what one's underlying model will account for.
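As a quick, hypothetical illustration of what that looks like in practice (assuming NumPy), the sketch below generates counts that are clumpier than a Poisson model expects, so the sample variance greatly exceeds the sample mean.

```python
# Toy overdispersion check: under a Poisson model the variance should roughly
# equal the mean, so a much larger sample variance signals overdispersion.
import numpy as np

rng = np.random.default_rng(1)
# Counts drawn from a negative binomial, i.e. a clumpier process than Poisson.
counts = rng.negative_binomial(n=2, p=0.2, size=1_000)

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.2f}, variance = {var:.2f}, dispersion ratio = {var / mean:.2f}")
# A ratio well above 1 indicates overdispersion relative to a Poisson model.
```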

Linhda and Kyle talk about Decision Tree Learning in this mini-episode.  Decision Tree Learning is the algorithmic process of trying to generate an optimal decision tree to properly classify or forecast some future unlabeled element by following each step in the tree.
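A minimal sketch of the idea, assuming scikit-learn is installed and using its bundled iris dataset as a stand-in: it learns a small tree and then follows its steps to classify examples it has not seen.

```python
# Decision tree learning with scikit-learn (assumed installed).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree))                        # the learned sequence of tests
print("accuracy:", tree.score(X_test, y_test))  # classify previously unseen examples
```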