Word2vec is an unsupervised machine learning model that captures semantic information from the text it is trained on. The model is based on neural networks. Several large organizations like Google and Facebook have trained word embeddings (the result of word2vec) on large corpora and shared them for others to use. The key algorithmic idea involved in word2vec is the continuous bag of words (CBOW) model. In this episode, Kyle uses excerpts from the 1983 cinematic masterpiece War Games and challenges Linhda to guess a word Kyle leaves out of the transcript. This is similar to how word2vec is trained: it trains a neural network to predict a hidden word based on the words that appear before and after the missing location.
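To make the CBOW idea concrete, here is a minimal training sketch in Python (a toy corpus and invented variable names, not the episode's code): the network averages the embeddings of the surrounding words and learns to score the hidden word highest among all vocabulary words.

```python
import numpy as np

# Toy CBOW sketch: predict a hidden center word from the average of its context embeddings.
corpus = "shall we play a game how about global thermonuclear war".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 16                      # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))          # input (context) embeddings
W_out = rng.normal(0, 0.1, (D, V))         # output (prediction) weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr, window = 0.05, 2
for epoch in range(200):
    for c in range(window, len(corpus) - window):
        context = [idx[corpus[j]] for j in range(c - window, c + window + 1) if j != c]
        target = idx[corpus[c]]
        h = W_in[context].mean(axis=0)     # average the context embeddings
        p = softmax(h @ W_out)             # predicted distribution over the vocabulary
        dp = p.copy()
        dp[target] -= 1.0                  # cross-entropy gradient at the output
        W_out -= lr * np.outer(h, dp)
        dh = W_out @ dp
        for j in context:                  # context words share the input gradient
            W_in[j] -= lr * dh / len(context)

# After training, the rows of W_in serve as the word embeddings.
ctx = [idx[w] for w in ("we", "play", "game", "how")]   # context around the hidden word "a"
print("predicted hidden word:", vocab[int(np.argmax(softmax(W_in[ctx].mean(axis=0) @ W_out)))])
```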
Linh Da Tran
Talks & appearances: 10 activities, newest first
In mathematics, truth is universal. In data, truth lies in the where clause of the query. As large organizations have grown to rely on their data more significantly for decision making, a common problem is not being able to agree on what the data is. As the volume and velocity of data grow, challenges emerge in answering questions with precision. A simple question like "what was the revenue yesterday" could become mired in details. Did your query account for transactions that haven't been finalized? If I query again later, should I exclude orders that have been returned since the last query? What time zone should I use? The list goes on and on. In any large enough organization, you are also likely to find multiple copies of the same data. Independent systems might record the same information with slight variations. Sometimes systems will import data from other systems, a process which can fall out of sync for several reasons. For any sufficiently large system, answering analytical questions with precision can become a non-trivial challenge. The business intelligence community aspires to provide a "single source of truth" - one canonical place where data consumers can go to get precise, reliable, and trusted answers to their analytical questions.
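To see how much the where clause moves the answer, here is a small sketch using Python's built-in sqlite3 module (the orders table, its columns, and its values are invented for illustration):

```python
import sqlite3

# Three different answers to the same question: "what was the revenue yesterday?"
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (amount REAL, status TEXT, ordered_at TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (100.0, "finalized", "2018-05-14 09:00:00"),
    (250.0, "pending",   "2018-05-14 17:30:00"),
    (-40.0, "returned",  "2018-05-14 11:15:00"),  # a return booked as negative revenue
])

predicates = {
    "all rows":          "AND 1 = 1",
    "finalized only":    "AND status = 'finalized'",
    "excluding returns": "AND status != 'returned'",
}
for label, predicate in predicates.items():
    q = f"SELECT SUM(amount) FROM orders WHERE date(ordered_at) = '2018-05-14' {predicate}"
    print(label, "->", con.execute(q).fetchone()[0])
# Each definition of "truth" yields a different number: 310.0, 100.0, and 350.0.
```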
Thanks to our sponsor Galvanize. A Kalman Filter is a technique for taking a sequence of observations about an object or variable and determining the most likely current state of that object. In this episode, we discuss it in the context of tracking our lilac crowned amazon parrot Yoshi. Kalman filters have many applications, but the one of particular interest under our current theme of artificial intelligence is to efficiently update one's beliefs in light of new information. The Kalman filter is based upon the Gaussian distribution. This distribution is described by two parameters: the mean (μ) and the standard deviation (σ). The procedure for updating these values in light of new information has a closed form. This means that it can be described with straightforward formulae and computed very efficiently. You may gain a greater appreciation for Kalman filters by considering what would happen if you could not rely on the Gaussian distribution to describe your posterior beliefs. If determining the probability distribution over the variables describing some object cannot be efficiently computed, then, by definition, maintaining the most up-to-date posterior beliefs can be a significant challenge. Kyle will be giving a talk at SkeptiCal 2018 in Berkeley, CA on June 10.
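As a concrete illustration of that closed-form update, here is a minimal one-dimensional sketch in Python (the measurement values and noise level are invented; a full filter tracking Yoshi would also include a motion/prediction step):

```python
# Update a Gaussian belief (mean, variance) with one noisy measurement, in closed form.
def kalman_update(mean, var, measurement, meas_var):
    k = var / (var + meas_var)          # Kalman gain: how much to trust the measurement
    new_mean = mean + k * (measurement - mean)
    new_var = (1 - k) * var             # new information always shrinks the variance
    return new_mean, new_var

belief = (0.0, 100.0)                   # vague prior over, say, Yoshi's position
for z in [4.8, 5.2, 5.0, 4.9]:          # invented noisy observations
    belief = kalman_update(*belief, z, meas_var=1.0)
    print("mean=%.3f var=%.3f" % belief)
```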
Thanks to our sponsor The Great Courses. This week's episode is a short primer on game theory. For tickets to the free Data Skeptic meetup in Chicago on Tuesday, May 15 at the Mendoza College of Business (224 South Michigan Avenue, Suite 350), click here.
In this episode, Kyle and Linhda discuss the theory of formal languages. Any language can (theoretically) be a formal language. The requirement is that the language can be rigorously described as a set of strings which are considered part of the language. Each such string is a finite sequence of symbols drawn from the language's alphabet.
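As a toy illustration (my own example, not one from the episode), here is a formal language specified two ways in Python: a regular language via a pattern, and the classic non-regular language { a^n b^n } via an explicit membership test. In both cases the language is just a set of strings over the alphabet {a, b}:

```python
import re

ALPHABET = {"a", "b"}

def in_regular_lang(s: str) -> bool:
    """Membership in the regular language a*b*: any number of a's, then any number of b's."""
    return re.fullmatch(r"a*b*", s) is not None

def in_anbn(s: str) -> bool:
    """Membership in { a^n b^n : n >= 0 }, a classic non-regular formal language."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

for s in ["", "aabb", "aab", "ba"]:
    assert set(s) <= ALPHABET            # strings are built only from the alphabet
    print(repr(s), in_regular_lang(s), in_anbn(s))
```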
What's the best machine learning algorithm to use? I hear that XGBoost wins most of the Kaggle competitions that aren't won with deep learning. Should I just use XGBoost all the time? That might work out most of the time in practice, but a proof exists which tells us that there cannot be one true algorithm to rule them all.
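The proof alluded to is the No Free Lunch theorem. As a sketch of the formal statement (following Wolpert and Macready's 1997 notation, not anything stated in the episode): for any two algorithms $a_1$ and $a_2$,

$$\sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)$$

where the sum ranges over all possible objective functions $f$ and $d_m^y$ is the sequence of cost values observed after $m$ evaluations. Averaged over every conceivable problem, no algorithm - XGBoost included - outperforms any other.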
In many real world situations, a person/agent doesn't necessarily know their own objectives or the mechanics of the world they're interacting with. However, if the agent receives rewards which are correlated with both their actions and the state of the world, then reinforcement learning can be used to discover behaviors that maximize the reward earned.
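As a minimal sketch of that idea (a made-up two-state world, not anything discussed in the episode), tabular Q-learning lets an agent that knows nothing about the world's mechanics learn reward-maximizing behavior purely from experienced rewards:

```python
import random

# Hidden mechanics of a tiny made-up world: states 0/1, actions 0/1.
# The agent never sees these rules; it only observes rewards and next states.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 0.0                    # move toward the rewarding state
    if state == 1 and action == 0:
        return 1, 1.0                    # stay there and collect the reward
    return 0, 0.0

Q = [[0.0, 0.0], [0.0, 0.0]]             # Q[state][action]: estimated long-run value
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration rate
state = 0
for t in range(5000):
    if random.random() < eps:
        action = random.randrange(2)     # occasionally explore at random
    else:
        action = max((0, 1), key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    # Q-learning update: nudge toward reward plus discounted best future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

print("learned policy:", [max((0, 1), key=lambda a: Q[s][a]) for s in (0, 1)])
# Expected: action 1 in state 0 (go to the rewarding state), action 0 in state 1 (stay).
```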
Formally, an MDP is defined as the tuple (S, A, T, R) containing a set of states, a set of actions, a transition function, and a reward function. This podcast examines each of these and presents them in the context of simple examples. Despite MDPs suffering from the curse of dimensionality, they're a useful formalism and a basic concept we will expand on in future episodes.
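To make the tuple concrete, here is an invented two-state MDP written out as (S, A, T, R), with value iteration run on top of it; unlike the model-free learner sketched above, this computation is possible only because T and R are known:

```python
# An invented MDP spelled out as its defining tuple (S, A, T, R).
S = ["home", "work"]
A = ["stay", "move"]
T = {  # T[s][a] -> list of (next_state, probability)
    "home": {"stay": [("home", 1.0)], "move": [("work", 0.9), ("home", 0.1)]},
    "work": {"stay": [("work", 1.0)], "move": [("home", 0.9), ("work", 0.1)]},
}
R = {  # R[s][a] -> immediate reward
    "home": {"stay": 0.0, "move": 0.0},
    "work": {"stay": 1.0, "move": 0.0},
}

gamma = 0.9
V = {s: 0.0 for s in S}
for _ in range(100):
    # Value iteration: V(s) = max_a [ R(s,a) + gamma * sum_s' T(s,a,s') * V(s') ]
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a]) for a in A)
         for s in S}
print(V)  # converges toward V(work) = 10.0 and V(home) ≈ 8.9
```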
In artificial intelligence, the term 'agent' is used to mean an autonomous entity with the ability to perceive and interact with its environment. An agent could be a person or a piece of software. In either case, we can describe aspects of the agent in a standard framework.
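One common way to render that standard framework in code (a generic sketch, not the episode's own definition) is an interface that maps percepts to actions; both a person and a piece of software fit the same shape:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Generic agent: repeatedly perceives its environment and chooses an action."""
    @abstractmethod
    def act(self, percept):
        """Map the latest percept to an action."""

class ThermostatAgent(Agent):
    """A deliberately simple software agent reacting to a temperature percept."""
    def __init__(self, target):
        self.target = target
    def act(self, percept):
        return "heat" if percept < self.target else "idle"

agent = ThermostatAgent(target=20.0)
for temp in [18.5, 21.0]:
    print(temp, "->", agent.act(temp))
```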
In this week's mini episode, Linhda and Kyle discuss Ant Colony Optimization - a numerical / stochastic optimization technique which models its search after the process ants employ: using random walks to find a goal (food) and then leaving a pheromone trail on the walk back to the nest. We even find some way of relating the city of San Francisco and running a restaurant into the discussion.
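As a rough sketch of those mechanics (a toy four-city tour problem with invented distances; the parameter values are conventional defaults, not from the episode), each simulated ant builds a tour probabilistically, favoring short edges with strong pheromone; the edges of short tours get reinforced while all trails slowly evaporate:

```python
import random

# Toy symmetric distance matrix for 4 cities (invented numbers).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]
alpha, beta, rho, n_ants = 1.0, 2.0, 0.5, 10   # pheromone weight, distance weight, evaporation, colony size

def build_tour():
    tour = [0]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        # Prefer edges with more pheromone and shorter distance.
        weights = [pheromone[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best = None
for _ in range(50):
    tours = [build_tour() for _ in range(n_ants)]
    for row in pheromone:                       # evaporation: old trails fade
        for j in range(n):
            row[j] *= 1 - rho
    for tour in tours:                          # deposit: shorter tours leave more pheromone
        L = tour_length(tour)
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a][b] += 1.0 / L
            pheromone[b][a] += 1.0 / L
        if best is None or L < tour_length(best):
            best = tour

print("best tour:", best, "length:", tour_length(best))
```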