talk-data.com

Event

Data Skeptic

2014-05-23 – 2025-11-23 · Podcasts

Activities tracked

394

The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.

Sessions & talks

Showing 101–125 of 394 · Newest first

The Limits of NLP

2019-12-24 Listen
podcast_episode
NLP

We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
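
The paper's central idea is to cast every NLP task as text in, text out, with a task prefix telling the model what to do. A minimal sketch of that framing, assuming the Hugging Face transformers library and the t5-small checkpoint (the paper's own code lives in the google-research/text-to-text-transfer-transformer repository):

```python
# Text-to-text framing: the same model and API serve any task; only the
# task prefix in the input string changes.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```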

Jumpstart Your ML Project

2019-12-15 Listen
podcast_episode

Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts.

Serverless NLP Model Training

2019-12-10 Listen
podcast_episode

Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.
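
As a hypothetical illustration of one stage in such a pipeline (not the architecture discussed in the episode), a serverless inference function might lazily load a model from object storage and score incoming records; the bucket and key names below are placeholders:

```python
# Hypothetical AWS Lambda handler for serverless model inference.
import json
import pickle

import boto3

s3 = boto3.client("s3")
_model = None  # cached across warm invocations of the same container

def _load_model():
    global _model
    if _model is None:
        obj = s3.get_object(Bucket="my-models-bucket", Key="model.pkl")
        _model = pickle.loads(obj["Body"].read())
    return _model

def handler(event, context):
    features = json.loads(event["body"])["features"]
    prediction = _load_model().predict([features])[0]
    return {"statusCode": 200,
            "body": json.dumps({"prediction": float(prediction)})}
```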

Team Data Science Process

2019-12-03 Listen
podcast_episode

Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.  

Ancient Text Restoration

2019-12-01 Listen
podcast_episode

Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient language text.

Annotator Bias

2019-11-23 Listen
podcast_episode

Modern deep learning approaches to natural language processing are voracious in their demands for large training corpora; folk wisdom used to hold that around 100k documents were required for effective training. The availability of broadly pre-trained, general-purpose models like BERT has changed that: with transfer learning, an NLP researcher can get value out of far fewer examples, using the pre-trained model as a head start and focusing on the nuances of language specific to the task at hand. Small specialized corpora are thus both useful and practical to create.

In this episode, Kyle speaks with Mor Geva, lead author of the recent paper "Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets", which explores some unintended consequences of the typical procedure followed for generating such corpora. Source code for the paper is available here: https://github.com/mega002/annotator_bias
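
For readers unfamiliar with the workflow the episode takes for granted, here is a minimal sketch of fine-tuning a pre-trained model on a small specialized corpus, assuming PyTorch and the Hugging Face transformers library; the texts and labels are illustrative placeholders:

```python
# Transfer learning: start from broadly pre-trained BERT, then fine-tune
# on a handful of task-specific examples.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["an example from a small specialized corpus", "another example"]
labels = torch.tensor([0, 1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
for _ in range(3):  # a few passes are often enough after pre-training
    optimizer.zero_grad()
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
```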

NLP for Developers

2019-11-20 Listen
podcast_episode
Kyle Polich, Lance Olson (Microsoft)

While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team to discuss how tools like Cognitive Services and cognitive search enable non-data scientists to access relatively advanced NLP tools out of the box, and how more advanced data scientists can spend more of their time on bigger-picture problems.

Indigenous American Language Research

2019-11-13 Listen
podcast_episode
NLP

Manuel Mager joins us to discuss natural language processing for low- and under-resourced languages. We discuss current work in this area and the Naki Project, which aggregates research on NLP for native and indigenous languages of the American continent.

Talking to GPT-2

2019-10-31 Listen
podcast_episode

GPT-2 is yet another in a succession of models like ELMo and BERT that adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering recently, these approaches show tremendous promise, but how close are they to an AGI? Our guest today, Vazgen Davidyants, wondered exactly that, and had conversations with a chatbot running GPT-2. We discuss his experiences as well as some novel thoughts on artificial intelligence.
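
Reproducing the basic setup is straightforward; a minimal sketch using the Hugging Face transformers library (an assumption, not the guest's actual configuration):

```python
# Prompt GPT-2 with one side of a conversation and sample its reply.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Human: Do you think machines can be conscious?\nAI:"
reply = generator(prompt, max_new_tokens=50, do_sample=True, top_k=50)
print(reply[0]["generated_text"])
```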

Reproducing Deep Learning Models

2019-10-23 Listen
podcast_episode

Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model.  His results exposed some issues with the model.  Kyle and Rajiv discuss the original paper and Rajiv's analysis.

What BERT is Not

2019-10-14 Listen
podcast_episode
NLP

Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.

SpanBERT

2019-10-08 Listen
podcast_episode

Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans". https://arxiv.org/abs/1907.10529

BERT is Shallow

2019-09-23 Listen
podcast_episode

Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks such as adversarial attacks, compositional learning, and systematic learning.

BERT is Magic

2019-09-16 Listen
podcast_episode

Kyle pontificates on how impressed he is with BERT.

Applied Data Science in Industry

2019-09-06 Listen
podcast_episode

Kyle sits down with Jen Stirrup to inquire about her experiences helping companies deploy data science solutions in a variety of different settings.

Building the howto100m Video Corpus

2019-08-19 Listen
podcast_episode

Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self annotating", as hosts explain the actions they are taking on the screen. This episode is a discussion of the HowTo100M dataset - a project which has assembled a video corpus of 136M video clips with captions covering 23k activities.

Related links: the paper (presented at ICCV 2019), @antoine77340 (Antoine on GitHub), and Antoine's homepage.
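
The "self-annotating" idea can be sketched as a simple pairing of ASR caption segments with the clips they overlap; the data structures below are hypothetical, not the dataset's actual schema:

```python
# Turn a timed machine transcript into weakly labeled (clip, text) pairs,
# with no human annotation required.
from dataclasses import dataclass

@dataclass
class CaptionSegment:
    start: float  # seconds
    end: float
    text: str

def clip_caption_pairs(video_id, segments):
    return [
        {"video": video_id, "start": s.start, "end": s.end, "text": s.text}
        for s in segments
        if s.text.strip()  # drop empty ASR segments
    ]

segments = [CaptionSegment(0.0, 4.2, "first we whisk the eggs"),
            CaptionSegment(4.2, 9.0, "then fold in the flour")]
print(clip_caption_pairs("abc123", segments))
```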

BERT

2019-07-29 Listen
podcast_episode
NLP

Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects.
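
The "bidirectional" part is easy to demonstrate: BERT predicts a masked token using context from both sides of it. A quick sketch with the Hugging Face fill-mask pipeline (an assumption; the episode itself stays non-technical):

```python
# BERT fills in a masked token from bidirectional context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for c in fill_mask("The goal of [MASK] language processing is understanding text."):
    print(c["token_str"], round(c["score"], 3))
```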

Catastrophic Forgetting

2019-07-15 Listen
podcast_episode

Kyle and Linhda discuss some high-level theory of mind and overview the machine learning concept of catastrophic forgetting.
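
A toy demonstration of the phenomenon (not from the episode): train a small network on task A, fine-tune it on a conflicting task B with no rehearsal of A, and watch its task A accuracy collapse.

```python
import torch

torch.manual_seed(0)

def make_task(w):
    X = torch.randn(500, 2)
    return X, (X @ w > 0).long()  # linear rule defining the task

task_a = make_task(torch.tensor([1.0, 1.0]))
task_b = make_task(torch.tensor([1.0, -1.0]))  # conflicts with task A

model = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)

def train(X, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.cross_entropy(model(X), y).backward()
        opt.step()

def accuracy(X, y):
    return (model(X).argmax(1) == y).float().mean().item()

train(*task_a)
print("task A, after training on A:", accuracy(*task_a))  # near 1.0
train(*task_b)  # sequential training, no rehearsal of task A
print("task A, after training on B:", accuracy(*task_a))  # drops sharply
```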

Transfer Learning

2019-07-08 Listen
podcast_episode

Sebastian Ruder is a research scientist at DeepMind.  In this episode, he joins us to discuss the state of the art in transfer learning and his contributions to it.

Facebook Bargaining Bots Invented a Language

2019-06-21 Listen
podcast_episode

In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that made them able to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.

Under Resourced Languages

2019-06-15 Listen
podcast_episode
NLP

Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.

Named Entity Recognition

2019-06-08 Listen
podcast_episode

Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER.  NER algorithms take any string as input and return a list of "entities" - specific facts and agents in the text along with a classification of the type (e.g. person, date, place).
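
That input/output contract is easy to see in code; a short sketch using spaCy as one example NER implementation (any NER library would do):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with an NER component
doc = nlp("Kyle interviewed Linh Da in Los Angeles on June 8, 2019.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Los Angeles" GPE; "June 8, 2019" DATE
```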

The Death of a Language

2019-06-01 Listen
podcast_episode

USC students from the CAIS++ student organization have created a variety of novel projects under the mission statement of "artificial intelligence for social good". In this episode, Kyle interviews Zane and Leena about the Endangered Languages Project.