talk-data.com: A Year in Recap
Topic: Natural Language Processing (NLP), 252 items tagged
Top Events
We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
Big Data Analytics Methods unveils secrets to advanced analytics techniques ranging from machine learning, random forest classifiers, predictive modeling, cluster analysis, natural language processing (NLP), Kalman filtering and ensembles of models for optimal accuracy of analysis and prediction. More than 100 analytics techniques and methods provide big data professionals, business intelligence professionals and citizen data scientists insight on how to overcome challenges and avoid common pitfalls and traps in data analytics. The book offers solutions and tips on handling missing data, noisy and dirty data, error reduction and boosting signal to reduce noise. It discusses data visualization, prediction, optimization, artificial intelligence, regression analysis, the Cox hazard model and many analytics using case examples with applications in the healthcare, transportation, retail, telecommunication, consulting, manufacturing, energy and financial services industries. This book's state of the art treatment of advanced data analytics methods and important best practices will help readers succeed in data analytics.
Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.
The rise of machine learning has placed a premium on finding new sources of data to fuel predictive models. But acquiring external data is often expensive, and many data sets are rife with errors and difficult to combine with internal data. That's going to change in 2020.
To help us understand the scale, scope, and dimensions of emerging data marketplaces is Justin Langseth, one of the visionaries in our space. Justin is a VP at Snowflake responsible for the Snowflake Data Exchange. Prior to Snowflake, Justin was the technical founder and CEO/CTO of 5 data technology startups: Claraview (sold to Teradata), Zoomdata (sold to Logi Analytics), Clarabridge, Strategy.com, and Augaroo. He has 25 years of experience in business intelligence, natural language processing, big data, and AI.
The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on. Folk wisdom used to put the requirement at around 100k documents for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora. Thanks to these advancements, an NLP researcher can get value out of fewer examples, since transfer learning provides a head start and lets them focus on learning the nuances of the language specifically relevant to the task at hand. Thus, small specialized corpora are both useful and practical to create. In this episode, Kyle speaks with Mor Geva, lead author on the recent paper "Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets", which explores some unintended consequences of the typical procedure followed for generating corpora. Source code for the paper is available here: https://github.com/mega002/annotator_bias
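To make the transfer-learning idea concrete, here is a minimal sketch of fine-tuning a pre-trained BERT model on a tiny labeled corpus using the Hugging Face transformers library. The model name, the toy data, and the hyperparameters are illustrative assumptions, not anything from the episode.

```python
# Minimal sketch: fine-tuning a pre-trained BERT on a tiny labeled corpus.
# Assumes the Hugging Face transformers library; data and settings are toy values.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# A small, specialized corpus: far fewer examples than training from scratch needs.
texts = ["the service was excellent", "a complete waste of money"]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # a few passes are often enough when starting from BERT
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"epoch {epoch}: loss={outputs.loss.item():.4f}")
```

The point of the sketch is the head start: the pre-trained weights already encode general language knowledge, so the loop above only has to adapt them to the task-specific corpus.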
While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team about how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NLP tools out of the box, and how more advanced data scientists can focus more time on the bigger-picture problems.
Manuel Mager joins us to discuss natural language processing for low and under-resourced languages. We discuss current work in this area and the Naki Project which aggregates research on NLP for native and indigenous languages of the American continent.
As your business tries to make sense of today's staggering amount of structured and unstructured data, traditional analytics will take you only so far. The key to success over the next few years will depend on augmented analytics, a method that embeds machine learning and natural language processing (NLP) in the process. This report explains how augmented analytics can help you uncover hidden insights, predict results, and even prescribe solutions. Author Alice LaPlante provides best practices for deploying augmented analytics, along with real-world case studies that show you how to take full advantage of this method. IT professionals, business managers, and CFOs will learn ways to democratize data use among business users and executives, using a self-service model. The future belongs to those who can get more from their data. This report shows you how.
- Get a primer on the key components and learn how they work together
- Delve into the benefits of, and roadblocks to, adopting augmented analytics
- Learn how companies use this method in marketing, sales, finance, and human resources
- Examine case studies of companies including Accenture and Riverbed
Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.
This IBM® Redpaper publication helps the line of business (LOB), data science, and information technology (IT) teams develop an information architecture (IA) for their enterprise artificial intelligence (AI) environment. It describes the challenges that are faced by the three roles when creating and deploying enterprise AI solutions, and how they can collaborate for best results. This publication also highlights the capabilities of the IBM Cognitive Systems and AI solutions:
- IBM Watson® Machine Learning Community Edition
- IBM Watson Machine Learning Accelerator (WMLA)
- IBM PowerAI Vision
- IBM Watson Machine Learning
- IBM Watson Studio Local
- IBM Video Analytics
- H2O Driverless AI
- IBM Spectrum® Scale
- IBM Spectrum Discover
This publication examines the challenges through five different use case examples:
- Artificial vision
- Natural language processing (NLP)
- Planning for the future
- Machine learning (ML)
- AI teaming and collaboration
This publication targets readers from LOBs, data science teams, and IT departments, and anyone that is interested in understanding how to build an IA to support enterprise AI development and deployment.
Learn how to fuse today's data science tools and techniques with your SAP enterprise resource planning (ERP) system. With this practical guide, SAP veterans Greg Foss and Paul Modderman demonstrate how to use several data analysis tools to solve interesting problems with your SAP data. Data engineers and scientists will explore ways to add SAP data to their analysis processes, while SAP business analysts will learn practical methods for answering questions about the business. By focusing on grounded explanations of both SAP processes and data science tools, this book gives data scientists and business analysts powerful methods for discovering deep data truths. You'll explore:
- Examples of how data analysis can help you solve several SAP challenges
- Natural language processing for unlocking the secrets in text
- Data science techniques for data clustering and segmentation
- Methods for detecting anomalies in your SAP data
- Data visualization techniques for making your data come to life
Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects.
Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.
Deep Learning for Search teaches you how to improve the effectiveness of your search by implementing neural network-based techniques. By the time you're finished with the book, you'll be ready to build amazing search engines that deliver the results your users need and that get better as time goes on!
About the Technology: Deep learning handles the toughest search challenges, including imprecise search terms, badly indexed data, and retrieving images with minimal metadata. And with modern tools like DL4J and TensorFlow, you can apply powerful DL techniques without a deep background in data science or natural language processing (NLP). This book will show you how.
About the Book: Deep Learning for Search teaches you to improve your search results with neural networks. You'll review how DL relates to search basics like indexing and ranking. Then, you'll walk through in-depth examples to upgrade your search with DL techniques using Apache Lucene and Deeplearning4j. As the book progresses, you'll explore advanced topics like searching through images, translating user queries, and designing search engines that improve as they learn!
What's Inside:
- Accurate and relevant rankings
- Searching across languages
- Content-based image search
- Search with recommendations
About the Reader: For developers comfortable with Java or a similar language and search basics. No experience with deep learning or NLP needed.
About the Author: Tommaso Teofili is a software engineer with a passion for open source and machine learning. As a member of the Apache Software Foundation, he contributes to a number of open source projects, ranging from information retrieval (such as Lucene and Solr) to natural language processing and machine translation (including OpenNLP, Joshua, and UIMA). He currently works at Adobe, developing search and indexing infrastructure components, and researching the areas of natural language processing, information retrieval, and deep learning. He has presented search and machine learning talks at conferences including BerlinBuzzwords, International Conference on Computational Science, ApacheCon, EclipseCon, and others. You can find him on Twitter at @tteofili.
Quotes:
- "A practical approach that shows you the state of the art in using neural networks, AI, and deep learning in the development of search engines." (From the Foreword by Chris Mattmann, NASA JPL)
- "A thorough and thoughtful synthesis of traditional search and the latest advancements in deep learning." (Greg Zanotti, Marquette Partners)
- "A well-laid-out deep dive into the latest technologies that will take your search engine to the next level." (Andrew Wyllie, Thynk Health)
- "Hands-on exercises teach you how to master deep learning for search-based products." (Antonio Magnaghi, System1)
To really learn data science, you should not only master the tools—data science libraries, frameworks, modules, and toolkits—but also understand the ideas and principles underlying them. Updated for Python 3.6, this second edition of Data Science from Scratch shows you how these tools and algorithms work by implementing them from scratch. If you have an aptitude for mathematics and some programming skills, author Joel Grus will help you get comfortable with the math and statistics at the core of data science, and with the hacking skills you need to get started as a data scientist. Packed with new material on deep learning, statistics, and natural language processing, this updated book shows you how to find the gems in today's messy glut of data.
- Get a crash course in Python
- Learn the basics of linear algebra, statistics, and probability, and how and when they're used in data science
- Collect, explore, clean, munge, and manipulate data
- Dive into the fundamentals of machine learning
- Implement models such as k-nearest neighbors, Naïve Bayes, linear and logistic regression, decision trees, neural networks, and clustering
- Explore recommender systems, natural language processing, network analysis, MapReduce, and databases
ELMo (Embeddings from Language Models) introduced the idea of deep contextualized word representations. It extends previous ideas like word2vec and GloVe. The ELMo model is a neural network able to map natural language into a vector space. This vector space, out of the box, proved to be incredibly useful in a wide variety of seemingly unrelated NLP tasks like sentiment analysis and named entity recognition.
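As an illustration of using such contextualized embeddings, here is a minimal sketch that loads the publicly available ELMo module from TensorFlow Hub and embeds a couple of sentences. The module URL and output keys follow TF Hub's published ELMo module, but treat the exact API details as assumptions to verify against the module's documentation.

```python
# Minimal sketch: embedding sentences with ELMo via TensorFlow Hub.
# Assumes TF2 plus tensorflow_hub and the tfhub.dev "elmo/3" module; verify
# the module URL and signature keys before relying on them.
import tensorflow as tf
import tensorflow_hub as hub

elmo = hub.load("https://tfhub.dev/google/elmo/3")

sentences = tf.constant([
    "the movie was surprisingly good",
    "the service was painfully slow",
])

# The default signature returns several tensors; "elmo" holds the
# contextualized per-token vectors, "default" a mean-pooled sentence vector.
outputs = elmo.signatures["default"](sentences)
token_vectors = outputs["elmo"]        # shape: [batch, max_tokens, 1024]
sentence_vectors = outputs["default"]  # shape: [batch, 1024]

print(token_vectors.shape, sentence_vectors.shape)
```

Because the vectors are contextual, the same word in different sentences gets different embeddings, which is exactly what downstream tasks like sentiment analysis and named entity recognition benefit from.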
Machine Learning with R Quick Start Guide takes you through the foundations of machine learning using the R programming language. Starting with the basics, this book introduces key algorithms and methodologies, offering hands-on examples and applicable machine learning solutions that allow you to extract insights and create predictive models.
What this book will help me do:
- Understand the basics of machine learning and apply them using R 3.5.
- Learn to clean, prepare, and visualize data with R to ensure robust data analysis.
- Develop and work with predictive models using various machine learning techniques.
- Discover advanced topics like natural language processing and neural network training.
- Implement end-to-end pipeline solutions, from data collection to predictive analytics, in R.
Author(s): Sanz, the author of Machine Learning with R Quick Start Guide, is an expert in data science with years of experience in the field of machine learning and R programming. Known for their accessible and detailed teaching style, the author focuses on providing practical knowledge to empower readers in the real world.
Who is it for? This book is ideal for graduate students and professionals, including aspiring data scientists and data analysts, looking to start their journey in machine learning. Readers are expected to have some familiarity with the R programming language, but no prior machine learning experience is necessary. With this book, the audience will gain the ability to confidently navigate machine learning concepts and practices.
Extract actionable insights from text and unstructured data. Information extraction is the task of automatically extracting structured information from unstructured or semi-structured text. SAS Text Analytics for Business Applications: Concept Rules for Information Extraction Models focuses on this key element of natural language processing (NLP) and provides real-world guidance on the effective application of text analytics. Using scenarios and data based on business cases across many different domains and industries, the book includes many helpful tips and best practices from SAS text analytics experts to ensure fast, valuable insight from your textual data. Written for a broad audience of beginning, intermediate, and advanced users of SAS text analytics products, including SAS Visual Text Analytics, SAS Contextual Analysis, and SAS Enterprise Content Categorization, this book provides a solid technical reference. You will learn the SAS information extraction toolkit, broaden your knowledge of rule-based methods, and answer new business questions. As your practical experience grows, this book will serve as a reference to deepen your expertise.
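The SAS concept-rule syntax itself is beyond this summary, but the underlying idea of rule-based information extraction can be sketched in a few lines of generic Python. The patterns and the sample text below are invented for illustration and are not SAS concept-rule syntax.

```python
# Minimal sketch of rule-based information extraction: regex patterns pull
# structured fields out of free text. Illustrative only; not SAS syntax.
import re

text = (
    "Invoice INV-2041 was issued on 2019-03-15 to Acme Corp "
    "for $12,500.00, payable within 30 days."
)

rules = {
    "invoice_id": r"\bINV-\d+\b",
    "date": r"\b\d{4}-\d{2}-\d{2}\b",
    "amount": r"\$[\d,]+(?:\.\d{2})?",
}

# Apply each rule and collect every match into a structured record.
record = {field: re.findall(pattern, text) for field, pattern in rules.items()}
print(record)
# {'invoice_id': ['INV-2041'], 'date': ['2019-03-15'], 'amount': ['$12,500.00']}
```

Production systems layer context conditions, taxonomies, and disambiguation on top of such patterns, which is the territory the book covers for the SAS toolkit.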
A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder. The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Encoding an input to a small number of hidden nodes, which can effectively be decoded to a matching string, requires machine learning to learn an efficient representation of the essence of the strings. In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
Related links:
- tf-seq2seq
- Describing Multimedia Content using Attention-based Encoder-Decoder Networks
- Show and Tell: A Neural Image Caption Generator
- Attend to You: Personalized Image Captioning with Context Sequence Memory Networks
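To make the encoder/decoder split concrete, here is a minimal seq2seq sketch in PyTorch. The vocabulary sizes, dimensions, and toy batch are made-up values, and a real system would add attention, a teacher-forcing schedule, and proper tokenization.

```python
# Minimal seq2seq sketch: a GRU encoder compresses the source sequence into a
# hidden state, and a GRU decoder unrolls that state into the target sequence.
# All sizes and the toy batch below are illustrative assumptions.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1200, 64, 128

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src):                  # src: [batch, src_len]
        _, hidden = self.rnn(self.embed(src))
        return hidden                         # [1, batch, HID]: the "essence"

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, tgt, hidden):           # tgt: [batch, tgt_len]
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden       # logits over target vocabulary

encoder, decoder = Encoder(), Decoder()
src = torch.randint(0, SRC_VOCAB, (2, 7))     # toy batch: 2 sentences, 7 tokens
tgt = torch.randint(0, TGT_VOCAB, (2, 5))     # shifted target tokens

hidden = encoder(src)
logits, _ = decoder(tgt, hidden)
print(logits.shape)                           # torch.Size([2, 5, 1200])
```

The single hidden tensor passed from encoder to decoder is the bottleneck the episode describes: the model must squeeze the meaning of the input sequence into that fixed-size representation.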