talk-data.com

Topic: Data Science
Tags: machine_learning, statistics, analytics
1516 tagged activities

Activity Trend: 68 peak/qtr (2020-Q1 to 2026-Q1)

Activities

1516 activities · Newest first

Covid-19: Big Data Analytics and Artificial Intelligence by Cristian Randieri

Big Data Europe · Onsite and online, 22-25 November 2022. Learn more about the conference: https://bit.ly/3BlUk9q

Join our next Big Data Europe conference on 22-25 November 2022, where you will be able to learn from global experts giving technical talks and hands-on workshops in the fields of Big Data, High Load, Data Science, Machine Learning, and AI. This time, the conference will be held in a hybrid setting, allowing you to attend workshops and listen to expert talks on-site or online.

Graph Processing for Open Metadata and Governance by Mandy Chessell

Big Data Europe · Onsite and online, 22-25 November 2022. Learn more: https://bit.ly/3BlUk9q

Stopping Public Transport Coronavirus Infections with Big Data by Tim Frey

Big Data Europe · Onsite and online, 22-25 November 2022. Learn more: https://bit.ly/3BlUk9q

The Intuition Behind The Use of M.L. in Marketing Analytics by Mario A Vinasco

Big Data Europe · Onsite and online, 22-25 November 2022. Learn more: https://bit.ly/3BlUk9q

Trust and Quality in Era of Software 2.0 by Yiannis Kanellopoulos

Big Data Europe · Onsite and online, 22-25 November 2022. Learn more: https://bit.ly/3BlUk9q

Artificial Intelligence and Solving Optimization Problems in the Physical Sciences - Andrey Ustyuzhani

Big Data Days · Onsite and online, 22-25 November 2022. Learn more about the conference: https://bit.ly/30YNt99 Join our next Big Data Days conference on 22-25 November 2022. Here you will be able to gain knowledge from global experts giving technical talks and hands-on workshops in the fields of Big Data, High Load, Data Science, Machine Learning, and AI. This year the conference will be held in a hybrid format, which will allow you to listen to talks and attend workshops on-site or online.

Summary: Most of the time, when you think about a data pipeline or ETL job, what comes to mind is a purely mechanistic progression of functions that move data from point A to point B. Sometimes, however, one of those transformations is actually a full-fledged machine learning project in its own right. In this episode Tal Galfsky explains how he and the team at Cherre tackled the problem of messy address data by building a natural language processing and entity resolution system that is served as an API to the rest of their pipelines. He discusses the myriad ways that addresses are incomplete, poorly formed, or just plain wrong, why this was a big enough pain point to invest in building an industrial-strength solution, and how it actually works under the hood. After listening you'll look at your data pipelines in a new light and start to wonder how you can bring more advanced strategies into the cleaning and transformation process.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it's now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you've got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don't forget to thank them for their continued support of this show!

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage, and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature, which instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.

RudderStack's smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and Zendesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.

Your host is Tobias Macey and today I'm interviewing Tal Galfsky about how Cherre is bringing order to the messy problem of physical addresses and entity resolution in their data pipelines.

Interview

Introduction

How did you get involved in the area of data management?
Started as a physicist and evolved into data science.

Can you start by giving a brief recap of what Cherre is and the types of data that you deal with?
Cherre is a company that connects data. We're not a data vendor, in that we don't sell data, primarily; we help companies connect and make sense of their data. The real estate market is historically closed, gate-kept, and behind on tech.

What are the biggest challenges that you deal with in your role when working with real estate data?
Lack of a standard domain model in real estate. Ontology: what is a property? Each data source thinks about properties in a very different way, yielding similar but completely different data. Quality: even if the datasets are talking about the same thing, there are different levels of accuracy and freshness. Hierarchy: when is one source better than another?

What are the teams and systems that rely on address information?
Any company that needs to clean or organize (make sense of) its data needs to identify people, companies, and properties. Our clients use address resolution in multiple ways, via the UI or via an API. Our service is both external and internal, so what I build has to be good enough for the demanding needs of our data science team, robust enough for our engineers, and simple enough that non-expert clients can use it.

Can you give an example of the problems involved in entity resolution?
A known-entity example: the Empire State Building. To resolve addresses in a way that makes sense for the client, you need to capture the real-world entities: lots, buildings, units. For each entity there are three tasks (sketched in code below):

Identify the type of the object (lot, building, unit).
Tag the object with all the relevant addresses.
Record its relations to other objects (lots, buildings, units).
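To make that concrete, here is a toy sketch of what such a resolved entity could look like; the class and field names are hypothetical illustrations, not Cherre's actual schema:

```python
# A minimal sketch of the entity model described above; the class and field
# names are hypothetical, not Cherre's actual schema.
from dataclasses import dataclass, field
from enum import Enum

class EntityType(Enum):
    LOT = "lot"
    BUILDING = "building"
    UNIT = "unit"

@dataclass
class ResolvedEntity:
    entity_id: str                # persistent primary key
    entity_type: EntityType       # lot, building, or unit
    addresses: list[str]          # all address strings tagged to this entity
    latitude: float
    longitude: float
    related: dict[str, str] = field(default_factory=dict)  # links to other entities

# The Empire State Building, for example, is one building entity tagged with
# several address variants and linked to its lot.
esb = ResolvedEntity(
    entity_id="bldg-001",
    entity_type=EntityType.BUILDING,
    addresses=["350 5th Ave, New York, NY 10118", "20 W 34th St, New York, NY"],
    latitude=40.7484,
    longitude=-73.9857,
    related={"lot": "lot-042"},
)
```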

What are some examples of the kinds of edge cases or messiness that you encounter in addresses?
The first class is string problems; the second class is component problems; the third class is geocoding.

I understand that you have developed a service for normalizing addresses and performing entity resolution to provide canonical references for downstream analyses. Can you give an overview of what is involved? What is the need for the service?
The main requirement is connecting an address to a lot, building, or unit with latitude and longitude coordinates.

How were you satisfying this requirement previously?
Before we built our model and dedicated service, we had a basic prototype pipeline that only handled NYC addresses.

What were the motivations for designing and implementing this as a service?
The need to expand nationwide and to deal with client queries in real time.

What are some of the other data sources that you rely on to be able to perform this normalization and resolution?
Lot data, building data, unit data, and footprints and address-points datasets.

What challenges do you face in managing these other sources of information?
Accuracy, hierarchy, standardization, a unified solution, and persistent IDs and primary keys.

Digging into the specifics of your solution, can you talk through the full lifecycle of a request to resolve an address and the various manipulations that are performed on it?
String cleaning, parse and tokenize, standardize, match.

What are some of the other pieces of information in your system that you would like to see addressed in a similar fashion?
Our named-entity solution, with its connection to the knowledge graph and owner unmasking.

What are some of the most interesting, unexpected, or challenging lessons that you learned while building this address resolution system?
Scaling the NYC geocoding approach: the NYC model was exploding a subset of the options for messing up an address. Also flexibility, dependencies, and client exposure.

Now that you have this system running in production, if you were to start over today what would you do differently?
A lot, but at this point the module boundaries and client interface are defined in such a way that we are able to change or completely replace any given part without breaking anything client-facing.

What are some of the other projects that you are excited to work on going forward?
Named entity resolution and the knowledge graph.
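As a rough illustration of that clean, tokenize, standardize, match lifecycle, here is a toy pipeline; it is my own sketch with a deliberately tiny abbreviation table and lookup index, not Cherre's implementation:

```python
# Toy address-resolution pipeline illustrating the clean -> tokenize ->
# standardize -> match stages described above; not Cherre's implementation.
import re
from typing import Optional

ABBREVIATIONS = {"st": "street", "ave": "avenue", "w": "west", "e": "east"}

def clean(raw: str) -> str:
    """String cleaning: lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", raw.lower())).strip()

def tokenize(cleaned: str) -> list[str]:
    """Parse and tokenize into address components."""
    return cleaned.split()

def standardize(tokens: list[str]) -> list[str]:
    """Expand common abbreviations to a canonical form."""
    return [ABBREVIATIONS.get(t, t) for t in tokens]

def match(tokens: list[str], canonical: dict[str, str]) -> Optional[str]:
    """Look up the standardized address against known entities."""
    return canonical.get(" ".join(tokens))

canonical_index = {"20 west 34th street": "bldg-001"}  # toy entity index
for raw in ["20 W. 34th St", "20 west 34th street"]:
    print(raw, "->", match(standardize(tokenize(clean(raw))), canonical_index))
```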

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?
BigQuery is a huge asset, in particular its UDFs, but they don't support API calls or Python scripts.

Closing Announcements

Thank you for listening! Don't forget to check out our other show, Podcast.init, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on iTunes and tell your friends and co-workers. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Links

Cherre

Podcast Episode

Photonics
Knowledge Graph
Entity Resolution
BigQuery
NLP == Natural Language Processing
dbt

Podcast Episode

Airflow

Podcast.init Episode

Datadog

Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

We talked about:

Andrada’s background

Recommended courses
Kaggle and StackOverflow
Doing notebooks on Kaggle
Projects for learning data science
Finding a job and a mentor with Kaggle's help
The process for looking for a job
Main difficulties of getting a job
Project portfolio and Kaggle
Helpful analytical skills for transitioning into data science
Becoming better at coding
Learning by imitating
Is doing a master's helpful?
Getting into data science without a master's
Kaggle is not just about competitions
The last tip: use social media

Links:

https://www.kaggle.com/andradaolteanu
https://twitter.com/andradaolteanuu
https://www.linkedin.com/in/andrada-olteanu-3806a2132/

Join DataTalks.Club: https://datatalks.club/slack.html

We talked about:

Ksenia's background
Data analytics vs data science
Skills needed for data analytics and data science
Benefits of getting a master's degree
Useful online courses
How a project management background can be helpful for the career transition
Which skills do PMs need to become data analysts?
Going from working with spreadsheets to working with Python
Kaggle
Productionizing machine learning models
Getting experience while studying
Looking for a job
Gap between theory and practice
Learning plan for transitioning
Last tips and getting involved in projects

Links:

Notes prepared by Ksenia with all the info: https://www.notion.so/ksenialeg/DataTalks-Club-7597e55f476040a5921db58d48cf718f

Join DataTalks.Club: https://datatalks.club/slack.html

Data Science on AWS

With this practical book, AI and machine learning practitioners will learn how to successfully build and deploy data science projects on Amazon Web Services. The Amazon AI and machine learning stack unifies data science, data engineering, and application development to help level up your skills. This guide shows you how to build and run pipelines in the cloud, then integrate the results into applications in minutes instead of days. Throughout the book, authors Chris Fregly and Antje Barth demonstrate how to reduce cost and improve performance.

Apply the Amazon AI and ML stack to real-world use cases for natural language processing, computer vision, fraud detection, conversational devices, and more
Use automated machine learning to implement a specific subset of use cases with SageMaker Autopilot
Dive deep into the complete model development lifecycle for a BERT-based NLP use case, including data ingestion, analysis, model training, and deployment
Tie everything together into a repeatable machine learning operations pipeline
Explore real-time ML, anomaly detection, and streaming analytics on data streams with Amazon Kinesis and Managed Streaming for Apache Kafka
Learn security best practices for data science projects and workflows, including identity and access management, authentication, authorization, and more
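As one concrete flavor of the Autopilot workflow mentioned above, a job can be launched from the SageMaker Python SDK roughly as follows; this is an illustrative sketch rather than the book's code, and the role ARN, bucket path, and target column are placeholders:

```python
# Rough sketch of launching a SageMaker Autopilot (AutoML) job with the
# SageMaker Python SDK; illustrative only, with placeholder role/bucket values.
import sagemaker
from sagemaker.automl.automl import AutoML

session = sagemaker.Session()
automl = AutoML(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    target_attribute_name="label",   # column Autopilot should learn to predict
    max_candidates=10,               # cap the number of candidate pipelines
    sagemaker_session=session,
)
automl.fit(inputs="s3://my-bucket/train.csv", wait=True)  # placeholder S3 path
print(automl.best_candidate()["CandidateName"])           # best model found
```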

Visualizing Data in R 4: Graphics Using the base, graphics, stats, and ggplot2 Packages

Master the syntax for working with R's plotting functions in graphics and stats in this easy reference to formatting plots. The approach in Visualizing Data in R 4 toward the application of formatting in ggplot() follows the structure of the formatting used by the plotting functions in graphics and stats. The book takes advantage of the new features added to R 4 where appropriate, including a refreshed color palette for charts, Cairo graphics with more fonts/symbols, and improved performance from grid graphics, including ggplot2 rendering speed.

Visualizing Data in R 4 starts with an introduction and is then split into two parts and six appendices. Part I covers the function plot() and the ancillary functions you can use with plot(). You'll also see the functions par() and layout(), which provide for multiple plots on a page. Part II goes over the basics of using the functions qplot() and ggplot() in the package ggplot2. The default plots generated by qplot() and ggplot() are more sophisticated-looking and easier to produce than the default plots from plot(), but the function plot() is more flexible. Both plot() and ggplot() allow for many layers in a plot. The six appendices cover plots for contingency tables, plots for continuous variables, plots for data with a limited number of values, functions that generate multiple plots, plots for time series analysis, and some miscellaneous plots. Among the functions covered in the appendices are those that generate histograms, bar charts, pie charts, box plots, and heatmaps.

What You Will Learn
Use R to create informative graphics
Master plot(), qplot(), and ggplot()
Discover the canned graphics functions in stats and graphics
Format plots generated by plot() and ggplot()

Who This Book Is For
Those in data science who use R. Some prior experience with R or data science is recommended.

Cleaning Data for Effective Data Science

Dive into the intricacies of data cleaning, a crucial aspect of any data science and machine learning pipeline, with Cleaning Data for Effective Data Science. This comprehensive guide walks you through tools and methodologies like Python, R, and command-line utilities to prepare raw data for analysis. Learn practical strategies to manage, clean, and refine data encountered in the real world.

What this book will help me do
Understand and utilize various data formats such as JSON, SQL, and PDF for data ingestion and processing.
Master key tools like pandas, SciPy, and Tidyverse to manipulate and analyze datasets efficiently.
Develop heuristics and methodologies for assessing data quality, detecting bias, and identifying irregularities.
Apply advanced techniques like feature engineering and statistical adjustments to enhance data usability.
Gain confidence in handling time series data by employing methods for de-trending and interpolating missing values.

Author(s)
David Mertz has years of experience as a Python programmer and data scientist. Known for his engaging and accessible teaching style, David has authored numerous technical articles and books. He emphasizes not only the technicalities of data science tools but also the critical thinking that approaches solutions creatively and effectively.

Who is it for?
Cleaning Data for Effective Data Science is designed for data scientists, software developers, and educators dealing with data preparation. Whether you're an aspiring data enthusiast or an experienced professional looking to refine your skills, this book provides essential tools and frameworks. Prior programming knowledge, particularly in Python or R, coupled with an understanding of statistical fundamentals, will help you make the most of this resource.
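To give a flavor of the time-series cleaning mentioned above, interpolating gaps and de-trending can be done with pandas and NumPy in a few lines; this is a minimal example of mine, not code from the book:

```python
# Minimal illustration of interpolating gaps and de-trending a series with
# pandas/NumPy; an example in the spirit of the book, not code from it.
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=10, freq="D")
s = pd.Series([1.0, 2.0, np.nan, 4.0, 5.0, np.nan, 7.0, 8.0, 9.0, 10.0], index=idx)

filled = s.interpolate(method="time")  # fill gaps using the time index

# Fit and subtract a linear trend, leaving the de-trended residual series.
t = np.arange(len(filled))
slope, intercept = np.polyfit(t, filled.to_numpy(), 1)
detrended = filled - (slope * t + intercept)
print(detrended.round(6))
```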

Data Science for Supply Chain Forecasting

Using data science to solve a problem requires a scientific mindset more than coding skills. Data Science for Supply Chain Forecasting, Second Edition contends that a true scientific method, which includes experimentation, observation, and constant questioning, must be applied to supply chains to achieve excellence in demand forecasting. This second edition adds more than 45 percent extra content, with four new chapters, including an introduction to neural networks and the forecast value added framework. Part I focuses on statistical "traditional" models, Part II on machine learning, and the all-new Part III discusses demand forecasting process management. The various chapters cover both forecast models and new concepts such as metrics, underfitting, overfitting, outliers, feature optimization, and external demand drivers. The book is replete with do-it-yourself sections, with implementations provided in Python (and Excel for the statistical models) to show readers how to apply these models themselves. This hands-on book, covering the entire range of forecasting, from the basics all the way to leading-edge models, will benefit supply chain practitioners, forecasters, and analysts looking to go the extra mile with demand forecasting.

Events around the book: a De Gruyter online event in which the author Nicolas Vandeput, joined by Stefan de Kok (supply chain innovator and CEO of Wahupa), Spyros Makridakis (professor at the University of Nicosia and director of the Institute For the Future, IFF), and Edouard Thieuleux (founder of AbcSupplyChain), discusses the general issues and challenges of demand forecasting, shares insights into best practices (process, models), and considers how data science and machine learning impact those forecasts. The event is moderated by Michael Gilliland, marketing manager for SAS forecasting software: https://youtu.be/1rXjXcabW2s
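In that do-it-yourself spirit, a simple exponential smoothing forecast, one of the "traditional" statistical models covered in Part I, takes only a few lines of Python. This sketch is my own illustration rather than code from the book:

```python
# Simple exponential smoothing:
#   forecast[t+1] = alpha * demand[t] + (1 - alpha) * forecast[t]
# An illustrative implementation, not code from the book.
def simple_exp_smoothing(demand: list[float], alpha: float = 0.3) -> list[float]:
    forecasts = [demand[0]]  # seed the first forecast with the first observation
    for d in demand:
        forecasts.append(alpha * d + (1 - alpha) * forecasts[-1])
    return forecasts  # one-step-ahead forecasts; last entry is the next-period forecast

demand = [28.0, 19.0, 18.0, 13.0, 19.0, 16.0]
print(simple_exp_smoothing(demand))
```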

We talked about:

Ben's background
AI evangelism
Ben's first experiences speaking in public
Becoming a great speaker
Key takeaways and call to action
Making a good introduction
Being remembered
Writing a talk proposal for conferences
Landing a keynote
Good topics to start talks on
Pitching a solution talk to meetup organizers
Top public speaking skill to acquire
Book recommendations

Join DataTalks.Club: https://datatalks.club/slack.html

Forecasting Time Series Data with Facebook Prophet

Delve into the art of time series forecasting with the comprehensive power of Facebook Prophet. This tool enables users to develop precise forecasting models with simplicity and effectiveness. Through this book, you'll explore Prophet's core functionality and advanced configurations, equipping yourself with the knowledge to proficiently model and predict data trends.

What this book will help me do
Build intuitive and effective forecasting models using Facebook Prophet.
Understand the role and implementation of seasonality and holiday effects in time series data.
Identify and address outliers and special data events effectively.
Optimize forecasts using advanced techniques like hyperparameter tuning and additional regressors.
Evaluate and deploy forecasting models in production settings for practical applications.

Author(s)
Greg Rafferty is a seasoned data science professional with extensive experience in time series forecasting. Having worked on diverse forecasting projects, Greg brings a unique perspective that integrates practicality and depth. His approachable writing style makes complex topics accessible and actionable.

Who is it for?
This book is tailored for data scientists, analysts, and developers seeking to enhance their forecasting capabilities using Python. If you have a grounding in Python and a basic understanding of forecasting principles, you will find this book a valuable resource to sharpen your expertise and achieve new forecasting precision.
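The core Prophet workflow the book builds on is compact. A minimal sketch, assuming the current prophet package (older releases ship it as fbprophet) and a toy dataset, looks like this:

```python
# Minimal Prophet workflow: fit on a two-column frame (ds = date, y = value),
# then predict a horizon. Assumes the `prophet` package (formerly `fbprophet`).
import pandas as pd
from prophet import Prophet

df = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=120, freq="D"),
    "y": [10 + (i % 7) + i * 0.05 for i in range(120)],  # toy weekly pattern + trend
})

m = Prophet()                                 # seasonality/holidays configurable here
m.fit(df)
future = m.make_future_dataframe(periods=30)  # extend 30 days past the data
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```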

Data Science Revealed: With Feature Engineering, Data Visualization, Pipeline Development, and Hyperparameter Tuning

Get insight into data science techniques such as data engineering and visualization, statistical modeling, machine learning, and deep learning. This book teaches you how to select variables, optimize hyperparameters, develop pipelines, and train, test, and validate machine and deep learning models. Each chapter includes a set of examples allowing you to understand the concepts, assumptions, and procedures behind each model.

The book covers parametric methods, or linear models, that combat under- or over-fitting using techniques such as Lasso and Ridge. It includes complex regression analysis with time series smoothing, decomposition, and forecasting. It takes a fresh look at non-parametric models for binary classification (logistic regression analysis) and ensemble methods such as decision trees, support vector machines, and naive Bayes. It covers the most popular non-parametric method for time-event data (the Kaplan-Meier estimator). It also covers ways of solving classification problems using artificial neural networks such as restricted Boltzmann machines, multi-layer perceptrons, and deep belief networks. The book discusses unsupervised learning clustering techniques such as the k-means method, agglomerative and DBSCAN approaches, and dimension reduction techniques such as feature importance, Principal Component Analysis, and Linear Discriminant Analysis. And it introduces driverless artificial intelligence using H2O. After reading this book, you will be able to develop, test, validate, and optimize statistical machine learning and deep learning models, and engineer, visualize, and interpret sets of data.

What You Will Learn
Design, develop, train, and validate machine learning and deep learning models
Find optimal hyperparameters for superior model performance
Improve model performance using techniques such as dimension reduction and regularization
Extract meaningful insights for decision making using data visualization

Who This Book Is For
Beginning and intermediate level data scientists and machine learning engineers
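As a small taste of the pipeline-plus-hyperparameter-tuning workflow described above, here is a generic scikit-learn sketch of mine (not the book's code) combining scaling, Ridge regression, and cross-validated grid search:

```python
# Generic sketch of a pipeline with hyperparameter tuning for a Ridge model,
# the kind of workflow the book walks through; not code from the book.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()), ("model", Ridge())])
search = GridSearchCV(pipe, {"model__alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)  # cross-validation guards against over-/under-fitting
print(search.best_params_, search.score(X_test, y_test))
```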

Stakeholders often miss key insights that data can provide to drive action forward, due to the way the data is presented and communicated to them. My guest today believes that data storytelling is key to resolving this common pain point. Kam Lee is a BI Data Storytelling Mastery alumnus who has used our framework to surface over $100M for the fintech company he works with! Kam is the Chief Data Scientist at his company Finetooth Analytics (specializing in marketing analytics), working with top marketers like Russell Brunson from ClickFunnels! Our data masterclass with Kam today delves deep into how he used our BI Data Storytelling methodology and framework to straddle data engineering, data science, and storytelling. Kam shares game-changing concepts from the course and how he has used them to connect with stakeholders, influence their actions, and overcome what he calls 'emotional responses' to data. Tune in to this knowledge-bomb-filled episode!

In this episode, you'll learn:
[0:12:20] Three buckets Kam uses to organize the data storytelling process.
[0:14:56] The challenge of dealing with stakeholders who respond emotionally to data.
[0:26:48] Whether to start with the storyboarding or the analytics data dictionary first.
[0:28:19] The difference between KPIs, trends, and actions.

For full show notes and the links mentioned, visit: https://bibrainz.com/podcast/76

Enjoyed the show? Please leave us a review on iTunes.

Summary: The process of building and deploying machine learning projects requires a staggering number of systems and stakeholders to work in concert. In this episode Yaron Haviv, co-founder of Iguazio, discusses the complexities inherent in the process, as well as how he has worked to democratize the technologies necessary to make machine learning operations maintainable.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Your host is Tobias Macey and today I'm interviewing Yaron Haviv about Iguazio, a platform for end-to-end automation of machine learning applications using MLOps principles.

Interview

Introduction

How did you get involved in the area of data science & analytics?
Can you start by giving an overview of what Iguazio is and the story of how it got started?
How would you characterize your target or typical customer?
What are the biggest challenges that you see around building production-grade workflows for machine learning?

How does Iguazio help to address those complexities?

For customers who have already invested in the technical and organizational capacity for data science and data engineering, how does Iguazio integrate with their environments?
What are the responsibilities of a data engineer throughout the different stages of the lifecycle for a machine learning application?
Can you describe how the Iguazio platform is architected?

How has the design of the platform evolved since you first began working on it?
How have the industry best practices around bringing machine learning to production changed?

How do you approach testing/validation of machine learning applications and releasing them to production environments (e.g. CI/CD)?
Once a model is in

Did you know that there are 3 different types of data scientists? A for analyst, B for builder, and C for consultant. We discuss the key differences between each one and some learning strategies you can use to become A, B, or C.

We talked about:

Inspirations for memes
Danny's background and career journey
The ABCs of data science - the story behind the idea
Data scientist type A - Analyst
Skills, responsibilities, and background for type A
Transitioning from data analytics to type A data scientist (that's the path Danny took)
How can we become more curious?
Data scientist type B - Builder
Responsibilities and background for type B
Transitioning from type A to type B
Most important skills for type B
Why you have to learn more about cloud
Data scientist type C - Consultant
Skills, responsibilities, and background for type C
Growing into the C type
Ideal data science team
Important business metrics
Getting a job - easier as type A or type B?
Looking for a job without experience
Two approaches for job search: "apply everywhere" and "apply nowhere"
Are bootcamps useful?
Learning path to becoming a data scientist
Danny's data apprenticeship program and "Serious SQL" course
Why SQL is the most important skill
R vs Python
Importance of a master's and PhD

Links:

Danny's profile on LinkedIn: https://linkedin.com/in/datawithdanny Danny's course: https://datawithdanny.com/ Trailer: https://www.linkedin.com/posts/datawithdanny_datascientist-data-activity-6767988552811847680-GzUK/ Technical debt paper: https://proceedings.neurips.cc/paper/2015/hash/86df7dcfd896fcaf2674f757a2463eba-Abstract.html

Join DataTalks.Club: https://datatalks.club/slack.html

Send us a text! Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next.

Abstract: Hosted by Al Martin, VP, IBM Expert Services Delivery, Making Data Simple provides the latest thinking on big data, AI, and the implications for the enterprise from a range of experts. This week on Making Data Simple, we have Kristen Summers and John Thomas. Kristen is a Distinguished Engineer in Cloud and Cognitive Expert Labs; she has worked in artificial intelligence and data science, holds a PhD in computer science, and leads data science within Expert Labs. John is a Distinguished Engineer in Data and Expert Labs and leads services that help clients establish the AI factory.

Show Notes
3:24 – What is the AI academy and how does it all fit together?
4:34 – AI Ladder and AI maturity
8:32 – How does the AI Factory make it easier to accomplish the AI Ladder?
12:00 – Why does your team do it better?
17:03 – How do you know your data is ready?
21:22 – What is the most practical use case?
23:02 – What does it really mean to infuse AI?
25:15 – Definition of the AI maturity curve
28:25 – How do you know it's trustworthy?
29:14 – What is the most important lesson you've learned with AI, and what is AI not very good at?

In the Dream House

Connect with the Team
Producer Kate Brown - LinkedIn.
Producer Steve Templeton - LinkedIn.
Host Al Martin - LinkedIn and Twitter.

The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.