talk-data.com

Topic: Analytics
Tags: data_analysis, insights, metrics
4552 tagged

Activity Trend: 398 peak/qtr, 2020-Q1 to 2026-Q1

Activities: 4552 activities · Newest first

Summary A large fraction of data engineering work involves moving data from one storage location to another in order to support different access and query patterns. SingleStore aims to cut down on the number of database engines that you need to run so that you can reduce the amount of copying that is required. By supporting fast, in-memory row-based queries and a columnar on-disk representation, it lets your transactional and analytical workloads run in the same database. In this episode, SVP of Engineering Shireesh Thota describes the impact that SingleStore can have on your overall system architecture and the benefits of using a cloud-native database engine for your next application.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show! Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker, and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3,000 on an annual subscription. So now your modern data stack is set up. How is everyone going to find the data they need, and understand it? Select Star is a data discovery platform that automatically analyzes & documents your data. For every table in Select Star, you can find out where the data originated, which dashboards are built on top of it, who’s using it in the company, and how they’re using it, all the way down to the SQL queries. Best of all, it’s simple to set up, and easy for both engineering and operations teams to use.
With Select Star’s data catalog, a single source of truth for your data is built in minutes, even across thousands of datasets. Try it out for free and double the length of your free trial today at dataengineeringpodcast.com/selectstar. You’ll also get a swag package when you continue on a paid plan. Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Summary The latest generation of data warehouse platforms have brought unprecedented operational simplicity and effectively infinite scale. Along with those benefits, they have also introduced a new consumption model that can lead to incredibly expensive bills at the end of the month. In order to ensure that you can explore and analyze your data without spending money on inefficient queries, Mingsheng Hong and Zheng Shao created Bluesky Data. In this episode they explain how their platform optimizes your Snowflake warehouses to reduce cost, as well as how it identifies improvements that you can make in your queries to reduce their contribution to your bill.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. This episode is brought to you by Acryl Data, the company behind DataHub, the leading developer-friendly data catalog for the modern data stack. Open Source DataHub is running in production at several companies like Peloton, Optum, Udemy, Zynga, and others. Acryl Data provides DataHub as an easy-to-consume SaaS product which has been adopted by several companies. Sign up for the SaaS product at dataengineeringpodcast.com/acryl. RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder. The most important piece of any data project is the data itself, which is why it is critical that your data source is high quality.
PostHog is your all-in-one product analytics suite, including product analysis, user funnels, feature flags, and experimentation, and it’s open source so you can host it yourself or let them do it for you! You have full control over your data, and their plugin system lets you integrate with all of your other data tools, including data warehouses and SaaS platforms. Give it a try today with their generous free tier at dataengineeringpodcast.com/posthog. Your host is Tobias Macey and today I’m interviewing Mingsheng Hong and Zheng Shao about Bluesky Data, where they are combining domain expertise and machine learning to optimize your cloud warehouse usage and reduce operational costs.

Interview

Introduction How did you get involved in the area of data management? Can you describe what Bluesky is and the story behind it?

What are the platforms/technologies that you are focused on in your current early stage? What are some of the other targets that you are considering once you validate your initial hypothesis?

Cloud cost optimization is an active area for application infrastructures as well. What are the corollaries and differences between compute and storage optimization strategies and what you are doing at Bluesky? How have your experiences at hyperscale companies using various combinations of cloud and on-premise data platforms informed your approach to the cost management problem?

R in Action, Third Edition

R is the most powerful tool you can use for statistical analysis. This definitive guide smooths R’s steep learning curve with practical solutions and real-world applications for commercial environments.

In R in Action, Third Edition you will learn how to:

Set up and install R and RStudio
Clean, manage, and analyze data with R
Use the ggplot2 package for graphs and visualizations
Solve data management problems using R functions
Fit and interpret regression models
Test hypotheses and estimate confidence intervals
Simplify complex multivariate data with principal components and exploratory factor analysis
Make predictions using time series forecasting
Create dynamic reports and stunning visualizations
Apply techniques for debugging programs and creating packages

R in Action, Third Edition makes learning R quick and easy. That’s why thousands of data scientists have chosen this guide to help them master the powerful language. Far from being a dry academic tome, every example you’ll encounter in this book is relevant to scientific and business developers, and helps you solve common data challenges. R expert Rob Kabacoff takes you on a crash course in statistics, from dealing with messy and incomplete data to creating stunning visualizations. This revised and expanded third edition contains fresh coverage of the new tidyverse approach to data analysis and R’s state-of-the-art graphing capabilities with the ggplot2 package.

About the Technology
Used daily by data scientists, researchers, and quants of all types, R is the gold standard for statistical data analysis. This free and open source language includes packages for everything from advanced data visualization to deep learning. Instantly comfortable for mathematically minded users, R easily handles practical problems without forcing you to think like a software engineer.

About the Book
R in Action, Third Edition teaches you how to do statistical analysis and data visualization using R and its popular tidyverse packages. In it, you’ll investigate real-world data challenges, including forecasting, data mining, and dynamic report writing. This revised third edition adds new coverage for graphing with ggplot2, along with examples for machine learning topics like clustering, classification, and time series analysis.

What’s Inside

Clean, manage, and analyze data
Use the ggplot2 package for graphs and visualizations
Techniques for debugging programs and creating packages
A complete learning resource for R and tidyverse

About the Reader
Requires basic math and statistics. No prior experience with R needed.

About the Author
Dr. Robert I. Kabacoff is a professor of quantitative analytics at Wesleyan University and a seasoned data scientist with more than 20 years of experience.

Quotes
“Kabacoff has outdone himself by significantly improving on the already excellent previous edition.” - Alain Lompo, ISO-Gruppe
“R in Action has been my go-to reference on R for years. The third edition contains timely updates on the tidyverse and other new tools. I would recommend this book without hesitation.” - Daniel Kenney-Jung, MD, Department of Pediatrics, Duke University
“Outstandingly well-written. The best book on R programming that I have ever read.” - Kelvin Meeks, International Technology Ventures
“Takes the reader through a series of essential methods from basic to complex. The only R book you will ever need.” - Martin Perry, Microsoft

podcast_episode
by Leroy Terrelonge (Moody's Investors Service), Lesley Ritter (Moody's Investors Service), Cris deRitis, Mark Zandi (Moody's Analytics), Jim Hempstead (Moody's Investors Service), Ryan Sweet

Jim Hempstead, Lesley Ritter, and Leroy Terrelonge from Moody's Investors Service join the podcast to discuss the rising concern of cyber risks and attacks. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Elasticsearch 8.x Cookbook - Fifth Edition

"Elasticsearch 8.x Cookbook" is your go-to resource for harnessing the full potential of Elasticsearch 8. This book provides over 180 hands-on recipes to help you efficiently implement, customize, and scale Elasticsearch solutions in your enterprise. Whether you're handling complex queries, analytics, or cluster management, you'll find practical insights to enhance your capabilities. What this book will help me do: Understand the advanced features of Elasticsearch 8.x, including X-Pack, for improving functionality and security. Master advanced indexing and query techniques to perform efficient and scalable data operations. Implement and manage Elasticsearch clusters effectively, including monitoring performance via Kibana. Integrate Elasticsearch seamlessly into Java, Scala, Python, and big data environments. Develop custom plugins and extend Elasticsearch to meet unique project requirements. Author(s): Alberto Paro is a seasoned Elasticsearch expert with years of experience in search technologies and enterprise solution development. As a professional developer and consultant, he has worked with numerous organizations to implement Elasticsearch at scale. Alberto brings his deep technical knowledge and hands-on approach to this book, ensuring readers gain practical insights and skills. Who is it for? This book is perfect for software engineers, data professionals, and developers working with Elasticsearch in enterprise environments. If you're seeking to advance your Elasticsearch knowledge, enhance your query-writing abilities, or integrate it into big data workflows, this book will be invaluable. Whether you're deploying Elasticsearch in e-commerce, applications, or analytics, you'll find the content purposeful and engaging.

We talked about: 

Gloria’s background
Working with MATLAB, R, C, Python, and SQL
Working at ICE
Job hunting after the bootcamp
Data engineering vs data science
Using Docker
Keeping track of job applications, employers, and questions
Challenges during the job search and transition
Concerns over data privacy
Challenges with salary negotiation
The importance of career coaching and support
Skills learned at Spiced
Retrospective on Gloria’s transition to data and advice
Top skills that helped Gloria get the job
Thoughts on cloud platforms
Thoughts on bootcamps and courses
Spiced graduation project
Standing out in a sea of applicants
The cohorts at Spiced
Conclusion

Links:

LinkedIn: https://www.linkedin.com/in/gloria-quiceno/
GitHub: https://github.com/gdq12

MLOps Zoomcamp: https://github.com/DataTalksClub/mlops-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Mark, Ryan, and Cris dive deep into the history, the causes, and the main indicators of recessions. Full transcript here. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.


Justin Borgman is the co-founder, Chairman, and CEO of Starburst, and has spent almost a decade in senior executive roles building new businesses in the data warehousing and analytics space. In this conversation with Tristan and Julia, Justin dives into the nuts and bolts of Trino, the open source distributed query engine, and explores how teams are adopting a data mesh architecture without making a mess. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.

There is so much to learn! If you’re anything like me, you’re overwhelmed by the number of books, articles, podcasts, online and offline courses, webinars, and other training opportunities out there. Today, we’re not short of learning materials, but we often lack the time and capacity to learn new things. But what if there’s a better way to learn? Enter the concept of “Ultralearning”, coined by best-selling author Scott Young. A few years ago, I read Scott’s book Ultralearning and it changed my life. Not only did Scott’s approach to learning increase my learning rate significantly, it also made the process a lot more enjoyable overall!

Scott is an impressive ultralearner who has used his advanced learning strategies to complete a 4-year computer science degree in 12 months, learn languages such as Spanish, Chinese, Korean, and Macedonian, and become a decent portrait artist. And then he wrote a book about it. In this episode of Leaders of Analytics, you will learn:

How Scott has used his learning principles to master very complex and diverse skills in a very short time
How we learn and retain information
How we can structure our learning for faster absorption and better retention
How Scott designs a learning strategy from scratch
Whether Malcolm Gladwell’s “10,000 hour rule” is true or BS
Strategies for learning hard and soft skills, and much more.

Scott's website (full of excellent learning resources): https://www.scotthyoung.com/
Scott's podcast: https://www.scotthyoung.com/blog/podcast/
Scott on Twitter: https://twitter.com/scotthyoung/
Scott on LinkedIn: https://www.linkedin.com/in/scott-h-young-867ab21/

Summary Industrial applications are one of the primary adopters of Internet of Things (IoT) technologies, with business-critical operations being informed by data collected across a fleet of sensors. Vopak is a business that manages storage and distribution of a variety of liquids that are critical to the modern world, and they have recently launched a new platform to gain more utility from their industrial sensors. In this episode Mário Pereira shares the system design that he and his team have developed for the collection, management, and analysis of sensor data, and how they have split the data processing and business logic responsibilities between physical terminals and edge locations, and centralized storage and compute.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Your host is Tobias Macey and today I’m interviewing Mário Pereira about building a data management system for globally distributed IoT sensors at Vopak.

Interview

Introduction How did you get involved in the area of data management? Can you describe what Vopak is and what kinds of information you rely on to power the business? What kinds of sensors and edge devices are you using?

What kinds of consistency or variance do you have between sensors across your locations?

How much computing power and storage space do you place at the edge?

What level of pre-processing/filtering is being done at the edge and how do you decide what information needs to be centralized? What are some examples of decision-making that happens at the edge?

Can you describe the platform architecture that you have built for collecting and processing sensor data?

What was your process for selecting and evaluating the various components?

How much tolerance do you have for missed messages/dropped data? How long are your data retention periods?

Mark, Ryan, and Cris discuss the latest data on U.S. consumer prices. The big topic is monetary policy and what the Fed should do and whether the economy is more or less sensitive to changes in interest rates. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight. 


In this episode, Bryce and Conor continue their conversation with Ben Deane about C++ Algorithms!

Twitter: ADSP: The Podcast, Conor Hoekstra, Bryce Adelstein Lelbach

About the Guest: For Ben Deane, C++ wasn’t even among the first 10 languages that he learned on his programming journey, but it’s been the one that has paid the bills for the last 20-odd years. He spent most of that time in the games industry; many of the games he worked on used to be fondly remembered but now he’s accepted that they are probably mostly forgotten. These days he works in the finance industry writing high-frequency trading platforms in the most modern C++ that compilers can support. In his spare time he watches a lot of YouTube’s educational sector, practices the Japanese art of tsundoku, reads about the history of programming, avoids doing DIY, and surprises his wife by waking in the middle of the night yelling, “of course, it’s a monad!” before going back to sleep and dreaming of algorithms.

Show Notes
Date Recorded: 2022-04-19
Date Released: 2022-05-13
ADSP Episode 72: C++ Algorithm Family Feud!
ADSP Episode 75: C++ Algorithms with Ben Deane (Part 1)
ADSP Episode 76: C++ Algorithms with Ben Deane (Part 2)
quick-bench.com
Tyler Weaver Tweet
C++ std::sort
C++ std::nth_element
C++ std::max_element
C++ std::reduce
C++ std::transform_reduce
C++ std::accumulate
C++ std::shuffle
C++ std::random_shuffle
C++ std::iota
C++ std::partition
HyperLogLog Algorithm
CppCon 2017: Nicholas Ormrod “Fantastic Algorithms and Where To Find Them”
Algebird
“Add ALL the things: abstract algebra meets analytics” by Avi Bryant (2013)

Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons Attribution 3.0 Unported (CC BY 3.0)
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8

Datatopics is hosted by Kevin Missoorten and typically joined by multiple guests. In a side step from our regular Tour de Tools series, we talk about the fuzzy and misunderstood concepts in the world of data, analytics, and AI. In miniseries format, we discuss the different angles of these fuzzy datatopics to get to the bottom of things. In this second episode we explore the architectural implications of going "Data Mesh".

Music: The Gentlemen - DivKid

An estimated 80 to 90 percent of the data in an enterprise is text. Sadly, this rich information is mostly neglected for analytical purposes. Textual data is typically full of information, but also very complex to interpret computationally and statistically. Why? Because textual data is both content and context. The same words and sentences can have very different meanings depending on the context. Textual data is truly a goldmine, but how can we mine it without being digital superpowers like Google, Microsoft, or Facebook? To answer this question and many more relating to interpretation of textual data, I recently spoke to Bill Inmon. Bill is the Founder, Chairman, and CEO of Forest Rim Technology and author of more than 60 books on data warehousing. He is often described as the Father of Data Warehousing due to his pioneering efforts in making data and data technologies available to organisations across all industries and sizes. In this episode of Leaders of Analytics, we discuss:

How Bill became the Father of Data Warehousing
The history of data warehousing and the most exciting developments in this space today
The typical challenges holding us back from extracting value from textual data
The concept of the “Textual ETL” and its benefits over other text data storage and analytics approaches
Why NLP is not the best approach for textual data analytics
The biggest opportunities for textual analytics today and in the future, and much more.

Connect with Bill:
Forest Rim Technology: https://www.forestrimtech.com/
Bill on LinkedIn: https://www.linkedin.com/in/billinmon/

We in the West have watched Russia's invasion of Ukraine with disbelief and horror. How could this happen to a European country in the 21st century? Is there any justifiable rationale for the wanton destruction of people and property there? As we ponder these questions, our data colleagues in Ukraine have experienced the war firsthand.

To help us get a handle on Ukraine's role in the data economy and how teams based there are coping with Russia's military onslaught, Wayne interviews two software executives today who share how the war has affected their companies and how they are adapting to the evolving situation.

Dragos Georgescu is vice president and chief technology officer of DataClarity, an innovative data analytics vendor with a development shop in Lviv, Ukraine.

Bogdan Steblyanko is CEO of CHI Software, a software development company based in Ukraine with more than 500 employees spread across four development centers, including hard-hit Kharkiv in the east, which is the company's headquarters.

Summary Many of the events, ideas, and objects that we try to represent through data have a high degree of connectivity in the real world. These connections are best represented as graphs, which allow efficient and accurate analysis of the relationships they encode. TigerGraph is a leading database that offers a highly scalable and performant native graph engine for powering graph analytics and machine learning. In this episode Jon Herke shares how TigerGraph customers are taking advantage of those capabilities to achieve meaningful discoveries in their fields, the utilities that it provides for modeling and managing your connected data, and some of his own experiences working with the platform before joining the company.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you’re not alone.
Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit http://www.dataengineeringpodcast.com/montecarlo?utm_source=rss&utm_medium=rss to learn more. Your host is Tobias Macey and today I’m interviewing Jon Herke about TigerGraph, a distributed native graph database

Interview

Introduction How did you get involved in the area of data management? Can you describe what TigerGraph is and the story behind it? What are some of the core use cases that you are focused on supporting? How has TigerGraph changed over the past 4 years since I spoke with Todd Blaschka at the Open Data Science Conference? How has the ecosystem of graph databases changed in usage and design in recent years? What are some of the persi

podcast_episode
by Dante DeAntonio (Moody's Analytics), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics), Ryan Sweet

Mark, Ryan, and Cris welcome back two colleagues and regulars on the podcast, Marisa DiNatale and Dante DeAntonio of Moody's Analytics, to discuss the April U.S. employment report. Full episode transcript Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight. 


Today your favorite data podcast brings you what it's like to work in Data Science and Advanced Analytics at Bain & Company, one of the largest and most respected consulting firms in the world! For this conversation, we brought in Marianne Rodríguez (Senior Impact Lead), Danilo Carvalho (Data Science Manager), Martín Villanueva (Data Science Manager), and Felipe Fiamozzini (Expert Associate Partner).

In this episode they talk about the four-day workweek at Bain, how teams in different countries work together, the very cool projects they work on, and much more. Don't miss it!

Our guests: Marianne Rodríguez, Danilo Carvalho, Martín Villanueva, Felipe Fiamozzini

Follow the link to the post for the references and our guests' social media profiles: https://medium.com/data-hackers/advanced-analytics-na-bain-company-data-hackers-podcast-55-c23ece5cdba8

Datatopics is hosted by Kevin Missoorten and typically joined by multiple guests. In a side step from our regular Tour de Tools series, we talk about the fuzzy and misunderstood concepts in the world of data, analytics, and AI. In miniseries format, we discuss the different angles of these fuzzy datatopics to get to the bottom of things. In this first episode we do a high-level exploration of the topic discussed at every data conference this year: “Data Mesh”.

Music: The Gentlemen - DivKid

Amit Prakash is Co-founder and CTO at ThoughtSpot. He has a deep background in search, having previously led the AdSense engineering team at Google and served on the early Bing team at Microsoft. In this conversation with Tristan and Julia, Amit gets real about the promise of AI in data: which applications are being widely used today, and which are still a few years out? For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs.