talk-data.com

Topic

Analytics

Tags: data_analysis, insights, metrics (4552 tagged)

Activity Trend

2020-Q1 through 2026-Q1, peak 398 activities/quarter

Activities

4552 activities · Newest first

Kevin Hassett, Vice President and Managing Director of the Lindsey Group and former Chair of the Council of Economic Advisers under President Trump, joins Mark, Ryan, and Cris to discuss the risk of a recession, inflation, and the tax proposals in President Biden's Build Back Better legislation. Full episode transcript.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Summary The perennial question of data warehousing is how to model the information that you are storing. This has given rise to methods as varied as star and snowflake schemas, data vault modeling, and wide tables. The challenge with many of those approaches is that they are optimized for answering known questions but brittle and cumbersome when exploring unknowns. In this episode Ahmed Elsamadisi shares his journey to find a more flexible and universal data model in the form of the "activity schema" that is powering the Narrator platform, and how it has allowed his customers to perform self-service exploration of their business domains without being blocked by schema evolution in the data warehouse. This is a fascinating exploration of what can be done when you challenge your assumptions about what is possible.
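As a rough illustration of the "activity schema" idea described above, disparate event tables can be collapsed into a single time-ordered activity stream keyed on customer, timestamp, and activity name. The table and column names below are simplified assumptions for illustration, not Narrator's exact specification:

```python
from datetime import datetime

# Toy source events from two different systems.
orders = [{"customer": "a@x.com", "ordered_at": datetime(2021, 5, 1), "total": 90.0}]
emails = [{"customer": "a@x.com", "opened_at": datetime(2021, 5, 3)}]

def to_activity(customer, ts, activity, revenue_impact=None):
    """One row of the single activity-stream table: every business event
    becomes (who, when, what), plus a few optional feature columns."""
    return {"customer": customer, "ts": ts, "activity": activity,
            "revenue_impact": revenue_impact}

stream = (
    [to_activity(o["customer"], o["ordered_at"], "completed_order", o["total"]) for o in orders]
    + [to_activity(e["customer"], e["opened_at"], "opened_email") for e in emails]
)
stream.sort(key=lambda row: row["ts"])  # one append-only, time-ordered table

# New questions become temporal queries over this one table instead of
# schema changes, e.g. "first activity per customer":
first = {}
for row in stream:
    first.setdefault(row["customer"], row["activity"])
```

The point of the single-table shape is that exploring an unknown question never requires a new join path or a schema migration, only a different traversal of the same stream.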

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3,000 on an annual subscription.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold.

Your host is Tobias Macey and today I’m interviewing Ahmed Elsamadisi about Narrator, a platform to enable anyone to go from question to data-driven decision in minutes.
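The row-level "data diff" idea mentioned in the announcement can be illustrated with a toy key-based comparison of two table versions. This is a generic sketch of the technique, not Datafold's actual implementation:

```python
def diff_tables(before, after, key):
    """Compare two versions of a table (lists of dicts) by primary key,
    returning added rows, removed rows, and (old, new) pairs for changes."""
    b = {row[key]: row for row in before}
    a = {row[key]: row for row in after}
    added = [a[k] for k in a.keys() - b.keys()]
    removed = [b[k] for k in b.keys() - a.keys()]
    changed = [(b[k], a[k]) for k in a.keys() & b.keys() if a[k] != b[k]]
    return added, removed, changed

# Simulate the output of an ETL job before and after a code change.
before = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
after = [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}, {"id": 3, "amount": 5}]
added, removed, changed = diff_tables(before, after, "id")
```

In practice a tool would run this comparison inside the warehouse and summarize it statistically, but the core idea is the same key-wise join.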

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Narrator is and the story behind it?
What are the challenges that you have seen organizations encounter when attempting to make analytics a self-serve capability?
What are the use cases that you are focused on?
How does Narrator fit within the data workflows of an organization?
How is the Narrator platform implemented?

How has the design and focus of the technology evolved since you first started working on Narrator?

The core element of the analyses that you are building is the "activity schema". Can you describe the design process that led you to that format?

What are the challenges that are posed by more widely used modeling techniques such as star/snowflake schemas?

I'm tired of data elitism. I want to be inclusive. 

If you are in the data space, join me in being inclusive. 

Join me on the journey at DataCareerJumpstart.com

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://DataCareerJumpstart.com/daa

Essential PySpark for Scalable Data Analytics

Dive into the world of scalable data processing with 'Essential PySpark for Scalable Data Analytics'. This book is a comprehensive guide that helps beginners understand and utilize PySpark to process, analyze, and draw insights from large datasets effectively. With hands-on tutorials and clear explanations, you will gain the confidence to tackle big data analytics challenges.

What this book will help me do: Understand and apply the distributed computing paradigm for big data. Learn to perform scalable data ingestion, cleansing, and preparation using PySpark. Create and utilize data lakes and the Lakehouse paradigm for efficient data storage and access. Develop and deploy machine learning models with scalability in mind. Master real-time analytics pipelines and create impactful data visualizations.

Author(s): Nudurupati is an experienced data engineer and educator, specializing in distributed systems and big data technologies. With years of practical experience in the field, the author brings a clear and approachable teaching style to technical topics and has designed this book to be both practical and inspirational for aspiring data practitioners.

Who is it for? This book is ideal for data professionals, including data scientists, engineers, and analysts, looking to scale their data analytics processes. It assumes familiarity with basic data science concepts and Python, as well as some experience with SQL-like data analysis. It is particularly suitable for individuals aiming to expand their knowledge of distributed computing and PySpark to handle big data challenges. Achieving scalable and efficient data solutions is at the core of this guide.

If you dream of using analytics to optimise your customer interactions and squeeze additional value out of your existing operations, then this episode is for you! Today, most large services businesses have established data science functions that churn out countless reports, dashboards, customer insights packs, machine learning models, forecasts and predictions. With all this information to hand, you would hope that front-line operations are making data-driven decisions across the board. But alas, many of these same businesses struggle to turn their analytics into more than glossy PowerPoint packs that describe what could be done. Often, this is because the technical implementation of data science solutions runs into resource constraints or remains unsupported by IT departments. So, how can we successfully make use of our analytical output in our front-line operations without spending eons creating overly complex systems that never quite deliver? To answer this question, I recently spoke to Jason Tan, who is an expert in operationalising data science solutions that deliver positive customer outcomes and real financial results. Jason is the managing director of consulting group Data Driven Analytics and an expert in optimising customer experience, pricing and long-term customer value.

In this episode of Leaders of Analytics, we discuss:

How to use analytics to optimise your customer interactions
How to identify the most valuable data science use cases in your organisation
How Jason has created successful data science solutions around legacy IT platforms
Whether you should buy off-the-shelf pricing software or build your own solution

Automating Analytics

Do you have a method for seeing all the data that passes through your organization? The need to democratize access to data and analytics, automate complex and tedious business processes, and amplify human output has led to analytic process automation (APA). Thousands of organizations across nearly every business and industry vertical use this software to accelerate data-driven business outcomes. This report examines the power of APA using technology, business, and real-world examples. If you're a technical business, analytics, or business intelligence leader, you'll learn how to use APA to tackle complex problems, increase productivity, and improve efficiency. You'll discover what APA means for your business and for you.

This report explores:

The importance of data: understand how data is transformed into information and insights for making business decisions
Gathering data with APA: learn how APA differs from your current process
Data democratization: grant data access to employees and empower them to analyze specific tasks and performance
Data reporting: learn how APA blends data tables, fields, and values to help you search for insights at a granular level
Analytics: explore new tools that use AI and ML to improve the analytic process

Modern Analytics Platforms

From a global pandemic to extreme weather, the events of 2020 and 2021 have caused organizations to make quick and constant adjustments to their strategy and operations. This transformation is likely to continue and have a major impact on analytics. Not only do respondents to Experian's annual Global Data Management survey confirm more demand for data insights, but most of them also believe the lack of agility hurt their organization's responses to fast-changing business needs. With this O'Reilly report, you'll learn how organizations have begun to take new approaches to analytics for business reinvention and digital transformation. Chief analytics and data officers and data analytics, data science, and data visualization leaders will explore converged analytics and find out how it differs from legacy and current analytics approaches. You'll see where your organization stands in its journey to convergence, and what you need to do next.

This report helps you:

Examine how three organizations in different industries and with different objectives have benefited from modern analytics
Learn how analytics has evolved to support greater business agility at scale
Examine the alignment of people, processes, tools, and data in converged analytics
Learn the five stages of analytical competition and six dimensions for benchmarking maturity
Explore practices that you can adopt to improve your analytics capabilities and your agility

Summary The market for business intelligence has been going through an evolutionary shift in recent years. One of the driving forces for that change has been the rise of analytics engineering powered by dbt. Lightdash has fully embraced that shift by building an entire open source business intelligence framework that is powered by dbt models. In this episode Oliver Laslett describes why dashboards aren’t sufficient for business analytics, how Lightdash promotes the work that you are already doing in your data warehouse modeling with dbt, and how they are focusing on bridging the divide between data teams and business teams and the requirements that they have for data workflows.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Are you bored with writing scripts to move data into SaaS tools like Salesforce, Marketo, or Facebook Ads? Hightouch is the easiest way to sync data into the platforms that your business teams rely on. The data you’re looking for is already in your data warehouse and BI tools. Connect your warehouse to Hightouch, paste a SQL query, and use their visual mapper to specify how data should appear in your SaaS systems. No more scripts, just SQL. Supercharge your business teams with customer data using Hightouch for Reverse ETL today. Get started for free at dataengineeringpodcast.com/hightouch.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold.

Your host is Tobias Macey and today I’m interviewing Oliver Laslett about Lightdash, an open source business intelligence system powered by your dbt models.

Interview

Introduction How did you get involved in the area of data management? Can you describe what Lightdash is and the story behind it?

What are the main goals of the project? Who are the target users, and how has that profile informed your feature priorities?

Business intelligence is a market that has gone through several generational shifts, with products targeting numerous personas and purposes. What are the capabilities that make Lightdash stand out from the other options? Can you describe how Lightdash is architected?

How have the design and goals of the system changed or evolved since you first began working on it? What have been the most challenging engineering problems that you have dealt with?

How does the approach that you are taking with Lightdash compare to systems such as Transform and Metriql that aim to provide a dedicated metrics layer? Can you describe the workflow for som

Mark, Ryan, and Cris welcome Gaurav Ganguly, a Senior Director of Economic Research at Moody's Analytics and Chris Lafakis, a Director of Economic Research. They discuss global energy markets, the effect on the economy and where energy prices are headed. Full episode transcript found here. 


Beginning Apache Spark 3: With DataFrame, Spark SQL, Structured Streaming, and Spark Machine Learning Library

Take a journey toward discovering, learning, and using Apache Spark 3.0. In this book, you will gain expertise on the powerful and efficient distributed data processing engine inside of Apache Spark; its user-friendly, comprehensive, and flexible programming model for processing data in batch and streaming; and the scalable machine learning algorithms and practical utilities to build machine learning applications.

Beginning Apache Spark 3 begins by explaining different ways of interacting with Apache Spark, such as Spark concepts and architecture and the Spark unified stack. Next, it offers an overview of Spark SQL before moving on to its advanced features. It covers tips and techniques for dealing with performance issues, followed by an overview of the structured streaming processing engine. It concludes with a demonstration of how to develop machine learning applications using Spark MLlib and how to manage the machine learning development lifecycle. This book is packed with practical examples and code snippets to help you master concepts and features immediately after they are covered in each section. After reading this book, you will have the knowledge required to build your own big data pipelines, applications, and machine learning applications.

What You Will Learn: Master the Spark unified data analytics engine and its various components, which work in tandem to provide a scalable, fault-tolerant, and performant data processing engine. Leverage the user-friendly and flexible programming model to perform simple to complex data analytics using DataFrames and Spark SQL. Develop machine learning applications using Spark MLlib. Manage the machine learning development lifecycle using MLflow.

Who This Book Is For: Data scientists, data engineers, and software developers.

Data Engineering with Apache Spark, Delta Lake, and Lakehouse

Data Engineering with Apache Spark, Delta Lake, and Lakehouse is a comprehensive guide packed with practical knowledge for building robust and scalable data pipelines. Throughout this book, you will explore the core concepts and applications of Apache Spark and Delta Lake, and learn how to design and implement efficient data engineering workflows using real-world examples.

What this book will help me do: Master the core concepts and components of Apache Spark and Delta Lake. Create scalable and secure data pipelines for efficient data processing. Learn best practices and patterns for building enterprise-grade data lakes. Discover how to operationalize data models into production-ready pipelines. Gain insights into deploying and monitoring data pipelines effectively.

Author(s): Kukreja is a seasoned data engineer with over a decade of experience working with big data platforms. He specializes in implementing efficient and scalable data solutions to meet the demands of modern analytics and data science. Writing with clarity and a practical approach, he aims to provide actionable insights that professionals can apply to their projects.

Who is it for? This book is tailored for aspiring data engineers and data analysts who wish to delve deeper into building scalable data platforms. It is suitable for those with basic knowledge of Python, Spark, and SQL who are seeking to learn Delta Lake and advanced data engineering concepts. Readers should be eager to develop practical skills for tackling real-world data engineering challenges.

Benn is Chief Analytics Officer and a Co-founder at Mode Analytics, but you may know him from his Substack newsletter (benn.substack.com), where each Friday he dives into a semi-controversial topic (recent examples: "Is BI Dead?" and "BI is Dead").  In this episode, Benn, Tristan & Julia finally hash out some of these debates IRL: what is the modern data stack, why is the metrics layer important, and what's the point of all of this? For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs.

Summary The focus of the past few years has been to consolidate all of the organization’s data into a cloud data warehouse. As a result there have been a number of trends in data that take advantage of the warehouse as a single focal point. Among those trends is the advent of operational analytics, which completes the cycle of data from collection, through analysis, to driving further action. In this episode Boris Jabes, CEO of Census, explains how the work of synchronizing cleaned and consolidated data about your customers back into the systems that you use to interact with those customers allows for a powerful feedback loop that has been missing in data systems until now. He also discusses how Census makes that synchronization easy to manage, how it fits with the growth of data quality tooling, and how you can start using it today.
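The operational-analytics loop described above, pulling modeled records out of the warehouse and pushing them back into the systems where people work, can be sketched generically. Everything here (function names, field names, the fake CRM) is an illustrative assumption, not Census's actual API:

```python
def fetch_modeled_customers():
    """Stand-in for a SQL query against the warehouse's cleaned model."""
    return [
        {"email": "a@x.com", "lifetime_value": 420, "churn_risk": "low"},
        {"email": "b@x.com", "lifetime_value": 35, "churn_risk": "high"},
    ]

class FakeCRM:
    """Stand-in for a SaaS destination (CRM, ad platform, support tool)."""
    def __init__(self):
        self.records = {}

    def upsert(self, key, attrs):
        self.records.setdefault(key, {}).update(attrs)

def sync(crm):
    """The reverse-ETL step: push warehouse truth into the SaaS tool,
    keyed on a stable identifier so re-running the sync is idempotent."""
    for row in fetch_modeled_customers():
        crm.upsert(row["email"], {k: v for k, v in row.items() if k != "email"})

crm = FakeCRM()
sync(crm)
sync(crm)  # running twice leaves the same state (idempotent upserts)
```

Keying every write on a stable identifier is what closes the feedback loop safely: the warehouse stays the source of truth and repeated syncs converge rather than duplicate.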

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you’re not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the world’s first end-to-end, fully automated Data Observability Platform! In the same way that application performance monitoring ensures reliable software and keeps application downtime at bay, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, ETL, and business intelligence, reducing time to detection and resolution from weeks or days to just minutes. Start trusting your data with Monte Carlo today!

Visit dataengineeringpodcast.com/impact today to save your spot at IMPACT: The Data Observability Summit, a half-day virtual event featuring the first U.S. Chief Data Scientist, the founder of the Data Mesh, the creator of Apache Airflow, and more data pioneers spearheading some of the biggest movements in data. The first 50 to RSVP with this link will be entered to win an Oculus Quest 2, an advanced all-in-one virtual reality headset. RSVP today – you don’t want to miss it!

Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3,000 on an annual subscription.

Your host is Tobias Macey and today I’m interviewing Boris Jabes about Census and the growing category of operational analytics.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Census is and the story behind it?
The terms "reverse ETL" and "operational analytics" have started being used for similar, and often interchangeable, purposes. What are your thoughts on the semantic and concrete differences between these phrases?
What are the motivating factors for adding operational analytics or "data activation" to a

Joe Kennedy, senior principal economist at MITRE, joins Mark, Ryan, and Cris to discuss the different schools of thought on antitrust and market competition. They also discuss Big Tech and Big Pharma. View episode transcript here. Recommended Reads: Ending Poverty: Changing Behavior, Guaranteeing Income, and Transforming Government, by Joseph Kennedy, https://www.amazon.com/Ending-Poverty-Guaranteeing-Transforming-Government/dp/074255872X.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

We talked about:

Rishabh's background
Rishabh's experience as a sales engineer
Prescriptive analytics vs predictive analytics
The problem with the term 'data science'
Is machine learning a part of analytics?
Day-to-day of people that work with ML
Rule-based systems to machine learning
The role of analysts in rule-based systems and in data teams
Do data analysts know data better than data scientists?
Data analysts' documentation and recommendations
Iterative work: data scientists/ML vs data analysts
Analyzing results of experiments
Overlaps between machine learning and analytics
Using tools to bridge the gap between ML and analytics
Do companies overinvest in ML and underinvest in analytics?
Do companies hire data scientists while forgetting to hire data analysts?
The difficulty of finding senior data analysts
Is data science sexier than data analytics?
Should ML and data analytics teams work together or independently?
Building data teams
Rishabh's newsletter: MLOpsRoundup

Links:

https://mlopsroundup.substack.com/ https://twitter.com/rish_bhargava

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Power Query Cookbook

The "Power Query Cookbook" is your comprehensive guide to mastering data preparation and transformation using Power Query. With this book, you'll learn to connect to data sources, reshape data to fit business requirements, and use both no-code transformations and custom M code solutions to unlock the full potential of your data. Step-by-step examples will guide you through optimizing dataflows in Power BI.

What this book will help me do: Master connecting to various data sources and performing intuitive transformations using Power Query. Learn to reshape and enrich data to meet complex business requirements efficiently. Explore advanced capabilities of Power Query, including M code and online dataflows. Develop custom data transformations with a blend of GUI-based and M code techniques. Optimize the performance of Power BI Dataflows using best practices and diagnostics tools.

Author(s): Janicijevic is a seasoned expert in data analytics, specializing in Microsoft Power BI and Power Query. With years of experience in data engineering and a passion for teaching, the author brings a clear, actionable, and results-driven approach to demystifying complex technical concepts, empowering professionals with the tools they need to excel in data-driven decision-making.

Who is it for? This book is designed for data analysts, business intelligence developers, and data engineers aiming to enhance their skills in data preparation using Power Query. If you have a basic understanding of Power BI and want to delve into integrating and optimizing data from multiple sources, this book is for you. It's ideal for professionals seeking practical insights and techniques to improve data transformations. Novices with some exposure to BI tools will also find the material accessible and rewarding.

Storage Systems

Storage Systems: Organization, Performance, Coding, Reliability and Their Data Processing was motivated by the 1988 Redundant Array of Inexpensive/Independent Disks (RAID) proposal to replace large form factor mainframe disks with an array of commodity disks. Disk loads are balanced by striping data into strips, with one strip per disk, and storage reliability is enhanced via replication or erasure coding, which at best dedicates k strips per stripe to tolerate k disk failures. Flash memories have resulted in a paradigm shift, with Solid State Drives (SSDs) replacing Hard Disk Drives (HDDs) for high performance applications. RAID and flash have resulted in the emergence of new storage companies, namely EMC, NetApp, SanDisk, and Pure Storage, and a multibillion-dollar storage market. Key new conferences and publications are reviewed in this book.

The goal of the book is to expose students, researchers, and IT professionals to the more important developments in storage systems, while covering the evolution of storage technologies, traditional and novel databases, and novel sources of data. We describe several prototypes: FAWN at CMU, RAMCloud at Stanford, and Lightstore at MIT; Oracle's Exadata, AWS' Aurora, Alibaba's PolarDB, and the Fungible Data Center; and the author's paper designs for cloud storage, namely heterogeneous disk arrays and hierarchical RAID.

The book: Surveys storage technologies and lists sources of data: measurements, text, audio, images, and video. Familiarizes the reader with paradigms to improve performance: caching, prefetching, log-structured file systems, and log-structured merge-trees (LSMs). Describes RAID organizations and analyzes their performance and reliability. Conserves storage via data compression, deduplication, and compaction, and secures data via encryption. Specifies the implications of storage technologies on performance and power consumption. Exemplifies database parallelism for big data, analytics, and deep learning via multicore CPUs, GPUs, FPGAs, and ASICs, e.g., Google's Tensor Processing Units.
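The striping-plus-redundancy idea summarized in the blurb (dedicating k strips per stripe to tolerate k disk failures) can be illustrated for the k = 1 case with simple byte-wise XOR parity, as used in RAID-5. This is a toy sketch of the coding principle, not a real RAID implementation:

```python
def xor_parity(strips):
    """Parity strip: byte-wise XOR of all the data strips in one stripe."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving, parity):
    """Recover the single lost strip: XOR-ing the parity with the
    surviving strips cancels them out, leaving the missing strip."""
    return xor_parity(surviving + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]  # data strips, one per disk
parity = xor_parity(stripe)           # stored on a fourth disk

# Disk 1 fails; its strip is rebuilt from the survivors and the parity.
lost = stripe[1]
rebuilt = reconstruct([stripe[0], stripe[2]], parity)
```

Tolerating k > 1 failures requires stronger erasure codes (e.g. Reed-Solomon), but the recovery logic follows the same pattern: combine the surviving strips with the redundant ones to solve for what was lost.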

The U.S. bond market is showing angst about the debt ceiling. While the debt ceiling will likely be raised, there is a history of waiting until the last minute. Ryan Sweet provides actionable insights to help you better manage your credit portfolio during this uncertain time. Episode's slides can be found here.


AI and machine learning are seen by many as capabilities with enormous potential for unlocking digital personalisation and customer empathy at scale. Organisations that get this right are disrupting industries and leaving old-school competitors behind. Just think of what global businesses like Netflix, Amazon and Facebook have been able to achieve with data-driven personalisation. Yet, for many organisations, the promise of AI seems elusive or at least very hard to achieve. Many businesses are not realising the full potential of their stores of data, simply because they don't know how. To help us understand the potential of AI and ML for customer experience management, I recently spoke to my friend and co-author of Demystifying AI for the Enterprise, Dr Kirk Borne. Kirk is a truly unique individual who combines his incredible intelligence with a real passion for his chosen vocation. Having graduated with a PhD in astrophysics, he spent 20 years working at NASA before moving into the academic and corporate worlds. He spent 12 years as Professor of Astrophysics and Computational Science, where he created the world's first data science undergraduate degree. He has since moved into data science consulting, where he has been an executive for the past 6 years. Kirk has a social media following of well over 300,000, which is a testament to the huge amount of value he creates through content creation and knowledge sharing.

In this episode of Leaders of Analytics, we discuss:

What data science, AI and machine learning can bring to digital and analogue customer experiences
The most valuable applications of AI for customer experience management
How AI can be used to amplify the abilities of front-line staff
Leading applications of AI-driven customer experience
The technical and organisational challenges that must be overcome to move up the analytics maturity curve
The importance of ModelOps in operationalising data science

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet, and Betsey Stevenson (University of Michigan's Ford School)

Betsey Stevenson, Professor of public policy and economics at the University of Michigan's Ford School, joins Mark, Ryan, and Cris to dissect the September employment report, the future of working from home, and Biden's economic agenda. Also, Mark has a podcast, a YouTube channel, and now a Twitter handle. Follow @markzandi. View full episode transcript here.

