talk-data.com

Topic

Analytics

data_analysis insights metrics

4552 tagged

Activity Trend

398 peak/qtr (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

CompTIA Data+ DA0-001 Exam Cram

CompTIA® Data+ DA0-001 Exam Cram is an all-inclusive study guide designed to help you pass the CompTIA Data+ DA0-001 exam. Prepare for test day success with complete coverage of exam objectives and topics, plus hundreds of realistic practice questions. Extensive prep tools include quizzes, Exam Alerts, and our essential last-minute review CramSheet. The powerful Pearson Test Prep practice software provides real-time assessment and feedback with two complete exams. Covers the critical information needed to score higher on your Data+ DA0-001 exam!

  • Understand data concepts, environments, mining, analysis, visualization, governance, quality, and controls
  • Work with databases, data warehouses, database schemas, dimensions, data types, structures, and file formats
  • Acquire data and understand how it can be monetized
  • Clean and profile data so it's more accurate, consistent, and useful
  • Review essential techniques for manipulating and querying data
  • Explore essential tools and techniques of modern data analytics
  • Understand both descriptive and inferential statistical methods
  • Get started with data visualization, reporting, and dashboards
  • Leverage charts, graphs, and reports for data-driven decision-making
  • Learn important data governance concepts ...
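
To make the profiling and descriptive-statistics objectives above concrete, here is a minimal, hypothetical Python/pandas sketch; the sample data and column names are invented for illustration and do not come from the book.

```python
# Hypothetical example (not from the book): profiling, cleaning, and
# describing a small dataset with pandas.
import pandas as pd

# Invented sales extract standing in for real data.
df = pd.DataFrame({
    "region": ["East", "West", "East", None, "West"],
    "units":  [12, 7, 5, 9, None],
    "price":  [9.99, 9.99, 14.50, 14.50, 9.99],
})

# Profile: shape, column data types, and missing values per column.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())

# Clean: drop rows missing the key column, fill missing measures with the median.
clean = df.dropna(subset=["region"]).fillna({"units": df["units"].median()})

# Describe: basic descriptive statistics (count, mean, std, quartiles).
print(clean[["units", "price"]].describe())
```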

Summary

The most interesting and challenging bugs always happen in production, but recreating them is a constant challenge due to differences in the data that you are working with. Building your own scripts to replicate data from production is time-consuming and error-prone. Tonic is a platform designed to solve the problem of having reliable, production-like data available for developing and testing your software, analytics, and machine learning projects. In this episode Adam Kamor explores the factors that make this such a complex problem to solve, the approach that he and his team have taken to turn it into a reliable product, and how you can start using it to replace your own collection of scripts.
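
As a rough sketch of the general technique the episode covers (masking identifiers while keeping production-like shape), the following Python example uses the Faker library; it is an illustrative stand-in under invented field names, not Tonic's actual API or implementation.

```python
# Illustrative sketch only: de-identify production-like rows with Faker.
# This is NOT Tonic's API; field names and rows are invented.
from faker import Faker

fake = Faker()
Faker.seed(42)  # deterministic fakes so test runs are repeatable

production_rows = [
    {"id": 1, "name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"},
    {"id": 2, "name": "Alan Turing",  "email": "alan@example.com", "plan": "free"},
]

def mask_row(row: dict) -> dict:
    # Keep non-identifying fields (id, plan) so joins and distributions hold;
    # replace direct identifiers with synthetic values.
    return {**row, "name": fake.name(), "email": fake.email()}

safe_rows = [mask_row(r) for r in production_rows]
print(safe_rows)
```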

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Truly leveraging and benefiting from streaming data is hard - the data stack is costly, difficult to use and still has limitations. Materialize breaks down those barriers with a true cloud-native streaming database - not simply a database that connects to streaming systems. With a PostgreSQL-compatible interface, you can now work with real-time data using ANSI SQL including the ability to perform multi-way complex joins, which support stream-to-stream, stream-to-table, table-to-table, and more, all in standard SQL. Go to dataengineeringpodcast.com/materialize today and sign up for early access to get started. If you like what you see and want to help make it better, they're hiring across all functions!

Data and analytics leaders, 2023 is your year to sharpen your leadership skills, refine your strategies and lead with purpose. Join your peers at Gartner Data & Analytics Summit, March 20 – 22 in Orlando, FL for 3 days of expert guidance, peer networking and collaboration. Listeners can save $375 off standard rates with code GARTNERDA. Go to dataengineeringpodcast.com/gartnerda today to find out more.

Your host is Tobias Macey and today I'm interviewing Adam Kamor about Tonic, a service for generating data sets that are safe for development, analytics, and machine learning

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what Tonic is and the story behind it?
  • What are the core problems that you are trying to solve?
  • What are some of the ways that fake or obfuscated data is used in development and analytics workflows?
  • Challenges of reliably subsetting data
  • Impact of ORMs and bad habits developers get into with database modeling
  • Can you describe how Tonic is implemented?
  • What are the units of composition that you are building to allow for evolution and expansion of your product?
  • How have the design and goals of the platform evolved since you started working on it?
  • Can you describe some of the different workflows that customers build on top of your various tools?
  • What are the most interesting, innovative, or unexpected ways that you have seen Tonic used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Tonic?
  • When is Tonic the wrong choice?
  • What do you have planned for the future of Tonic?

Contact Info

LinkedIn @AdamKamor on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Tonic

Djinn

Django

podcast_episode
by Jason Furman (Harvard Kennedy School), Cris deRitis, Mark Zandi (Moody's Analytics)

We welcome Jason Furman, Aetna Professor of the Practice of Economic Policy jointly at Harvard Kennedy School to discuss what could potentially be a catastrophic default of the national debt. We'll get into President Biden's economic policies, inflation, and recession over the next 12-18 months. Jason came to play with some humor, positivity, and a great stat. Full episode transcript To learn more about Moody's Analytics Summit 2023 & register, click here.  Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight

Questions or Comments, please email us at [email protected]. We would love to hear from you.    To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

We enter 2023 in a haze of uncertainty. Enterprises must rationalize analytics projects, shift to lower-risk use cases, and control cloud costs. They also must measure the ROI of analytics projects and use data governance to reduce business risk. Published at: https://www.eckerson.com/articles/analyzing-a-downturn-five-principles-for-data-analytics-in-2023

On today’s episode, we’re joined by Phyl Terry, Founder and CEO of Collaborative Gain, a community of smart, passionate leaders who help each other build better, more customer (and employee) centric companies.

We talk about:

  • Phyl’s story and the story behind Collaborative Gain.
  • Phyl’s book, "Never Search Alone", and the three big ideas inside it.
  • Is there a business model behind the community Phyl has built?
  • The dangers of people thinking they have unlimited time in their careers.
  • How some organizations are naturally better at attracting the right people.
  • The value of combining the right people with the right vision.

Phyl Terry - https://www.linkedin.com/in/phylterry/# Collaborative Gain - https://www.linkedin.com/company/collaborative-gain/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Can cold messaging really land you your first data job? Even without experience? In this episode, Avery sits down with logistics Data Analyst Asa Howard to discuss how he landed his first data job with a simple cold messaging strategy.

🌟 Join the data project club!

Use code “25OFF” to get 25% off (first 50 members).

📊 Come to my next free “How to Land Your First Data Job” training

🏫 Check out my 10-week data analytics bootcamp

Asa’s Links:

Connect on LinkedIn Waitlist for Google Sheets course

Timestamps:

(3:22) - Asa realizes he needs a career pivot

(5:21) - What a Solutions Engineer does (Logistics Analyst)

(10:21) - System he used to land his job

(14:24) - Cold message template you can steal

(18:01) - What tools he uses day to day

(24:37) - Google Sheets vs Excel

Connect with Avery:

📺 Subscribe on YouTube: https://www.youtube.com/c/AverySmithDataCareerJumpstart/videos 🎙Listen to My Podcast: https://podcasts.apple.com/us/podcast/data-career-podcast/id1547386535 👔 Connect with me on LinkedIn: https://www.linkedin.com/in/averyjsmith/ 📸 Instagram: https://www.instagram.com/datacareerjumpstart/ 🎵 TikTok: https://www.tiktok.com/@verydata?

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://DataCareerJumpstart.com/daa

Summary

The modern data stack has made it more economical to use enterprise grade technologies to power analytics at organizations of every scale. Unfortunately it has also introduced new overhead to manage the full experience as a single workflow. At the Modern Data Company they created the DataOS platform as a means of driving your full analytics lifecycle through code, while providing automatic knowledge graphs and data discovery. In this episode Srujan Akula explains how the system is implemented and how you can start using it today with your existing data systems.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Truly leveraging and benefiting from streaming data is hard - the data stack is costly, difficult to use and still has limitations. Materialize breaks down those barriers with a true cloud-native streaming database - not simply a database that connects to streaming systems. With a PostgreSQL-compatible interface, you can now work with real-time data using ANSI SQL including the ability to perform multi-way complex joins, which support stream-to-stream, stream-to-table, table-to-table, and more, all in standard SQL. Go to dataengineeringpodcast.com/materialize today and sign up for early access to get started. If you like what you see and want to help make it better, they're hiring across all functions!

Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you’re not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit dataengineeringpodcast.com/montecarlo to learn more.

Data and analytics leaders, 2023 is your year to sharpen your leadership skills, refine your strategies and lead with purpose. Join your peers at Gartner Data & Analytics Summit, March 20 – 22 in Orlando, FL for 3 days of expert guidance, peer networking and collaboration. Listeners can save $375 off standard rates with code GARTNERDA. Go to dataengineeringpodcast.com/gartnerda today to find out more.

Your host is Tobias Macey and today I'm interviewing Srujan Akula about DataOS, a pre-integrated and managed data platform built by The Modern Data Company

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what your mission at The Modern Data Company is and the story behind it?
  • Your flagship (only?) product is a platform that you're calling DataOS. What is the scope and goal of that platform?
  • Who is the target audience?
  • On your site you refer to the idea of "data as software". What are the principles and ways of thinking that are encompassed by that concept?
  • What are the platform capabilities that are required to make it possible?
  • There are 11 "Key Features" listed on your site for the DataOS. What was your process for identifying the "must have" vs "nice to have" features for launching the platform?
  • Can you describe the technical architecture that powers your DataOS product?
  • What are the core principles that you are optimizing for in the design of your platform?
  • How have the design and goals of the system changed or evolved since you started working on DataOS?
  • Can you describe the workflow for the different practitioners and stakeholders working on an installation of DataOS?
  • What are the interfaces and escape hatches that are available for integrating with and ext

IBM Software Systems Integration: With IBM MQ Series for JMS, IBM FileNet Case Manager, and IBM Business Automation Workflow

Examine the working details for real-world Java programs used for system integration with IBM Software, applying various API libraries (as used by Banking and Insurance companies). This book includes the step-by-step procedure to use the IBM FileNet Case Manager 5.3.3 Case Builder solution and the similar IBM system, IBM Business Automation Workflow, to create an Audit System. You'll learn how to implement the workflow with a client Java Message Service (JMS) Java method developed with Workflow Custom Operations System Step components. Using IBM Cognos Analytics Version 11.2, you'll be able to create new views for IBM Case Manager Analytics for custom time dimensions. The book also explains the SQL code and procedures required to create example Online Analytical Processing (OLAP) cubes with multi-level time dimensions for IBM Case Manager analytics. IBM Software Systems Integration features the most up-to-date systems software procedures using tested API calls.

What You Will Learn:

  • Review techniques for generating custom IBM JMS code
  • Create a new custom view for a multi-level time dimension
  • See how a Java program can provide the IBM FileNet document management API calls for content store folder and document replication
  • Configure Java components for content engine events

Who This Book Is For: IT consultants, Systems and Solution Architects.

podcast_episode
by Cris deRitis, David Wessel (Brookings Institution), Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Mark, Cris, and Marisa are joined by David Wessel, senior fellow in economic studies at Brookings, to dissect the CPI report, and discuss Fed policy, prospects for recession, and the looming threat of a debt limit breach. Full episode transcript. Come join us at the Moody’s Analytics Summit, March 5th-7th at the Phoenician in Scottsdale Arizona. To learn more & register, click here:  Moody's Analytics Summit 2023.  Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight

Questions or Comments, please email us at [email protected]. We would love to hear from you.    To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

On today’s episode, we’re talking to Dylan Barrell, Chief Technology Officer at Deque Systems, Inc, a web accessibility software and services company aimed at giving everyone, regardless of ability, equal access to information, services and applications on the web.

We talk about:

  • Dylan’s background and what Deque does.
  • The importance of accessibility in software.
  • Dylan’s book, “Agile Accessibility Handbook,” and why he wrote it.
  • Are there any particular tools to identify accessibility issues in software?
  • Countries that are leading the way around SaaS accessibility.
  • Advice for smaller, newer SaaS companies to prioritize accessibility.
  • How tech trends like AI, the IoT and algorithms have impacted accessibility.

Dylan Barrell - https://www.linkedin.com/in/dylanbarrell/ Deque Systems - https://www.linkedin.com/company/deque-systems-inc/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Today I’m chatting with Bruno Aziza, Head of Data & Analytics at Google Cloud. Bruno leads a team of outbound product managers in charge of BigQuery, Dataproc, Dataflow and Looker and we dive deep on what Bruno looks for in terms of skills for these leaders. Bruno describes the three patterns of operational alignment he’s observed in data product management, as well as why he feels ownership and customer obsession are two of the most important qualities a good product manager can have. Bruno and I also dive into how to effectively abstract the core problem you’re solving, as well as how to determine whether a problem might be solved in a better way. 

Highlights / Skip to:

  • Bruno introduces himself and explains how he created his “CarCast” podcast (00:45)
  • Bruno describes his role at Google, the product managers he leads, and the specific Google Cloud products in his portfolio (02:36)
  • What Bruno feels are the most important attributes to look for in a good data product manager (03:59)
  • Bruno details how a good product manager focuses on not only the core problem, but how the problem is currently solved and whether or not that’s acceptable (07:20)
  • What effectively abstracting the problem looks like in Bruno’s view and why he positions product management as a way to help users move forward in their career (12:38)
  • Why Bruno sees extracting value from data as the number one pain point for data teams and their respective companies (17:55)
  • Bruno gives his definition of a data product (21:42)
  • The three patterns Bruno has observed of operational alignment when it comes to data product management (27:57)
  • Bruno explains the best practices he’s seen for cross-team goal setting and problem-framing (35:30)

Quotes from Today’s Episode  

“What’s happening in the industry is really interesting. For people that are running data teams today and listening to us, the makeup of their teams is starting to look more like what we do [in] product management.” — Bruno Aziza (04:29)

“The problem is the problem, so focus on the problem, decompose the problem, look at the frictions that are acceptable, look at the frictions that are not acceptable, and look at how by assembling a solution, you can make it most seamless for the individual to go out and get the job done.” – Bruno Aziza (11:28)

“As a product manager, yes, we’re in the business of software, but in fact, I think you’re in the career management business. Your job is to make sure that whatever your customer’s job is that you’re making it so much easier that they, in fact, get so much more done, and by doing so they will get promoted, get the next job.” – Bruno Aziza (15:41)

“I think that is the task of any technology company, of any product manager that’s helping these technology companies: don’t be building a product that’s looking for a problem. Just start with the problem back and solution from that. Just make sure you understand the problem very well.” (19:52)

“If you’re a data product manager today, you look at your data estate and you ask yourself, ‘What am I building to save money? When am I building to make money?’ If you can do both, that’s absolutely awesome. And so, the data product is an asset that has been built repeatedly by a team and generates value out of data.” – Bruno Aziza (23:12)

“[Machine learning is] hard because multiple teams have to work together, right? You got your business analyst over here, you’ve got your data scientists over there, they’re not even the same team. And so, sometimes you’re struggling with just the human aspect of it.” (30:30)

“As a data leader, an IT leader, you got to think about those soft ways to accomplish the stuff that’s binary, that’s the hard [stuff], right? I always joke, the hard stuff is the soft stuff for people like us because we think about data, we think about logic, we think, ‘Okay if it makes sense, it will be implemented.’ For most of us, getting stuff done is through people. And people are emotional, how can you express the feeling of achieving that goal in emotional value?” – Bruno Aziza (37:36)

Links

As referenced by Bruno, “Good Product Manager/Bad Product Manager”: https://a16z.com/2012/06/15/good-product-managerbad-product-manager/

LinkedIn: https://www.linkedin.com/in/brunoaziza/

Bruno’s Medium article on Competing Against Luck by Clayton M. Christensen: https://brunoaziza.medium.com/competing-against-luck-3daeee1c45d4

The Data CarCast on YouTube: https://www.youtube.com/playlist?list=PLRXGFo1urN648lrm8NOKXfrCHzvIHeYyw

Summary

Managing end-to-end data flows becomes complex and unwieldy as the scale of data and its variety of applications in an organization grows. Part of this complexity is due to the transformation and orchestration of data living in disparate systems. The team at Upsolver is taking aim at this problem with the latest iteration of their platform in the form of SQLake. In this episode Ori Rafael explains how they are automating the creation and scheduling of orchestration flows and their related transformations in a unified SQL interface.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data and analytics leaders, 2023 is your year to sharpen your leadership skills, refine your strategies and lead with purpose. Join your peers at Gartner Data & Analytics Summit, March 20 – 22 in Orlando, FL for 3 days of expert guidance, peer networking and collaboration. Listeners can save $375 off standard rates with code GARTNERDA. Go to dataengineeringpodcast.com/gartnerda today to find out more.

Truly leveraging and benefiting from streaming data is hard - the data stack is costly, difficult to use and still has limitations. Materialize breaks down those barriers with a true cloud-native streaming database - not simply a database that connects to streaming systems. With a PostgreSQL-compatible interface, you can now work with real-time data using ANSI SQL including the ability to perform multi-way complex joins, which support stream-to-stream, stream-to-table, table-to-table, and more, all in standard SQL. Go to dataengineeringpodcast.com/materialize today and sign up for early access to get started. If you like what you see and want to help make it better, they're hiring across all functions!

Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you’re not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit dataengineeringpodcast.com/montecarlo to learn more.

Your host is Tobias Macey and today I'm interviewing Ori Rafael about the SQLake feature for the Upsolver platform that automatically generates pipelines from your queries

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what the SQLake product is and the story behind it?
  • What is the core problem that you are trying to solve?
  • What are some of the anti-patterns that you have seen teams adopt when designing and implementing DAGs in a tool such as Airflow?
  • What are the benefits of merging the logic for transformation and orchestration into the same interface and dialect (SQL)?
  • Can you describe the technical implementation of the SQLake feature?
  • What does the workflow look like for designing and deploying pipelines in SQLake?
  • What are the opportunities for using utilities such as dbt for managing logical complexity as the number of pipelines scales?
  • SQL has traditionally been challenging to compose. How did that factor into your design process for how to structure the dialect extensions for job scheduling?
  • What are some of the complexities that you have had to address in your orchestration system to be able to manage timeliness of operations as volume and complexity of the data scales?
  • What are some of the edge cases that you have had to provide escape hatches for?
  • What are the most interesting, innova

podcast_episode
by Dante DeAntonio (Moody's Analytics), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

It's Jobs Friday and Mark argues the labor market is cooling off according to script. Colleagues Dante DeAntonio and Marisa DiNatale provide the details. And Cris fine-tunes his definition of the best way to describe the economy's performance in the year ahead - "Slowcession". Full episode transcript. Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight

Questions or Comments, please email us at [email protected]. We would love to hear from you.    To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Wes McKinney is the creator of pandas, co-creator of Apache Arrow, and now Co-founder/CTO at Voltron Data. In this conversation with Tristan and Julia, Wes takes us on a tour of the underlying guts, from hardware to data formats, of the data ecosystem. What innovations, down to the hardware level, will stack to lead to significantly better performance for analytics workloads in the coming years? To dig deeper on the Apache Arrow ecosystem, check out replays from their recent conference at https://thedatathread.com. For full show notes and to read 7+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs.
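
For context on the Apache Arrow ecosystem mentioned here, below is a small, hedged pyarrow example of a columnar in-memory table and a vectorized aggregate; the data and the pandas hand-off are illustrative only and are not drawn from the episode.

```python
# Minimal pyarrow sketch: columnar tables and a vectorized aggregate.
import pyarrow as pa
import pyarrow.compute as pc

# Columns are stored as contiguous buffers, which is what vectorized
# analytics kernels and cross-language data sharing build on.
table = pa.table({
    "symbol": ["AAPL", "MSFT", "AAPL", "GOOG"],
    "price":  [189.5, 412.2, 190.1, 141.8],
})

print(table.schema)             # column names and types
print(pc.mean(table["price"]))  # vectorized aggregate over one column

# Hand the same data to pandas for further analysis.
df = table.to_pandas()
print(df.groupby("symbol")["price"].mean())
```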

Business domains have a range of data & analytics capabilities that enterprise data teams must support. The key is to ensure domain activity aligns with enterprise standards and best practices to maintain data consistency and avoid silos. Published at: https://www.eckerson.com/articles/an-operating-model-for-data-analytics-part-iv-red-team-composition

In this episode, Jason Foster talks to Marco Lau, Director of Data Science and Analytics at Penguin Random House, the international publishing business. They discuss how data science can help create value in the publishing industry and the use of data to predict market trends and make informed business decisions. Marco also talks about his background and role and explains how different teams at Penguin Random House collaborate to blend data science, art and traditional methods to drive business growth.

30 years of “corporate social responsibility” has left our planet in dire straits. Biodiversity loss, climate change, water pollution, micro-plastic pollution, air pollution, species collapse, ecosystem collapse…the list goes on. What can we all do individually and collectively as business leaders and responsible humans to turn the situation around? According to Simon Schillebeeckx from Handprint.tech it is possible to create incremental financial value while regenerating the ecosystems we rely on. Simon and his colleagues at Handprint have written a manifesto for saving the planet, called Regeneration First, that tells us exactly how this can be done.

In this episode of Leaders of Analytics, we discuss:

  • The current state of the many environmental issues facing us.
  • The “Regeneration First” manifesto and the 7 action shifts needed in our approach to sustainability.
  • Whose role it is to deal with climate change.
  • Promising climate technologies that will help us solve the negative impacts we’re having on the planet.
  • How we create more short-term environmental incentives to deliver long-term impact.
  • What we can do individually to contribute to environmental regeneration, and much more.

Links: Simon on Linkedin: https://www.linkedin.com/in/simonschillebeeckx/ Some promising carbon removal solutions discussed on the A16Z podcast. The Road to 100 Percent Renewables in Australia via Energy Insiders.

On today’s episode, we’re joined by John Wills. John is the Field CTO at Alation, a data intelligence company that helps organizations find, understand and trust data.

We talk about:

  • John’s background and Alation.
  • Cataloging data within an organization.
  • How developers can access and use cataloged data.
  • Will data become more and more critical for organizations?
  • The friction between business growth and regulatory compliance.
  • The increasing complexity of data and how this impacts cataloging.
  • Different types of data marketplaces and the exchange between them.
  • The impact of machine learning and artificial intelligence on data cataloging.

John Wills - https://www.linkedin.com/in/johnwwills/ Alation - https://www.linkedin.com/company/alation/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

To kick off the new year, Mark, Cris, and Marisa share their U.S. economic outlooks for 2023. Which sectors are at risk? Will the Fed tip us in? We discuss the full gamut and introduce a new segment where we take listeners' questions. Full Episode Transcript Mark's Slowcession paper. Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight

Questions or Comments, please email us at [email protected]. We would love to hear from you.    To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.