talk-data.com

Topic: Analytics

Tags: data_analysis, insights, metrics

4552 tagged

Activity Trend: 398 peak/qtr, 2020-Q1 to 2026-Q1

Activities

4552 activities · Newest first

Build powerful AI apps with Copilot in Microsoft Fabric | BRK225

Build new analytics and AI models and supercharge your intelligent app strategy across your organization. Increase developer velocity with Copilot in Fabric and empower your data scientists and data analysts with Semantic Link, bridging the world of business intelligence and AI. Train custom ML models with Azure ML and Fabric Data Science, democratizing AI across lines-of-business and increasing collaboration between data professionals and ML professionals.

To learn more, please check out these resources: * https://aka.ms/Ignite23CollectionsBRK225H * https://info.microsoft.com/ww-landing-contact-me-for-events-m365-in-person-events.html?LCID=en-us&ls=407628-contactme-formfill * https://aka.ms/azure-ignite2023-dataaiblog

Speakers: * Justyna Lucznik * Nellie Gustafsson * Misha Desai * Thasmika Gokal * Abhishek Narain * Alex Powers * Alex van Grootel * Ed Donahue * Lukasz Pawlowski * Raj Rikhy * Wilson Lee

Session Information: This video is one of many sessions delivered for the Microsoft Ignite 2023 event. View sessions on-demand and learn more about Microsoft Ignite at https://ignite.microsoft.com

BRK225 | English (US) | Data

MSIgnite

Summary

The dbt project has become overwhelmingly popular across analytics and data engineering teams. While it is easy to adopt, there are many potential pitfalls. Dustin Dorsey and Cameron Cyr co-authored a practical guide to building your dbt project. In this episode they share their hard-won wisdom about how to build and scale your dbt projects.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data projects are notoriously complex. With multiple stakeholders to manage across varying backgrounds and toolchains, even simple reports can become unwieldy to maintain. Miro is your single pane of glass where everyone can discover, track, and collaborate on your organization's data. I especially like the ability to combine your technical diagrams with data documentation and dependency mapping, allowing your data engineers and data consumers to communicate seamlessly about your projects. Find simplicity in your most complex projects with Miro. Your first three Miro boards are free when you sign up today at dataengineeringpodcast.com/miro. That’s three free boards at dataengineeringpodcast.com/miro.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack.

You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation, or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free!
Data lakes are notoriously complex. For data engineers who battle to build and scale high-quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs, ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake, and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Dustin Dorsey and Cameron Cyr about how to design your dbt projects.

Interview

Introduction

How did you get involved in the area of data management?

What was your path to adoption of dbt?

What did you use prior to its existence? When/why/how did you start using it?

What are some of the common challenges that teams experience when getting started with dbt?

How does prior experience in analytics and/or software engineering impact those outcomes?

You recently wrote a book to give a crash course in best practices for dbt. What motivated you to invest that time and effort?

What new lessons did you learn about dbt in the process of writing the book?

The introduction of dbt is largely res

podcast_episode
by Fred Hochberg (Export–Import Bank of the United States), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Fred Hochberg, Fmr. Chairman & President of the Export–Import Bank of the United States and author of Trade Is Not a Four-Letter Word: How Six Everyday Products Make the Case for Trade, joins the Inside Economics team to discuss all things related to global trade. The discussion takes up the U.S.-China relationship, the future of globalization, and how trade policy may change after the 2024 election. Marisa’s visit to Disneyland is a (largely irrelevant but entertaining) theme throughout.   For more information on Fred Hochberg and his book click here Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.  

Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

If Data Vault is a new term for you, it's a data modeling design pattern. We're joined by Brandon Taylor, a senior data architect at Guild, and Michael Olschimke, who is the CEO of Scalefree—the consulting firm whose co-founder Dan Linstedt is credited as the designer of the Data Vault architecture.  In this conversation with Tristan and Julia, Michael and Brandon explore the Data Vault approach among data warehouse design methodologies. They discuss Data Vault's adoption in Europe, its alignment with data mesh architecture, and the ongoing debate over Data Vault vs. Kimball methods.  For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.
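For readers new to the pattern: Data Vault models data as hubs (one insert-only row per business key), links (relationships), and satellites (descriptive attributes over time), with hub rows usually identified by a hash of the business key. A minimal Python sketch of the hub idea, where the `hub_customer` structure, field names, and MD5 derivation are illustrative assumptions rather than Scalefree's or Guild's implementation:

```python
import hashlib

def hash_key(*business_keys: str) -> str:
    """Derive a deterministic hub hash key from one or more business keys."""
    normalized = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# A hub stores exactly one row per distinct business key.
hub_customer = {}

def load_hub(business_key: str, record_source: str, load_date: str) -> str:
    hk = hash_key(business_key)
    # Insert-only: once a business key has landed, its hub row never changes.
    hub_customer.setdefault(hk, {
        "customer_bk": business_key.strip().upper(),
        "record_source": record_source,
        "load_date": load_date,
    })
    return hk

hk1 = load_hub("CUST-001", "crm", "2023-11-01")
hk2 = load_hub("cust-001 ", "billing", "2023-11-02")  # same key after normalization
```

Because the key is normalized before hashing, the same customer arriving from two source systems resolves to a single hub row, which is what makes the pattern resilient to multi-source loads.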

Fundamentals of Data Science

Fundamentals of Data Science: Theory and Practice presents basic and advanced concepts in data science along with real-life applications. The book provides students, researchers and professionals at different levels a good understanding of the concepts of data science, machine learning, data mining and analytics. Users will find the authors’ research experiences and achievements in data science applications, along with in-depth discussions on topics that are essential for data science projects, including pre-processing, which is carried out before applying predictive and descriptive data analysis tasks, and proximity measures for numeric, categorical and mixed-type data. The authors include a systematic presentation of many predictive and descriptive learning algorithms, including recent developments that have successfully handled large datasets with high accuracy. In addition, a number of descriptive learning tasks are included.

* Presents the foundational concepts of data science along with advanced concepts and real-life applications for applied learning
* Includes coverage of a number of key topics such as data quality and pre-processing, proximity and validation, predictive data science, descriptive data science, ensemble learning, association rule mining, Big Data analytics, as well as incremental and distributed learning
* Provides updates on key applications of data science techniques in areas such as Computational Biology, Network Intrusion Detection, Natural Language Processing, Software Clone Detection, Financial Data Analysis, and Scientific Time Series Data Analysis
* Covers computer program code for implementing descriptive and predictive algorithms
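Proximity measures for mixed-type data, one of the pre-processing topics the blurb mentions, can be illustrated with a Gower-style dissimilarity: numeric attributes contribute a range-normalized difference, categorical attributes a simple match/mismatch, averaged over all attributes. A hedged sketch (the field names and ranges are invented for illustration, not taken from the book):

```python
def mixed_distance(a, b, ranges):
    """Gower-style dissimilarity between two records that mix numeric and
    categorical attributes; `ranges` maps each numeric field to its span."""
    total = 0.0
    for field in a:
        if field in ranges:                       # numeric attribute
            total += abs(a[field] - b[field]) / ranges[field]
        else:                                     # categorical attribute
            total += 0.0 if a[field] == b[field] else 1.0
    return total / len(a)

p = {"age": 30, "income": 50_000, "segment": "retail"}
q = {"age": 40, "income": 50_000, "segment": "corporate"}
d = mixed_distance(p, q, ranges={"age": 50, "income": 100_000})
# age term 10/50 = 0.2, income term 0.0, segment term 1.0 -> (0.2 + 0 + 1)/3 = 0.4
```

The range normalization keeps a large-scale field like income from dominating the distance, which is the main reason plain Euclidean distance misbehaves on mixed data.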

Google Cloud Platform for Data Science: A Crash Course on Big Data, Machine Learning, and Data Analytics Services

This book is your practical and comprehensive guide to learning Google Cloud Platform (GCP) for data science, using only the free tier services offered by the platform. Data science and machine learning are increasingly becoming critical to businesses of all sizes, and the cloud provides a powerful platform for these applications. GCP offers a range of data science services that can be used to store, process, and analyze large datasets, and train and deploy machine learning models. The book is organized into seven chapters covering various topics such as GCP account setup, Google Colaboratory, Big Data and Machine Learning, Data Visualization and Business Intelligence, Data Processing and Transformation, Data Analytics and Storage, and Advanced Topics. Each chapter provides step-by-step instructions and examples illustrating how to use GCP services for data science and big data projects. Readers will learn how to set up a Google Colaboratory account and run Jupyter notebooks, access GCP services and data from Colaboratory, use BigQuery for data analytics, and deploy machine learning models using Vertex AI. The book also covers how to visualize data using Looker Data Studio, run data processing pipelines using Google Cloud Dataflow and Dataprep, and store data using Google Cloud Storage and SQL.
What You Will Learn

* Set up a GCP account and project
* Explore BigQuery and its use cases, including machine learning
* Understand Google Cloud AI Platform and its capabilities
* Use Vertex AI for training and deploying machine learning models
* Explore Google Cloud Dataproc and its use cases for big data processing
* Create and share data visualizations and reports with Looker Data Studio
* Explore Google Cloud Dataflow and its use cases for batch and stream data processing
* Run data processing pipelines on Cloud Dataflow
* Explore Google Cloud Storage and its use cases for data storage
* Get an introduction to Google Cloud SQL and its use cases for relational databases
* Get an introduction to Google Cloud Pub/Sub and its use cases for real-time data streaming

Who This Book Is For

Data scientists, machine learning engineers, and analysts who want to learn how to use Google Cloud Platform (GCP) for their data science and big data projects

Real-Time analytics with open-source connectors in MS Fabric | OD46

This session focuses on the use of open-source connectors to enable real-time analytics in Microsoft Fabric and will cover the use of connectors such as Apache Kafka, Apache Flink, Apache Spark, Open Telemetry, Logstash etc. to ingest and process data in real-time. Attendees will learn how to analyze data ingested via open-source connectors to generate insights.
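As a rough stand-in for the kind of analysis such a pipeline enables (this is not Fabric's API; in Fabric you would typically run windowed aggregations in KQL or Flink over the ingested stream), here is a toy tumbling-window aggregation over timestamped events in plain Python:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per tumbling window; each event is (epoch_seconds, payload).
    Illustrates the shape of a windowed aggregation a streaming job would run."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = ts - (ts % window_seconds)  # align to the window boundary
        counts[window_start] += 1
    return dict(counts)

events = [(0, "a"), (30, "b"), (61, "c"), (119, "d"), (120, "e")]
print(tumbling_window_counts(events))  # {0: 2, 60: 2, 120: 1}
```

Real streaming engines add watermarking and late-event handling on top of this basic bucketing, but the window-alignment arithmetic is the same.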

Speakers: * Akshay Dixit

Session Information: This video is one of many sessions delivered for the Microsoft Ignite 2023 event. View sessions on-demand and learn more about Microsoft Ignite at https://ignite.microsoft.com

OD46 | English (US) | Data

MSIgnite

Join Avery Smith as he chats with data visualization expert Hana M.K. about the importance of presentation skills for data professionals in this engaging episode of the Data Career Podcast.

Discover how Hana helps data professionals improve their presentation abilities and land their dream jobs as she shares valuable insights and resources.

Don't miss out on this episode filled with tips, advice, and inspiration to become a skilled data presenter – tune in now to the Data Career Podcast with Avery Smith and special guest Hana M.K.!

Connect with Hana M.K.:

🤝 Connect on LinkedIn

🛣️ Download Data Presentation Roadmap

🎧 The Art of Communicating Data Podcast

🎒 Learn About the Trending Analytics

🤝 Ace your data analyst interview with the interview simulator

📩 Get my weekly email with helpful data career tips

📊 Come to my next free “How to Land Your First Data Job” training

🏫 Check out my 10-week data analytics bootcamp

Timestamps:

(09:33) - Start practicing your presentation NOW

(19:46) - The SECRET of presenting data effectively

(29:05) - No one WANTS to see your code. Show this instead.

Connect with Avery:

📺 Subscribe on YouTube

🎙Listen to My Podcast

👔 Connect with me on LinkedIn

📸 Instagram

🎵 TikTok

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Data democratization is the buzzword to describe empowering enterprise stakeholders with data. While there have been advances in data management, governance, and analytics, something keeps getting in the way of achieving data democratization. Published at: https://www.eckerson.com/articles/data-democratization-and-the-duties-of-data-citizenship

Summary

Software development involves an interesting balance of creativity and repetition of patterns. Generative AI has accelerated the ability of developer tools to provide useful suggestions that speed up the work of engineers. Tabnine is one of the main platforms offering an AI powered assistant for software engineers. In this episode Eran Yahav shares the journey that he has taken in building this product and the ways that it enhances the ability of humans to get their work done, and when the humans have to adapt to the tool.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack.

This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.

Data lakes are notoriously complex. For data engineers who battle to build and scale high-quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs, ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake, and Hudi, so you always maintain ownership of your data.
Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation, or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free!

Your host is Tobias Macey and today I'm interviewing Eran Yahav about building an AI-powered developer assistant at Tabnine.
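The incremental model the Materialize read describes can be shown with a toy: instead of re-running a GROUP BY over all data, each insert or retraction updates the materialized result directly. A simplified Python sketch (Materialize itself builds on differential dataflow; this only illustrates the insert/retract idea, and the class and field names are invented):

```python
class IncrementalSum:
    """Maintain the result of SELECT key, SUM(amount) ... GROUP BY key
    incrementally: each change event patches the stored result instead of
    rescanning the source data."""
    def __init__(self):
        self.result = {}

    def apply(self, key, amount, delete=False):
        delta = -amount if delete else amount
        self.result[key] = self.result.get(key, 0) + delta
        if self.result[key] == 0:
            del self.result[key]   # groups that sum to zero disappear

view = IncrementalSum()
view.apply("us", 10)
view.apply("eu", 5)
view.apply("us", 7)
view.apply("eu", 5, delete=True)   # retraction cancels the earlier insert
print(view.result)  # {'us': 17}
```

The payoff is that each update costs O(1) per change rather than a full rescan, which is what keeps query results fresh as data changes.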

Interview

Introduction

How did you get involved in machine learning?

Can you describe what Tabnine is and the story behind it?

What are the individual and organizational motivations for using AI to generate code?

What are the real-world limitations of generative AI for creating software? (e.g. size/complexity of the outputs, naming conventions, etc.) What are the elements of skepticism/overs

SAP S/4HANA Asset Management: Configure, Equip, and Manage your Enterprise

S/4HANA empowers enterprises to take big steps towards digitalization, innovation, and being mobile-friendly. This book is a concise guide to SAP S/4HANA Asset Management and will help you begin leveraging the platform’s capabilities quickly and efficiently. SAP S/4HANA Asset Management begins with an overview of the platform and its structure. You will learn how it can help with data storage and analysis, business processes, and reporting and analytics. As the book progresses, you will gain insight into single, time-based, performance-based, and multiple counter-based strategy plans. Any project is incomplete without a budget, and this book will help you understand how to use SAP S/4HANA to create and manage yours. The book’s real-life examples of asset management from contemporary industries reinforce each concept you learn, and its coverage of newer technologies and offerings in S/4HANA Asset Management will give you a sense of the immense potential offered by the platform. When you have finished this book, you will be ready to begin using SAP S/4HANA Asset Management to improve operational planning, maintenance, and scheduling activities in your own business.

What You Will Learn

* Position S/4HANA Asset Management within the overall Business Applications suite
* Explore essential functionalities for enterprise asset hierarchy mapping
* Efficiently map both unplanned and planned maintenance activities
* Seamlessly integrate asset management, finance, controlling, and budgeting
* Unleash reporting and analytics in Asset Management
* Configure Asset Management to meet your S/4HANA requirements

Who This Book Is For

Consultants, project managers, and SAP users who are looking for a complete reference guide on S/4HANA Asset Management.

podcast_episode
by Cris deRitis, Teresa Bazemore (San Francisco Federal Home Loan Bank), Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Teresa Bazemore, CEO of the San Francisco Federal Home Loan Bank, joins the podcast to discuss the nation's reeling housing market, and the role of the FHLB system. There's a lot to talk about as Teresa weighs recent criticism of the FHLBs in the wake of the banking crisis earlier this year, and the recent report from the FHLBs' regulator, the Federal Housing Finance Agency, proposing reforms to the system. For more information about Teresa Bazemore click here Moody's Papers discussed in this episode click here and here Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.


In this talk I will explore how Mobile DevOps can significantly accelerate the mobile development lifecycle. I will dive deep into the strategies, tools, and best practices that empower mobile development teams to seamlessly transition from the development phase to production, all while maintaining the highest standards of quality and reliability.

During this talk, you will discover:

The Mobile DevOps mindset: Understand the core principles and mindset shifts that are essential for integrating DevOps practices into your mobile development workflow.

Streamlining development workflows: Learn how to optimize your development process to reduce bottlenecks and streamline collaboration between development, QA, and operations teams.

Automation and Continuous Integration/Continuous Deployment (CI/CD): Explore how automation tools and CI/CD pipelines can help you automate repetitive tasks, increase efficiency, and ensure consistent app delivery.

Monitoring and feedback loops: Discover the importance of real-time monitoring, performance analytics, and user feedback in shaping a continuous improvement cycle for your mobile apps.

This talk will equip you with the knowledge and insights needed to harness the power of Mobile DevOps and accelerate your mobile app development journey.

Join Avery Smith as he chats with data analytics expert Matt Mike about the secrets to building impressive data analytics projects.

In this episode, Matt Mike shares valuable insights on the importance of creating unique projects and why building from scratch challenges you to become a true expert in the field, helping you stand out and speak passionately about your work in interviews.

Don't miss out. Tune in now!

Connect with Matt Mike:

🤝 Connect on LinkedIn

▶️ Subscribe to Youtube Channel

🎒 Learn About Data Skills Tracker

🤝 Ace your data analyst interview with the interview simulator

📩 Get my weekly email with helpful data career tips

📊 Come to my next free “How to Land Your First Data Job” training

🏫 Check out my 10-week data analytics bootcamp

Timestamps:

(03:00) - Matt’s story from teaching to data

(20:23) - Building Personal Projects

(35:55) - LinkedIn networking is essential for data job hunting.

Connect with Avery:

📺 Subscribe on YouTube

🎙Listen to My Podcast

👔 Connect with me on LinkedIn

📸 Instagram

🎵 TikTok

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Leading in Analytics

A step-by-step guide for business leaders who need to manage successful big data projects. Leading in Analytics: The Critical Tasks for Executives to Master in the Age of Big Data takes you through the entire process of guiding an analytics initiative from inception to execution. You’ll learn which aspects of the project to pay attention to, the right questions to ask, and how to keep the project team focused on its mission to produce relevant and valuable results. As an executive, you can’t control every aspect of the process. But if you focus on high-impact factors that you can control, you can ensure an effective outcome. This book describes those factors and offers practical insight on how to get them right. Drawn from best-practice research in the field of analytics, the Manageable Tasks described in this book are specific to the goal of implementing big data tools at an enterprise level. A dream team of analytics and business experts have contributed their knowledge to show you how to choose the right business problem to address, put together the right team, gather the right data, select the right tools, and execute your strategic plan to produce an actionable result. Become an analytics-savvy executive with this valuable book.

* Ensure the success of analytics initiatives, maximize ROI, and draw value from big data
* Learn to define success and failure in analytics and big data projects
* Set your organization up for analytics success by identifying problems that have big data solutions
* Bring together the people, the tools, and the strategies that are right for the job

By learning to pay attention to critical tasks in every analytics project, non-technical executives and strategic planners can guide their organizations to measurable results.

Summary

Databases are the core of most applications, but they are often treated as inscrutable black boxes. When an application is slow, there is a good probability that the database needs some attention. In this episode Lukas Fittl shares some hard-won wisdom about the causes and solution of many performance bottlenecks and the work that he is doing to shine some light on PostgreSQL to make it easier to understand how to keep it running smoothly.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack.

You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation, or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high-quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs, ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake, and Hudi, so you always maintain ownership of your data.
Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.

Your host is Tobias Macey and today I'm interviewing Lukas Fittl about optimizing your database performance and tips for tuning Postgres.
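The core of the data-diffing idea mentioned in the Datafold read, comparing the same table across production and development environments by primary key, can be sketched in a few lines. This is an illustrative toy, not Datafold's implementation (which adds column-level lineage and statistical summaries), and the row shapes are invented:

```python
def data_diff(prod_rows, dev_rows, key="id"):
    """Compare two datasets by primary key: report rows that were added,
    removed, or changed between the two environments."""
    prod = {r[key]: r for r in prod_rows}
    dev = {r[key]: r for r in dev_rows}
    return {
        "added":   sorted(set(dev) - set(prod)),
        "removed": sorted(set(prod) - set(dev)),
        "changed": sorted(k for k in set(prod) & set(dev) if prod[k] != dev[k]),
    }

prod = [{"id": 1, "total": 10}, {"id": 2, "total": 20}, {"id": 3, "total": 30}]
dev  = [{"id": 1, "total": 10}, {"id": 2, "total": 25}, {"id": 4, "total": 40}]
print(data_diff(prod, dev))
# {'added': [4], 'removed': [3], 'changed': [2]}
```

Running a comparison like this before merging a transformation change is what surfaces unintended row-level differences that aggregate row counts would hide.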

Interview

Introduction

How did you get involved in the area of data management?

What are the different ways that database performance problems impact the business?

What are the most common contributors to performance issues?

What are the useful signals that indicate performance challenges in the database?

For a given symptom, what are the steps that you recommend for determining the proximate cause?

What are the potential negative impacts to be aware of when tu

podcast_episode
by Dante DeAntonio (Moody's Analytics), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Dante joins the podcast to break down the October employment report. With job growth moderating and the unemployment rate edging higher, the Fed’s fight against inflation should get a little bit easier. The team also takes a few listener questions about the definition of the unemployment rate and what impact softening rents will have on single-family housing.    Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Jonathan Frankle is the Chief Scientist at MosaicML, which was recently bought by Databricks for $1.3 billion.  MosaicML helps customers train generative AI models on their data. Lots of companies are excited about gen AI, and the hope is that their company data and information will be what sets them apart from the competition.  In this conversation with Tristan and Julia, Jonathan discusses a potential future where you can train specialized, purpose-built models, the future of MosaicML inside of Databricks, and the importance of responsible AI practices. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.

In this special episode, we explore the world of Chief Data Officers and their complex challenges. Join us as we share insights from our recent 'Ask Me Anything' session, where data leaders asked their burning questions anonymously, creating a safe space for open discussion. Jason Foster, CEO at Cynozure, and Helen Blaikie, Chief Data and Analytics Officer at Aston University, answer all these complex questions and share their views and thoughts on data valuation, success metrics, overcoming resistance to change, CDO placement, and more. Tune in to gain valuable insights into the world of data leadership.