talk-data.com

Topic: Analytics (data_analysis, insights, metrics)

Activity trend: 398 peak per quarter, 2020-Q1 to 2026-Q1

Activities

4552 activities · Newest first

Summary Data analysis is a valuable exercise that is often out of reach of non-technical users because of the complexity of data systems. To lower that barrier to entry, Ryan Buick created the Canvas application with a spreadsheet-oriented workflow that is understandable to a wide audience. In this episode Ryan explains how he and his team have designed their platform to bring everyone onto a level playing field and the benefits that it provides to the organization.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan's active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan's active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it's often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Unstruk is the DataOps platform for your unstructured data. The options for ingesting, organizing, and curating unstructured files are complex, expensive, and bespoke. Unstruk Data is changing that equation with their platform approach to manage your unstructured assets. Built to handle all of your real-world data, from videos and images, to 3D point clouds and geospatial records, to industry-specific file formats, Unstruk streamlines your workflow by converting human hours into machine minutes, and automatically alerting you to insights found in your dark data. Unstruk handles data versioning, lineage tracking, duplicate detection, consistency validation, as well as enrichment through sources including machine learning models, 3rd party data, and web APIs. Go to dataengineeringpodcast.com/unstruk today to transform your messy collection of unstructured data files into actionable assets that power your business.

Your host is Tobias Macey and today I'm interviewing Ryan Buick about Canvas, a spreadsheet interface for your data that lets everyone on your team explore data without having to learn SQL.

Interview

Introduction
How did you get involved in the area of data management?

podcast_episode
by Anna Stansbury (MIT Sloan School of Management), Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet

Anna Stansbury, assistant professor of work and organization studies at the MIT Sloan School of Management, joins the podcast to discuss the lack of diversity in the economics profession. The outcome of the latest FOMC meeting is debated and the discussion goes off the music sheet!?! (Zandism) Full Episode Transcript. For more from Anna Stansbury, follow her @annastansbury. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.

Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.


Matt Bornstein and Jennifer Li (and their co-author Martin Casado) of a16z have compiled arguably the most nuanced diagram of the data ecosystem ever made.  They recently refreshed their classic 2020 post, "Emerging Architectures for Modern Data Infrastructure" and in this conversation, Tristan attempts to pin down: what does all of this innovation in tooling mean for data people + the work we're capable of doing? When will the glorious future come to our laptops? For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs.

Datatopics is hosted by Kevin Missoorten and typically joined by multiple guests. In a side step from our regular Tour de Tools series, we talk about the fuzzy and misunderstood concepts in the world of data, analytics and AI. In miniseries format, we discuss the different angles of these fuzzy data topics to get to the bottom of things. In this third episode we explore the organisational aspect of going "Data Mesh", which our 2nd episode concluded is the essential part of it.

Music: The Gentlemen - DivKid

In a recent conversation with data warehousing legend Bill Inmon, I learned about a new way to structure your data warehouse and self-service BI environment called the Unified Star Schema. The Unified Star Schema is potentially a small revolution for data analysts and business users, as it allows them to easily join tables in a data warehouse or BI platform through a bridge. This gives users the ability to spend time and effort on discovering insights rather than dealing with data connectivity challenges and joining pitfalls. Behind this deceptively simple and ingenious invention is author and data modelling innovator Francesco Puppini. Francesco and Bill have co-written the book 'The Unified Star Schema: An Agile and Resilient Approach to Data Warehouse and Analytics Design' to allow data modellers around the world to take advantage of the Unified Star Schema and its possibilities. Listen to this episode of Leaders of Analytics, where we explore:

What the Unified Star Schema is and why we need it
How Francesco came up with the concept of the USS
Real-life examples of how to use the USS
The benefits of a USS over a traditional star schema galaxy
How Francesco sees the USS and data warehousing evolving in the next 5-10 years to keep up with new demands in data science and AI, and much more.

Connect with Francesco
Francesco on LinkedIn: https://www.linkedin.com/in/francescopuppini/
Francesco's book on the USS: https://www.goodreads.com/author/show/20792240.Francesco_Puppini
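For readers new to the idea, a toy sketch may help. The following pandas snippet uses invented tables and is loosely modeled on the bridge pattern rather than the book's exact design; it shows how a single bridge table that unions the keys of every fact table turns any report into one chain of left joins:

```python
import pandas as pd

# Toy star schema: two fact tables sharing one customer dimension.
sales = pd.DataFrame({"sale_id": ["s1", "s2"], "customer_id": ["c1", "c1"], "amount": [50.0, 75.0]})
refunds = pd.DataFrame({"refund_id": ["r1"], "customer_id": ["c2"], "amount": [-20.0]})
customers = pd.DataFrame({"customer_id": ["c1", "c2"], "name": ["Acme", "Globex"]})

# The bridge unions the key columns of every table, one "stage" per source,
# so users join through it instead of choosing join paths themselves.
bridge = pd.concat([
    sales[["sale_id", "customer_id"]].assign(stage="Sales"),
    refunds[["refund_id", "customer_id"]].assign(stage="Refunds"),
], ignore_index=True)

# Every table now hangs off the bridge via plain left joins, sidestepping
# the fan-out and chasm traps of ad hoc join paths.
report = (
    bridge
    .merge(customers, on="customer_id", how="left")
    .merge(sales[["sale_id", "amount"]], on="sale_id", how="left")
    .merge(refunds[["refund_id", "amount"]], on="refund_id", how="left",
           suffixes=("_sale", "_refund"))
)
print(report)
```

In a warehouse the same idea is expressed as one bridge table plus LEFT JOINs, which is what lets business users combine facts safely without understanding the underlying join logic.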

Advanced Analytics with PySpark

The amount of data being generated today is staggering and growing. Apache Spark has emerged as the de facto tool to analyze big data and is now a critical part of the data science toolbox. Updated for Spark 3.0, this practical guide brings together Spark, statistical methods, and real-world datasets to teach you how to approach analytics problems using PySpark, Spark's Python API, and other best practices in Spark programming. Data scientists Akash Tandon, Sandy Ryza, Uri Laserson, Sean Owen, and Josh Wills offer an introduction to the Spark ecosystem, then dive into patterns that apply common techniques, including classification, clustering, collaborative filtering, and anomaly detection, to fields such as genomics, security, and finance. This updated edition also covers NLP and image processing. If you have a basic understanding of machine learning and statistics and you program in Python, this book will get you started with large-scale data analysis.

Familiarize yourself with Spark's programming model and ecosystem
Learn general approaches in data science
Examine complete implementations that analyze large public datasets
Discover which machine learning tools make sense for particular problems
Explore code that can be adapted to many uses
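To give a flavor of the book's subject matter, here is a minimal, self-contained PySpark sketch (synthetic data, not one of the book's datasets) applying one of the techniques listed above, clustering, through Spark's ML API:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("clustering-sketch").getOrCreate()

# Synthetic two-feature dataset standing in for a real-world table.
df = spark.createDataFrame(
    [(0.0, 0.1), (0.2, 0.0), (9.8, 10.1), (10.0, 9.9)],
    ["feature_a", "feature_b"],
)

# Spark ML estimators expect a single vector column of features.
assembler = VectorAssembler(inputCols=["feature_a", "feature_b"], outputCol="features")
features = assembler.transform(df)

# Fit k-means and attach a cluster label to every row.
model = KMeans(k=2, seed=42).fit(features)
model.transform(features).select("features", "prediction").show()

spark.stop()
```

The same estimator/transformer pattern carries over to the classification, collaborative filtering, and anomaly detection techniques the book covers.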

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Sheila Bair (Federal Deposit Insurance Corporation (FDIC); Banking Advisory Group), Ryan Sweet

Sheila Bair, Member of the Banking Advisory Group and former Chair of the U.S. Federal Deposit Insurance Corporation, joins the podcast to discuss the policy response to the Great Recession and concerns about today's U.S. economy, including student debt. Student Loan Debt Calculator. For more from Sheila Bair, follow her @SheilaBair2013. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.


Today’s data architecture discussions are heavily biased toward managing data for analytics, with attention to big data, scalability, cloud, and cross-platform data management. We need to acknowledge analytics bias and address management of operational data. Ignoring operational data architecture is a sure path to technical debt and future data management pain. Published at: https://www.eckerson.com/articles/the-yin-and-yang-of-data-architecture

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Anthony Mandelli (Coin Metrics), Moe Kiss (Canva), Michael Helbling (Search Discovery)

The Web3 world—blockchain, cryptocurrency, NFTs, and more—seems to be everywhere these days. And, as analysts, how could we not salivate at the idea of a data set that is just one flat, immutable, ever-growing table with a handful of columns (aka… a blockchain-powered public ledger)? We sat down with Anthony Mandelli from Coin Metrics to see whether Tim and Moe could be moved from "totally clueless" to "barely knowledgeable" on the topic in a single hour (Michael was already a knowledgeable enthusiast). The jury is out as to whether we were successful, but stay tuned for the upcoming announcement of the Analytics Power Hour DAO we're starting up (we're minting RockFlag coins to make it happen). For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
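For readers wondering why analysts salivate over that structure: once the ledger is a single append-only table, analysis collapses to filters and aggregates, with no join logic to maintain. A toy sketch (hypothetical columns, not Coin Metrics' actual schema):

```python
import pandas as pd

# Toy stand-in for a blockchain ledger: append-only, a handful of columns.
ledger = pd.DataFrame([
    {"block": 1, "timestamp": "2022-06-01", "sender": "a", "receiver": "b", "amount": 2.0},
    {"block": 1, "timestamp": "2022-06-01", "sender": "b", "receiver": "c", "amount": 0.5},
    {"block": 2, "timestamp": "2022-06-02", "sender": "a", "receiver": "c", "amount": 1.5},
])

# With one flat table, "analytics" is just filters and group-bys.
print(ledger.groupby("sender")["amount"].sum())
```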

When many people talk about leading effective Data Science teams in large organizations, it’s easy for them to forget how much effort, intentionality, vision, and leadership are involved in the process.

Glenn Hofmann, Chief Analytics Officer at New York Life Insurance, is no stranger to that work. With over 20 years of global leadership experience in data, analytics, and AI that spans the US, Germany, and South Africa, Glenn knows firsthand what it takes to build an effective data science function within a large organization.

In this episode, we talk about how he built New York Life Insurance's 50-person data science and AI function, how they utilize skillsets to offer different career paths for data scientists, building relationships across the organization, and so much more.

[Announcement] Join us for DataCamp Radar, our digital summit on June 23rd. During this summit, a variety of experts from different backgrounds will be discussing everything related to the future of careers in data. Whether you're recruiting for data roles or looking to build a career in data, there’s definitely something for you. Seats are limited, and registration is free, so secure your spot today on https://events.datacamp.com/radar/

Summary Building a well-rounded and effective data team is an iterative process, and the first hire can set the stage for future success or failure. Trupti Natu has been the first data hire multiple times and gone through the process of building teams across the different stages of growth. In this episode she shares her thoughts and insights on how to be intentional about establishing your own data team.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan's active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan's active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it's often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Unstruk is the DataOps platform for your unstructured data. The options for ingesting, organizing, and curating unstructured files are complex, expensive, and bespoke. Unstruk Data is changing that equation with their platform approach to manage your unstructured assets. Built to handle all of your real-world data, from videos and images, to 3D point clouds and geospatial records, to industry-specific file formats, Unstruk streamlines your workflow by converting human hours into machine minutes, and automatically alerting you to insights found in your dark data. Unstruk handles data versioning, lineage tracking, duplicate detection, consistency validation, as well as enrichment through sources including machine learning models, 3rd party data, and web APIs. Go to dataengineeringpodcast.com/unstruk today to transform your messy collection of unstructured data files into actionable assets that power your business.

Mark, Ryan, and Cris work overtime on a Saturday to break down the Consumer Price Index Report. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.


AI-Powered Business Intelligence

Use business intelligence to power corporate growth, increase efficiency, and improve corporate decision making. With this practical book featuring hands-on examples in Power BI with basic Python and R code, you'll explore the most relevant AI use cases for BI, including improved forecasting, automated classification, and AI-powered recommendations. And you'll learn how to draw insights from unstructured data sources like text, document, and image files. Author Tobias Zwingmann helps BI professionals, business analysts, and data analysts understand high-impact areas of artificial intelligence. You'll learn how to leverage popular AI-as-a-service and AutoML platforms to ship enterprise-grade proofs of concept without the help of software engineers or data scientists.

Learn how AI can generate business impact in BI environments
Use AutoML for automated classification and improved forecasting
Implement recommendation services to support decision-making
Draw insights from text data at scale with NLP services
Extract information from documents and images with computer vision services
Build interactive user frontends for AI-powered dashboard prototypes
Implement an end-to-end case study for building an AI-powered customer analytics dashboard
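As a rough illustration of the automated-classification use case, here is a short Python sketch using scikit-learn locally as a stand-in for the AI-as-a-service and AutoML platforms the book actually covers (the data and target are invented):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a BI table, e.g. "will this customer churn?".
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")

# The scored probabilities are what you would feed back into a dashboard.
churn_scores = clf.predict_proba(X_test)[:, 1]
```

In the book's setting, training and scoring would typically happen on a managed platform, with the predictions flowing into Power BI; the shape of the pipeline is the same.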

Summary The best way to make sure that you don’t leak sensitive data is to never have it in the first place. The team at Skyflow decided that the second best way is to build a storage system dedicated to securely managing your sensitive information and making it easy to integrate with your applications and data systems. In this episode Sean Falconer explains the idea of a data privacy vault and how this new architectural element can drastically reduce the potential for making a mistake with how you manage regulated or personally identifiable information.
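To make the architectural idea concrete, here is a toy sketch of the general vault pattern in Python (a generic illustration with invented names, not Skyflow's actual API): sensitive values live only inside the vault, everything downstream stores opaque tokens, and detokenization is a privileged operation.

```python
import secrets

class ToyPrivacyVault:
    """Toy in-memory vault; a real one adds encryption, access policies, and audit logs."""

    def __init__(self) -> None:
        self._records: dict[str, str] = {}  # token -> sensitive value

    def tokenize(self, value: str) -> str:
        # Issue an opaque token; only the vault can map it back.
        token = "tok_" + secrets.token_hex(8)
        self._records[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # In a real vault this path is gated per caller by access policies.
        return self._records[token]

vault = ToyPrivacyVault()
token = vault.tokenize("jane@example.com")
# The application database and warehouse store only the token.
print(token)
print(vault.detokenize(token))  # privileged path recovers the raw value
```

Production vaults also typically offer deterministic tokenization, where the same value always yields the same token, which keeps tokenized columns joinable across tables and services.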

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking all of that information into a new silo, unleash its transformative potential with Atlan's active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how you can take advantage of active metadata and escape the chaos.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it's often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it's no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That's where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you're a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I'm interviewing Sean Falconer about the idea of a data privacy vault and how the Skyflow team are working to make it turn-key.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Skyflow is and the story behind it?
What is a "data privacy vault" and how does it differ from strategies such as privacy engineering or existing data governance patterns?
What are the primary use cases and capabilities that you are focused on solving for with Skyflow?

Who is the target customer for Skyflow (e.g. how does it enter an organization)?

How is the Skyflow platform architected?

How have the design and goals of the system changed or evolved over time?

Can you describe the process of integrating with Skyflow at the application level?
For organizations that are building analytical capabilities on top of the data managed in their applications, what are the interactions with Skyflow at each of the stages in the data lifecycle?
One of the perennial problems with distributed systems is the challenge of joining data across machine boundaries. How do you mitigate that problem?
On your website there are different "vaults" advertised in the form of healthcare, fintech, and PII. What are the different requirements across each of those problem domains?

What are the commonalities?

As a relatively new company in an emerging product category, what are some of the customer education challenges that you are facing?
What are the most interesting, innovative, or unexpected ways that you have seen Skyflow used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Skyflow?
When is Skyflow the wrong choice?
What do you have planned for the future of Skyflow?

Contact Info

LinkedIn
@seanfalconer on Twitter
Website

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

Skyflow
Privacy Engineering
Data Governance
Homomorphic Encryption
Polymorphic Encryption

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Summary Cloud services have made highly scalable and performant data platforms economical and manageable for data teams. However, they are still challenging to work with and manage for anyone who isn’t in a technical role. Hung Dang understood the need to make data more accessible to the entire organization and created Y42 as a better user experience on top of the "modern data stack". In this episode he shares how he designed the platform to support the full spectrum of technical expertise in an organization and the interesting engineering challenges involved.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

This episode is brought to you by Acryl Data, the company behind DataHub, the leading developer-friendly data catalog for the modern data stack. Open Source DataHub is running in production at several companies like Peloton, Optum, Udemy, Zynga and others. Acryl Data provides DataHub as an easy to consume SaaS product which has been adopted by several companies. Sign up for the SaaS product at dataengineeringpodcast.com/acryl

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.

The most important piece of any data project is the data itself, which is why it is critical that your data source is high quality. PostHog is your all-in-one product analytics suite including product analysis, user funnels, feature flags, experimentation, and it's open source so you can host it yourself or let them do it for you! You have full control over your data and their plugin system lets you integrate with all of your other data tools, including data warehouses and SaaS platforms. Give it a try today with their generous free tier at dataengineeringpodcast.com/posthog

Your host is Tobias Macey and today I'm interviewing Hung Dang about Y42, the full-stack data platform that anyone can run.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Y42 is and the story behind it?
How would you characterize your positioning in the data ecosystem?
What are the problems that you are trying to solve?

Who are the personas that you optimize for and how does that manifest in your product design and feature priorities?

How is the Y42 platform implemented?

What are the core engineering problems that you have had to address in order to tie together the various underlying services that you integrate?
How have the design and goals of the product changed or evolved since you started working on it?

What are the sharp edges and failure conditions that you have had to automate around in order to support non-technical users?
What is the process for integrating Y42 with an organization's data systems?

What is the story for onboarding from existing systems and importing workflows (e.g. Airflow DAGs)?

Modern analytics teams are central business functions directly and indirectly responsible for increasing revenue, reducing costs, optimising processes and improving customer and employee satisfaction. But there are many obstacles along the way. Data needs collecting, projects need careful design and execution and stakeholders need convincing. Analytics teams are required to cover a wide range of technical knowledge, business acumen and leadership skills to be impactful. What is the recipe for creating analytics teams that deliver impactful solutions and drive real business value? What are the technical, interpersonal and leadership skills required to lead the business through change and adoption of analytics? To answer these questions, and many more relating to the art and science of building excellent analytics functions, I recently spoke to John K. Thompson. John is an international data and technology executive with over 30 years of experience in business intelligence and advanced analytics and author of the best-seller 'Building Analytics Teams'. In this episode of Leaders of Analytics, we discuss:

The hallmarks of an excellent analytics team
What a perfect analytics team looks like
The skills, personality traits and behaviours you need in an analytics team
The common traits of highly effective analytics leaders
How analytics leaders set themselves up to meet the expectations of business stakeholders
How to select and prioritise the right projects to work on
Where organisations typically fail when designing analytics teams
The lowdown on John's upcoming book, and much more.

John on LinkedIn: https://www.linkedin.com/in/johnkthompson/
John's book 'Building Analytics Teams': https://www.packtpub.com/product/building-analytics-teams/9781800203167

Mark, Ryan, and Cris welcome back Marisa DiNatale, Senior Director at Moody's Analytics, to discuss the May U.S. Employment Report. They also debate whether the economy is strong or weak and if it's possible we can talk ourselves into a recession. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight. 


ClickHouse, the lightning-fast open source OLAP database, was initially released in 2016 as an open source project out of Yandex, the Russian search giant. In 2021, Aaron Katz helped form a group to spin it out of Yandex as an independent company, dedicated to the development + commercialization of the open source project. In this conversation with Tristan and Julia, Aaron gets into why he believes open source, independent software companies are the future. And of course, this conversation wouldn't be complete without a riff on the classic "one database to rule all workloads" thread. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs.

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Gene Seroka (Port of Los Angeles), Ryan Sweet

Mark, Ryan, and Cris welcome Gene Seroka, Executive Director of the Port of Los Angeles, to discuss current global supply chain conditions and economic implications. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.


podcast_episode
by Mico Yuk (Data Storytelling Academy), Jordan Morrow (Brainstorm, Inc.)

We have all seen massive shifts in the world of data literacy in the last few years, with the pandemic having undeniable and sometimes surprising effects on the professional landscape. Joining me to talk about these changes and the skills that can best serve data literacy in the contemporary climate is returning guest and one of the godfathers of data literacy, Jordan Morrow! Jordan was previously a guest on Episode 66, so I recommend listening to that masterclass first if you have not already! In today's chat we get into some of Jordan's amazing frameworks from his new book, Be Data Literate: The Data Literacy Skills Everyone Needs To Succeed, with Jordan giving listeners an inside scoop on his amazing ideas and practical wisdom!

In this episode, you'll learn:
[0:15:41] How data literacy has infiltrated most spheres and the roles of many other professionals.
[0:16:20] Jordan explains the four levels of analytics: descriptive, diagnostic, predictive, and prescriptive.
[0:24:05] Generational gaps in dealing with data; Jordan's advice for improvement.
[0:31:52] Jordan shares his six-step framework.
[0:34:11] Some advice from Jordan to women working in the world of data literacy.
[0:38:09] Getting to grips with Jordan's 'three Cs' of data literacy and the value of curiosity, creativity, and critical thinking!
[0:43:00] Jordan comments on strategies for building rapport, facilitating relationships, and the need for interpersonal skills.
[0:49:51] Exciting information about the raffle for Jordan's book, exclusive to AOF listeners!

For full show notes, and the links mentioned, visit: https://bibrainz.com/podcast/86

Enjoyed the Show? Please leave us a review on iTunes.