talk-data.com

Topic: Analytics

Tags: data_analysis, insights, metrics

Activity Trend: peak of 398 activities per quarter, 2020-Q1 to 2026-Q1

Activities: 4552 · Newest first

The Return on Analytics Engineering

As analytics engineers and data people, we know the value we create in our own blood, sweat, and dbt models. But how is this value actually realized in practice? In this talk, David Jayatillake (Metaplane) draws on his experiences to discuss the processes, ways of thinking, tooling, and governance needed to realize the benefits from analytics engineering work in the greater organization.

Check the slides here: https://docs.google.com/presentation/d/1VmmqNQsrv1t0uuV81O6PJQ1XASyLRGxvAdB8eWIG9TQ/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

When analysts outnumber engineers 5 to 1: Our journey with dbt at M1

How do you train and enable 20 data analysts to use dbt Core in a short amount of time?

At M1, engineering and analytics are far apart on the org chart, but work hand-in-hand every day. M1 engineering has a culture that celebrates open source, where every data engineer is trained and empowered to work all the way down the infrastructure stack, using tools like Terraform and Kubernetes. The analytics team comprises strong SQL writers who use Tableau to create visualizations used company-wide. When M1 knew they needed a tool like dbt for change management and data documentation generation, they had to figure out how to bridge the gap between engineering and analytics to enable analysts to contribute with minimal engineering intervention. Join Kelly Wachtel, a senior data engineer at M1, to hear how they trained about 20 analysts to use git and dbt Core over the past year and strengthened collaboration between the data engineering and analytics teams.

Check the slides here: https://docs.google.com/presentation/d/1CWI97EMyLIz6tptLPKt4VuMjJzV_X3oO/edit?usp=sharing&ouid=110293204340061069659&rtpof=true&sd=true

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Do you really need a data-driven culture? Maybe not. According to Bill Schmarzo, the CEO’s mandate is to become value-driven, not data-driven. For analytics teams that means one thing: no one cares about your data, they want results! In this episode of Leaders of Analytics, Bill and I explore the economics of data & analytics and how to drive powerful decisions with data — decisions that turn into business value.

Bill is the author of four textbooks and one comic book on generating value with analytics. He is a long-serving business executive, adjunct professor, university educator and global influencer in the sphere of big data, digital transformation and data & analytics leadership.

In this episode of Leaders of Analytics, we discuss:

Why Bill has split his career between corporate leadership and education
What value engineering is and how it pertains to data and analytics
How to determine the economic value of data and analytics
Why data management is the single most important business discipline in the 21st century, and much more.

Bill's website: https://deanofbigdata.com/
Bill on LinkedIn: https://www.linkedin.com/in/schmarzo/
Bill on Twitter: https://twitter.com/schmarzo

While securing the support of senior executives is a major hurdle in implementing a data transformation program, it’s often one of the earliest and easiest hurdles to overcome compared with the program itself. Leading a data transformation program requires thorough planning, organization-wide collaboration, careful execution, robust testing, and much more.

Vanessa Gonzalez is the Senior Director of Data and Analytics for ML & AI at Transamerica. She is an experienced senior data manager with a background in data transformation, leadership, and strategic direction for data science and data governance teams.

Vanessa joins the show to share how she is helping to lead Transamerica’s Data Transformation program. In this episode, we discuss the biggest challenges Transamerica has faced throughout the process, the most important factors to making any large-scale transformation successful, how to collaborate with other departments, how Vanessa structures her team, the key skills data scientists need to be successful, and much more.

Check out this month’s events: https://www.datacamp.com/data-driven-organizations-2022

Summary Agile methodologies have been adopted by a majority of teams for building software applications. Applying those same practices to data can prove challenging due to the number of systems that need to be included to implement a complete feature. In this episode Shane Gibson shares practical advice and insights from his years of experience as a consultant and engineer working in data about how to adopt agile principles in your data work so that you can move faster and provide more value to the business, while building systems that are maintainable and adaptable.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business-critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect.

Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day. Especially once they realize 90% of all major data sources like Google Analytics, Salesforce, Adwords, Facebook, Spreadsheets, etc., are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get: real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping to precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24×7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2. Go to dataengineeringpodcast.com/hevodata and sign up for a free 14-day trial that also comes with 24×7 support.
Your host is Tobias Macey and today I’m interviewing Shane Gibson about how to bring Agile practices to your data management workflows.

Interview

Introduction

How did you get involved in the area of data management?
Can you describe what AgileData is and the story behind it?
What are the main industries and/or use cases that you are focused on supporting?
The data ecosystem has been trying on different paradigms from software development for some time now (e.g. DataOps, version control, etc.). What are the aspects of Agile that do and don’t map well to data engineering/analysis?
One of the perennial challenges of data analysis is how to approach data modeling. How do you balance the need to provide value with the long-term impacts of incomplete or underinformed modeling decisions made in haste at the beginning of a project?

How do you design in affordances for refactoring of the data models without breaking downstream assets?

Another aspect of implementing data products/platforms is how to manage permissions and governance. What are the incremental ways that those principles can be incorporated early and evolved along with the overall analytical products?
What are some of the organizational design strategies that you find most helpful when establishing or training a team who is working on data products?
In order to have a useful target to work toward it’s necessary to understand what the data consumers are hoping to achieve. What are some of the challenges of doing requirements gathering for data products? (e.g. not knowing what information is available, consumers not understanding what’s hard vs. easy, etc.)

How do you work with the "customers" to help them understand what a reasonable scope is and translate that to the actual project stages for the engineers?

What are some of the perennial questions or points of confusion that you have had to address with your clients on how to design and implement analytical assets?
What are the most interesting, innovative, or unexpected ways that you have seen agile principles used for data?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on AgileData?
When is agile the wrong choice for a data project?
What do you have planned for the future of AgileData?

Contact Info

LinkedIn
@shagility on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

AgileData
OptimalBI
How To Make Toast
Data Mesh
Information Product Canvas
DataKitchen (Podcast Episode)
Great Expectations (Podcast Episode)
Soda Data (Podcast Episode)
Google DataStore
Unfix.work
Activity Schema (Podcast Episode)
Data Vault (Podcast Episode)
Star Schema
Lean Methodology
Scrum
Kanban

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Sponsored By: Atlan

Have you ever woken up to a crisis because a number on a dashboard is broken and no one knows why? Or sent out frustrating Slack messages trying to find the right data set? Or tried to understand what a column name means?

Our friends at Atlan started out as a data team themselves, faced all this collaboration chaos first-hand, and began building Atlan as an internal tool. Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more.

Go to dataengineeringpodcast.com/atlan and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Prefect

Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business-critical tasks a month. For more information on Prefect, visit…
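The “code as workflows” idea is easiest to see in a small example. Below is a minimal sketch of a Prefect-style flow in Python; the task names, data, and retry settings are illustrative assumptions, not taken from the episode or the ad copy.

# A minimal sketch of "code as workflows" using Prefect's flow/task
# decorators; names and retry settings are illustrative assumptions.
from prefect import flow, task

@task(retries=2)  # retry transient failures, e.g. a flaky source API
def extract() -> list[int]:
    return [1, 2, 3]

@task
def transform(records: list[int]) -> list[int]:
    return [r * 10 for r in records]

@task
def load(records: list[int]) -> None:
    print(f"loaded {len(records)} records")

@flow  # the flow is plain Python; tasks compose like ordinary functions
def pipeline() -> None:
    load(transform(extract()))

if __name__ == "__main__":
    pipeline()

Because the flow is ordinary Python rather than a DAG description in a separate config language, the same code can be run locally for testing or handed to an orchestration backend for scheduling and monitoring.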

On Ryan's final episode of Inside Economics, John Leer, Chief Economist of Morning Consult, joins the podcast to discuss the state of the economy, consumer sentiment, inflation expectations, and the potential early signs of a wage-price spiral.  Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

In today’s episode, we’re talking to W. Curtis Preston, Chief Technical Evangelist at Druva. Druva enables cyber, data and operational resilience for organizations with its Data Resiliency Cloud.

We cover a wide range of fascinating topics, including:

W. Curtis’ background and how he came to join Druva.
The problems Druva solves and the customers it serves.
What security issues should we be paying more attention to in SaaS?
The security challenges with passwords and multi-factor authentication.
The importance of backups for SaaS vendors and customers.
Why SaaS companies should consider hiring a tech evangelist.

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Mark, Ryan, and Cris welcome back Aaron Klein, Miriam K. Carliner Chair and Senior Fellow at the Brookings Institution, to discuss stress points in the global financial system, the conditions for a financial crisis, and whether central banks are going to break something. Full episode transcript. Follow @aarondklein on Twitter. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hybrid development teams are critical to the success of a data & analytics program. Data leaders must invest time, energy, and thought in the creation of these teams and how best to support them. It’s critical that they allocate staff time to nurture knowledge flows between component groups. Published at: https://www.eckerson.com/articles/an-operating-model-for-data-analytics-part-ii-knowledge-flows

Today I’m chatting with Iván Herrero Bartolomé, Chief Data Officer at Grupo Intercorp. Iván describes how he was prompted to write his new article in CDO Magazine, “CDOs, Let’s Get Out of Our Comfort Zone” as he recognized the importance of driving cultural change within organizations in order to optimize the use of data. Listen in to find out how Iván is leveraging the role of the analytics translator to drive this cultural shift, as well as the challenges and benefits he sees data leaders encounter as they move from tactical to strategic objectives. Iván also reveals the number one piece of advice he’d give CDOs who are struggling with adoption. 

Highlights / Skip to:

Iván explains what prompted him to write his new article, “CDOs, Let’s Get Out of Our Comfort Zone” (01:08)
What Iván feels is necessary for data leaders to close the gap between data and the rest of the business and why (03:44)
Iván dives into who he feels really owns delivery of value when taking on new data science and analytics projects (09:50)
How Iván’s team went from managing technical projects that often didn’t make it to production to working on strategic projects that almost always make it to production (13:06)
The framework Iván has developed to upskill technical and business roles to be effective data / analytics translators (16:32)
The challenge Iván sees data leaders face as they move from setting and measuring tactical goals to moving towards strategic goals and initiatives (24:12)
Iván explains how the C-Suite’s attitude impacts the cross-functional role of data & analytics leadership (28:55)
The number one piece of advice Iván would give new CDOs struggling with low adoption of their data products and solutions (31:45)

Quotes from Today’s Episode

“We’re going to do all our best to ensure that [...] everything that is expected from us is done in the best possible way. But that’s not going to be enough. We need a sponsorship and we need someone accountable for the project and someone who will be pushing and enabling the use of the solution once we are gone. Because we cannot stay forever in every company.” – Iván Herrero Bartolomé (10:52)

“We are trying to upskill people from the business to become data translators, but that’s going to take time. Especially what we try to do is to take product owners and give them a high-level immersion on the state-of-the-art and the possibilities that data analytics bring to the table. But as we can’t rely on our companies having this kind of talent and these data translators, they are one of the profiles that we bring in for every project that we work on.” – Iván Herrero Bartolomé (13:51)

“There’s a lot to do, not just between data and analytics and the other areas of the company, but aligning the incentives of all the organization towards the same goals in a way that there’s no friction between the goals of the different areas, the people, [...] and the final goals of the organization.” – Iván Herrero Bartolomé (23:13)

“Deciding which goals are you going to be co-responsible for, I think that is a sophisticated process that it’s not mastered by many companies nowadays. That probably is one of the main blockers keeping data analytics areas working far from their business counterparts.” – Iván Herrero Bartolomé (26:05)

“When the C-suite looks at data and analytics, if they think these are just technical skills, then the data analytics team are just going to behave as technical people. And many, many data analytics teams are set up as part of the IT organization. So, I think it all begins somehow with how the C-suite of our companies look at us.” – Iván Herrero Bartolomé (28:55)

“For me, [digital] means much more than the technical development of solutions; it should also be part of the transformation of the company, both in how companies develop relationships with their customers, but also inside how every process in the companies becomes more nimble and can react faster to the changes in the market.” – Iván Herrero Bartolomé (30:49)

“When you feel that everyone else is not doing what you think they should be doing, think twice about whether it is they who are not doing what they should be doing or if it’s something that you are not doing properly.” – Iván Herrero Bartolomé (31:45)

Links

“CDOs, Let’s Get Out of Our Comfort Zone”: https://www.cdomagazine.tech/cdo_magazine/topics/opinion/cdos-lets-get-out-of-our-comfort-zone/article_dce87fce-2479-11ed-a0f4-03b95765b4dc.html
LinkedIn: https://www.linkedin.com/in/ivan-herrero-bartolome/

Great analytics teams understand that they are responsible for two things concurrently: production and consumption. Most analytics teams master the production part well. After all, that’s why they exist: to produce analytics. However, analytics only matter if someone consumes them and makes valuable decisions as a result. “Decision + value” is what we’re after. To be able to make valuable decisions from analytics, consumers must be data and analytics literate, and that often comes down to education and culture creation. So, how do you build analytics literacy in your organisation? In this episode of Leaders of Analytics, Ben Jarvis, Head of Scaled Customer Services and Operations AUNZ at Google, answers this question and many more related to building a strong analytics culture.

Listen to learn:

How Ben went from practicing law to becoming a senior analytics leader and operational GM
How to coach and mentor technical and non-technical stakeholders on data and analytics literacy
How traditional businesses that aren’t born out of the internet era can transform into data-driven and analytics-literate organisations, and much more.

Connect with Ben on LinkedIn: https://www.linkedin.com/in/ben-stuart-jarvis/

As data leaders continue to fill their talent gap, how should they approach sourcing, retaining, and upskilling their talent? What strategies should data leaders adopt in order to accomplish their talent goals and become data-driven?

Kyle Winterbottom joins the show to talk about the key differentiators between data teams that build talent-dense teams and those that do not. Kyle is the host of Driven by Data: The Podcast and the Founder & CEO of Orbition, a talent solutions provider for scaling Data, Analytics, & Artificial Intelligence teams across the UK, Europe, and the USA. As an accomplished expert and thought leader in talent acquisition, attraction, and retention, as well as scaling data teams, Kyle was named one of Data IQ’s 100 Most Influential People in Data for 2022.

In this episode, we talk about how data teams can position themselves to attract top talent, how to properly articulate how data team members are adding value to the business, how organizations can accidentally set data leaders up to fail, how to approach upskilling, and how data leaders can create an employer branding narrative to attract top talent.

Check out this month’s events: https://www.datacamp.com/data-driven-organizations-2022

Summary Logistics and supply chains are under increased stress and scrutiny in recent years. In order to stay ahead of customer demands, businesses need to be able to react quickly and intelligently to changes, which requires fast and accurate insights into their operations. Pathway is a streaming database engine that embeds artificial intelligence into the storage, with functionality designed to support the spatiotemporal data that is crucial for shipping and logistics. In this episode Adrian Kosowski explains how the Pathway product got started, how its design simplifies the creation of data products that support supply chain operations, and how developers can help to build an ecosystem of applications that allow businesses to accelerate their time to insight.


by Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Colleague Marisa DiNatale, Director Economist at Moody's Analytics, joins Mark and Cris to break down the September Consumer Price Index Report. They also discuss the impact of inflation on energy prices, food prices, the housing market, and wage growth. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Data orchestration uses caching, APIs, and centralized metadata to help compute engines access data in hybrid or multi-cloud environments. Data platform engineers can use data orchestration to gain simple, flexible, and high-speed access to distributed data for modern analytics and AI projects. Published at: https://www.eckerson.com/articles/data-orchestration-simplifying-data-access-for-analytics
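As a rough illustration of that idea (a toy sketch, not any particular product’s API), the code below pairs a centralized metadata catalog with a local cache so that repeated reads of distributed data skip the remote round-trip; all dataset names and locations here are hypothetical.

# Toy sketch of data orchestration: a centralized metadata catalog plus a
# local cache in front of "remote" stores. All names are hypothetical.

# Stand-ins for distributed storage across two clouds.
REMOTE_STORES = {
    "s3://warehouse/sales_2022.csv": b"order_id,amount\n1,9.99\n",
    "gs://lake/users.csv": b"user_id,region\n42,emea\n",
}

# Centralized metadata: logical dataset name -> physical location.
CATALOG = {
    "sales_2022": "s3://warehouse/sales_2022.csv",
    "users": "gs://lake/users.csv",
}

_cache: dict[str, bytes] = {}  # local cache keyed by logical name

def read_dataset(name: str) -> bytes:
    """Resolve a logical name via the catalog; serve repeat reads from cache."""
    if name in _cache:                   # cache hit: skip the remote read
        return _cache[name]
    data = REMOTE_STORES[CATALOG[name]]  # metadata lookup, then remote read
    _cache[name] = data
    return data

print(read_dataset("users"))  # first read goes to the "remote" store
print(read_dataset("users"))  # second read is served from the local cache

The point of the sketch is that compute engines only ever see the logical interface, so data can move between clouds or regions by updating the catalog rather than rewriting every pipeline.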

In today’s episode, we’re talking to Andy Serwatuk, Director of Solutions Architecture at Onix Networking Corp., a Google Cloud Premier Partner enabling companies to effectively leverage the Google Cloud Platform across industries and use cases.

We discuss:

Andy’s background and how he started at Onix.
The differences between SaaS and non-SaaS companies.
Is Google Cloud a no-brainer for SaaS companies today?
The value of outsourcing tasks to citizens.
How can SaaS companies learn more about IoT and other emerging trends?
…and much more.

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Operations vs. product: The data definition showdown

As organizations continue scaling their data investments, analytics practitioners are increasingly getting exposed to more areas of the business. In the process, they are learning that there are often discrepancies in how key business metrics are defined across teams. After encountering this first-hand in her transition from business operations to product analytics, Nadja Jury (Education Perfect) became curious to learn how others were navigating this reality and set out on a research mission. Nadja joins us at Coalesce to share her learnings from the field and build more community around the conversation.
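To make the “same metric, different definitions” problem concrete, here is a small invented example (not from the talk): two teams both report “active users” from the same event data, but because they filter events differently, their headline numbers disagree.

# Illustrative only: two teams compute "active users" from the same events
# but with different definitions, so the headline numbers disagree.
from datetime import date

# (user_id, event_type, event_date) -- invented sample data
events = [
    (1, "login", date(2022, 10, 1)),
    (2, "login", date(2022, 9, 1)),
    (2, "purchase", date(2022, 10, 2)),
    (3, "page_view", date(2022, 10, 3)),
]

AS_OF = date(2022, 10, 15)

def ops_active_users() -> int:
    # Operations definition: any event in the last 30 days counts.
    return len({u for u, _, d in events if (AS_OF - d).days <= 30})

def product_active_users() -> int:
    # Product definition: only logins or purchases count.
    core = {"login", "purchase"}
    return len({u for u, e, d in events if e in core and (AS_OF - d).days <= 30})

print(ops_active_users())      # 3 -- includes the page_view-only user
print(product_active_users())  # 2 -- excludes that user

Reconciling numbers like these usually means agreeing on one definition and encoding it once in a shared layer, rather than leaving it implicit in each team’s queries.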

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Summary The core of any data platform is the centralized storage and processing layer. For many that is a data warehouse, but in order to support a diverse and constantly changing set of uses and technologies the data lakehouse is a paradigm that offers a useful balance of scale and cost, with performance and ease of use. In order to make the data lakehouse available to a wider audience, the team at Iomete built an all-in-one service that handles management and integration of the various technologies so that you can worry about answering important business questions. In this episode Vusal Dadalov explains how the platform is implemented, the motivation for a truly open architecture, and how they have invested in integrating with the broader ecosystem to make it easy for you to get started.


Colleague Dante DeAntonio, Senior Economist at Moody's Analytics, joins the podcast to analyze the September U.S. Employment Report and OPEC's announcement to cut oil production. Everyone gives their latest odds of a recession and how soon that could happen. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

In this episode, Jason Foster talks to David Bader, Distinguished Professor in the Department of Data Science at the New Jersey Institute of Technology. They talk about building massive-scale analytics, how to use large amounts of data to gain insights, the complexity of data sets, and how to bridge the gap between architecture and algorithms. David also shares his notable experience, talks about the capabilities and skills data departments require to run large-scale data projects, and explores some use cases in diverse industries.