talk-data.com

Topic

Analytics

data_analysis insights metrics

4552

tagged

Activity Trend

Peak: 398 activities/quarter (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

Observability Engineering

Observability is critical for building, changing, and understanding the software that powers complex modern systems. Teams that adopt observability are much better equipped to ship code swiftly and confidently, identify outliers and aberrant behaviors, and understand the experience of each and every user. This practical book explains the value of observable systems and shows you how to practice observability-driven development. Authors Charity Majors, Liz Fong-Jones, and George Miranda from Honeycomb explain what constitutes good observability, show you how to improve upon what you're doing today, and provide practical dos and don'ts for migrating from legacy tooling, such as metrics, monitoring, and log management. You'll also learn the impact observability has on organizational culture (and vice versa).

You'll explore:

How the concept of observability applies to managing software at scale

The value of practicing observability when delivering complex cloud native applications and systems

The impact observability has across the entire software development lifecycle

How and why different functional teams use observability with service-level objectives

How to instrument your code to help future engineers understand the code you wrote today (see the sketch below)

How to produce quality code for context-aware system debugging and maintenance

How data-rich analytics can help you debug elusive issues
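The instrumentation point above is easier to see in code than in prose. As a rough illustration only, not an excerpt from the book, here is a minimal sketch using the OpenTelemetry Python API; the service name, span name, and attributes are invented for the example:

```python
# A minimal sketch of wide-event instrumentation (not from the book).
# Assumes the opentelemetry-api and opentelemetry-sdk packages are installed;
# span and attribute names are hypothetical examples.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints finished spans to stdout for demonstration.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

def process_order(order_id: str, user_id: str, cart_total_usd: float) -> None:
    # One span per unit of work, enriched with request-scoped context so a
    # future debugger can slice by user, order, or amount.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("user.id", user_id)
        span.set_attribute("cart.total_usd", cart_total_usd)
        # ... business logic would run here ...

process_order("ord-42", "user-7", 99.95)
```

The design point is that each span carries enough high-cardinality context (user, order, amount) to answer questions about any single request after the fact, in the spirit of the book's contrast with pre-aggregated legacy metrics.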

This is the second episode of a two-part series of Leaders of Analytics featuring Felipe Flores, a global thought leader and influencer in the field of data science and artificial intelligence. Felipe is the founder of Data Futurology, a podcast and events company with more than 10,000 weekly listeners, Head of Data & Technology at Honeysuckle Health, and co-organiser of Data Science Melbourne. In this episode we discuss:

Felipe’s work at Honeysuckle Health

What Honeysuckle Health does and why the company was founded by two large insurance organisations

How data-driven personalised health care works in practice and the typical outcomes patients see

How data will be used to drive positive health outcomes in the future, and much more.

In this episode of SaaS Scaled, we’re talking to Steven Schneider. Steven is the CEO of Capitol Canary, a company based in the DC area that works with government affairs teams to help give them the edge to win their policy battles.   We chat about the problems that Capitol Canary deals with and Steven’s experience as CEO, including some of the challenges he faced along the way and the various roles he has played. We talk about the difference between addressing existing market needs and creating new markets, and Steven shares his thoughts on when to be innovative and when to play it safe.   We talk about data and the differences between a company with a strong data culture and one without. We also discuss how analytics will continue to play a more important role in businesses and applications, and what this means for the future. Finally, Steven speaks about how Capitol Canary plans to use data as time goes on.

podcast_episode
by Mico Yuk (Data Storytelling Academy), Derrick Louis (RaceTrac Petroleum; Cumberland Farms/EG America)

Data is a company's most important asset, so why aren't more companies turning their analytics departments into profit centers? In today's episode, my guest took this concept to the next level by bringing a dedicated accountant on staff to manage their analytics profit center. When Derrick Louis says he is focused on the bottom line, he means it! Previously the Executive Director of IT at RaceTrac Petroleum, Derrick currently heads up IT at Cumberland Farms/EG America. As a business-driven IT leader, he harnesses the power of technology, people, and innovation to combat rising costs, create nimble organizations, and drive scalable growth. In today's conversation, Derrick shares some practical tips and advice that will inspire you to think differently about how you work with your customers, especially when it comes to being financially responsible. We touch on why you should be quantifying how your analytics team's work adds to your bottom line, how to know when it's time to become a profit center, the value of keeping track of every dollar you save your org by bringing an accountant in, and so much more! For data leaders who are struggling to show their value to their org or aren't sure where to start on creating a profit center, this episode is for you!

In this episode, you'll learn:

[0:15:49] How to convince your company to shift from a cost center to a profit center.

[0:22:40] How effective data leaders deal with pushback from their leadership.

[0:27:20] Derrick's advice for departments that want to become profit centers, but don't know where to start.

[0:33:32] What led Derrick to bring an accountant into his analytics team.

[0:35:28] Some of the specialized skills that an accountant brings to the table.

[0:42:56] Different types of reporting that you need, including the value of user spend.

[0:44:03] Why Derrick says that the accountant ultimately "paid for himself."

For full show notes and the links mentioned, visit: https://bibrainz.com/podcast/84

Enjoyed the Show? Please leave us a review on iTunes.

Summary The predominant pattern for data integration in the cloud has become extract, load, and then transform, or ELT. Matillion was an early innovator of that approach, and in this episode CTO Ed Thompson explains how they have evolved the platform to keep pace with the rapidly changing ecosystem. He describes how the platform is architected, the challenges related to selling cloud technologies into enterprise organizations, and how you can adopt Matillion for your own workflows to reduce the maintenance burden of data integration.
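To make the ELT pattern in this summary concrete, here is a minimal, tool-agnostic sketch (not Matillion's actual API; SQLite stands in for a cloud warehouse, and the table and column names are invented):

```python
# ELT in miniature: load raw data untouched, then transform inside the engine.
# SQLite is a stand-in for a cloud warehouse; all names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract + Load: land the source rows as-is, with no transformation in flight.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, country TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 1999, "us"), (2, 525, "US"), (3, 1250, "de")],
)

# Transform: reshape after loading, using the warehouse's own SQL engine
# (the step that ELT defers until the data has landed).
conn.execute("""
    CREATE TABLE orders_clean AS
    SELECT id,
           amount_cents / 100.0 AS amount_usd,
           UPPER(country)       AS country
    FROM raw_orders
""")

print(conn.execute("SELECT * FROM orders_clean").fetchall())
```

The point of the pattern is that the load step stays simple and cheap, while transformations become versionable SQL that runs where the data already lives.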

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Atlan is a collaborative workspace for data-driven teams, like Github for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you’re not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit http://www.dataengineeringpodcast.com/montecarlo?utm_source=rss&utm_medium=rss to learn more.

Your host is Tobias Macey and today I’m interviewing Ed Thompson about Matillion, a cloud-native data integration platform for accelerating your time to analytics.

Interview

Introduction

How did you get involved in the area of data management?

Summary Building a data platform is an iterative and evolutionary process that requires collaboration with internal stakeholders to ensure that their needs are being met. Yotpo has been on a journey to evolve and scale their data platform to continue serving the needs of their organization as it increases the scale and sophistication of data usage. In this episode Doron Porat and Liran Yogev explain how they arrived at their current architecture, the capabilities that they are optimizing for, and the complex process of identifying and evaluating new components to integrate into their systems. This is an excellent exploration of the decisions and tradeoffs that need to be made while building such a complex system.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

This episode is brought to you by Acryl Data, the company behind DataHub, the leading developer-friendly data catalog for the modern data stack. Open Source DataHub is running in production at several companies like Peloton, Optum, Udemy, Zynga and others. Acryl Data provides DataHub as an easy to consume SaaS product which has been adopted by several companies. Sign up for the SaaS product at dataengineeringpodcast.com/acryl

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.

The most important piece of any data project is the data itself, which is why it is critical that your data source is high quality. PostHog is your all-in-one product analytics suite including product analysis, user funnels, feature flags, experimentation, and it’s open source so you can host it yourself or let them do it for you! You have full control over your data and their plugin system lets you integrate with all of your other data tools, including data warehouses and SaaS platforms. Give it a try today with their generous free tier at dataengineeringpodcast.com/posthog

Your host is Tobias Macey and today I’m interviewing Doron Porat and Liran Yogev about their experiences designing and implementing a self-serve data platform at Yotpo.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what Yotpo is and the role that data plays in the organization?

What are the core data types and sources that you are working with?

What kinds of data assets are being produced and how do those get consumed and re-integrated into the business?

What are the user personas that you are supporting and what are the interfaces that they are comfortable interacting with?

What is the size of your team and how is it structured?

You recently posted about the current architecture of your data platform. What was the starting point on your platform journey?

What did the early stages of feature and platform evolution look like?

What was the catalyst for making a concerted effort to integrate your systems into a cohesive platform?

What was the scope and directive of the project for building a platform?

What are the metrics and capabilities that you are optimizing for in the structure of your data platform?

What are the organizational or regulatory constraints that you needed to account for?

What are some of the early decisions that affected your available choices in later stages of the project?

What does the current state of your architecture look like?

How long did it take to get to where you are today?

What were the factors that you considered in the various build vs. buy decisions?

How did you manage cost modeling to understand the true savings on either side of that decision?

If you were to start from scratch on a new data platform today what might you do differently?

What are the decisions that proved helpful in the later stages of your platform development?

What are the most interesting, innovative, or unexpected ways that you have seen your platform used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing and implementing your platform?

What do you have planned for the future of your platform infrastructure?

Contact Info

Doron

LinkedIn

Liran

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

Yotpo

Data Platform Architecture Blog Post

Greenplum

Databricks

Metorikku

Apache Hive

CDC == Change Data Capture

Debezium

Podcast Episode

Apache Hudi

Podcast Episode

Upsolver

Podcast Episode

Spark

PrestoDB

Snowflake

Podcast Episode

Druid

Rockset

Podcast Episode

dbt

Podcast Episode

Acryl

Podcast Episode

Atlan

Podcast Episode

OpenLineage

Podcast Episode

Okera

Shopify Data Warehouse Episode

Redshift

Delta Lake

Podcast Episode

Iceberg

Podcast Episode

Outbox Pattern

Backstage

Roadie

Nomad

Kubernetes

Deequ

Great Expectations

Podcast Episode

LakeFS

Podcast Episode

2021 Recap Episode

Monte Carlo

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA


podcast_episode
by Nouriel Roubini (New York University Stern School of Business), Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet

Nouriel Roubini, Professor Emeritus of Economics and International Business at New York University Stern School of Business, joins the podcast to discuss the U.S. and global economic outlook and the threats of stagflation. Full episode transcript. For more from Nouriel Roubini, follow him on Twitter @Nouriel. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.

Questions or Comments, please email us at [email protected]. We would love to hear from you.    To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Artificial Intelligence with Power BI

Discover how to enhance your data analysis with 'Artificial Intelligence with Power BI,' a resource designed to teach you how to leverage Power BI's AI capabilities. You will learn practical methods for enriching your analytics with forecasting, anomaly detection, and machine learning, equipping you to create intelligent, insightful BI reports.

What this Book will help me do

Learn how to apply AI capabilities such as forecasting and anomaly detection to enrich your reports and drive actionable insights.

Explore data preparation techniques optimized for AI, ensuring your datasets are structured for advanced analytics.

Develop skills to integrate Azure Machine Learning and Cognitive Services into Power BI, expanding your analytical toolset.

Understand how to build Q&A interfaces and integrate Natural Language Processing into your BI solutions.

Gain expertise in training and deploying your own machine learning models to achieve tailored insights and predictive analytics.

Author(s)

Diepeveen is an experienced data analyst and Power BI expert with a passion for making advanced analytics accessible to professionals. With years of hands-on experience working in the data analytics field, they deliver insights using intuitive, practical approaches through clear and engaging tutorials.

Who is it for?

This book is ideal for data analysts and BI developers who aim to expand their analytics capabilities with AI. Readers should already be familiar with Power BI and are looking for a resource to teach them how to incorporate predictive and advanced AI techniques into their reporting workflow. Whether you're seeking to gain a professional edge or enhance your organization's data storytelling and insights, this guide is perfect for you.

IBM z16 Technical Introduction

This IBM® Redbooks® publication introduces the latest member of the IBM Z® platform that is built with the IBM Telum processor: the IBM z16 server. The IBM Z platform is recognized for its security, resiliency, performance, and scale. It is relied on for mission-critical workloads and as an essential element of hybrid cloud infrastructures. The IBM z16 server adds capabilities and value with innovative technologies that are needed to accelerate the digital transformation journey. This book explains how the IBM z16 server uses innovations and traditional IBM Z strengths to satisfy the growing demand for cloud, analytics, and a more flexible infrastructure. With the IBM z16 servers as the base, applications can run in a trusted, reliable, and secure environment that improves operations and lessens business risk.

The Tableau Workshop

The Tableau Workshop offers a comprehensive, hands-on guide to mastering data visualization with Tableau. Through practical exercises and engaging examples, you will learn how to prepare, analyze, and visualize data to uncover valuable business insights. By completing this book, you will confidently understand the key concepts and tools needed to create impactful data-driven visual stories.

What this Book will help me do

Master the use of Tableau Desktop and Tableau Prep for data visualization tasks.

Gain the ability to prepare and process data for effective analysis.

Learn to choose and utilize the most appropriate chart types for different scenarios.

Develop the skills to create interactive dashboards that engage stakeholders.

Understand how to perform calculations to extract deeper insights from data.

Author(s)

Sumit Gupta, Pinto, Shweta Savale, JC Gillet, and Cherven are experts in the field of data analytics and visualization. With diverse backgrounds in business intelligence and hands-on experience with industry tools like Tableau, they bring valuable insights to this book. Their collaborative effort offers practical, real-world knowledge tailored to help learners excel in Tableau and data visualization. With their passion for making technical concepts accessible, they guide readers step by step through their learning journey.

Who is it for?

This book is ideal for professionals, analysts, or students looking to delve into the world of data visualization with Tableau. Whether you're a complete beginner seeking foundational knowledge, or an intermediate user aiming to refine your skills, this book offers the practical insights you need. It's designed for those who want to master Tableau tools, explore meaningful data insights, and effectively communicate them through engaging dashboards and stories.

Automated decisions, personalised customer and employee experiences and data-driven decision-making are at the core of digital transformation in the 2020s. In other words, data is eating the world and all modern leaders must know how to use data, analytics and advanced data science to power their organisations. So, how do organisations set themselves up for success in a data-driven world, technically and culturally? To answer this question and many more relating to data-driven innovation and intrapreneurship, I recently spoke to Felipe Flores. Felipe is a global thought leader and influencer in the field of data science and artificial intelligence. He is the founder of Data Futurology – a podcast and events company with more than 10,000 weekly listeners, Head of Data & Technology at Honeysuckle Health and co-organiser of Data Science Melbourne. In this first episode of a two-part series of Leaders of Analytics featuring Felipe, we discuss:

Felipe’s journey from a young backpacker to a global data science executive

What Data Futurology does and why Felipe started it

How to innovate with data science

The biggest trends in data science in the next 1-3 years

What the perfect data-driven organisation looks like and much more.

Felipe on LinkedIn: https://www.linkedin.com/in/felipe-flores-analytics/

Data Futurology: https://www.datafuturology.com/

Honeysuckle Health: https://www.honeysucklehealth.com.au/

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet, Emily Mandel (Moody's Analytics)

Colleague Emily Mandel, economist at Moody's Analytics, moderates this Q&A session to get to know Mark, Ryan, and Cris a little better. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.

Questions or Comments, please email us at [email protected]. We would love to hear from you.    To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Summary A huge amount of effort goes into modeling and shaping data to make it available for analytical purposes. This is often due to the need to simplify the final queries so that they are performant for visualization or limited exploration. In order to cut down the level of effort involved in making data usable, Matthew Halliday and his co-founders created Incorta as an end-to-end, in-memory analytical engine that removes barriers to insights on your data. In this episode he explains how the system works, the use cases that it empowers, and how you can start using it for your own analytics today.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Atlan is a collaborative workspace for data-driven teams, like Github for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you’re not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit http://www.dataengineeringpodcast.com/montecarlo?utm_source=rss&utm_medium=rss to learn more.

Your host is Tobias Macey and today I’m interviewing Matthew Halliday about Incorta, an in-memory, unified data and analytics platform as a service.

Interview

Introduction

How did you get involved in the area of data management?

Summary There are very few tools which are equally useful for data engineers, data scientists, and machine learning engineers. WhyLogs is a powerful library for flexibly instrumenting all of your data systems to understand the entire lifecycle of your data from source to productionized model. In this episode Andy Dang explains why the project was created, how you can apply it to your existing data systems, and how it functions to provide detailed context for being able to gain insight into all of your data processes.
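As a rough sketch of what "data logging" means here, the following profiles a small DataFrame with whylogs (assuming the whylogs v1 Python API and pandas; the DataFrame is invented for illustration):

```python
# Minimal data-logging sketch (assumes whylogs>=1.0 and pandas are installed).
# The DataFrame is invented; in practice you would profile each batch or
# pipeline stage and compare profiles over time to detect drift.
import pandas as pd
import whylogs as why

df = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "purchase_usd": [19.99, 5.25, 12.50, 7.00],
    "country": ["US", "US", "DE", None],
})

# Log the batch: whylogs records statistical summaries (counts, types,
# distributions, null ratios) rather than the raw rows themselves.
results = why.log(df)
profile_view = results.view()

# Inspect the profile as a per-column table of metrics.
print(profile_view.to_pandas())
```

Because only lightweight statistical profiles are kept, the same approach can instrument both data pipelines and model inputs without shipping sensitive raw records around.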

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

This episode is brought to you by Acryl Data, the company behind DataHub, the leading developer-friendly data catalog for the modern data stack. Open Source DataHub is running in production at several companies like Peloton, Optum, Udemy, Zynga and others. Acryl Data provides DataHub as an easy to consume SaaS product which has been adopted by several companies. Sign up for the SaaS product at dataengineeringpodcast.com/acryl

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.

The most important piece of any data project is the data itself, which is why it is critical that your data source is high quality. PostHog is your all-in-one product analytics suite including product analysis, user funnels, feature flags, experimentation, and it’s open source so you can host it yourself or let them do it for you! You have full control over your data and their plugin system lets you integrate with all of your other data tools, including data warehouses and SaaS platforms. Give it a try today with their generous free tier at dataengineeringpodcast.com/posthog

Your host is Tobias Macey and today I’m interviewing Andy Dang about powering observability of AI systems with the whylogs data logging library.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what Whylabs is and the story behind it?

How is "data logging" differentiated from logging for the purpose of debugging and observability of software logic?

What are the use cases that you are aiming to support with Whylogs?

How does it compare to libraries and services like Great Expectations/Monte Carlo/Soda Data/Datafold, etc.?

Can you describe how Whylogs is implemented?

How have the design and goals of the project changed or evolved since you started working on it?

How do you maintain feature parity between the Python and Java integrations?

How do you structure the log events and metadata to provide detail and context for data applications?

How does that structure support aggregation and interpretation/analysis of the log information?

What is the process for integrating Whylogs into an existing project?

Once you ha

We talked about:

Christopher’s background

The essence of DataOps

Also known as Agile Analytics Operations or DevOps for Data Science

Defining processes and automating them (defining “done” and “good”)

The balance between heroism and fear (avoiding deferred value)

The Lean approach

Avoiding silos

The 7 steps to DataOps

Wanting to become replaceable

DataOps is doable

Testing tools

DataOps vs MLOps

The Head Chef at Data Kitchen

What’s grilling at Data Kitchen?

The DataOps Cookbook

Links:

DataOps Manifesto website: https://dataopsmanifesto.org/en/

DataOps Cookbook: https://dataops.datakitchen.io/pf-cookbook

Recipes for DataOps Success: https://dataops.datakitchen.io/pf-recipes-for-dataops-success

DataOps Certification Course: https://info.datakitchen.io/training-certification-dataops-fundamentals

DataOps Blog: https://datakitchen.io/blog/

DataOps Maturity Model: https://datakitchen.io/dataops-maturity-model/

DataOps Webinars: https://datakitchen.io/webinars/

Join DataTalks.Club: https://datatalks.club/slack.html  

Our events: https://datatalks.club/events.html

podcast_episode
by Jared Bernstein (White House Council of Economic Advisers), Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet

Jared Bernstein, Member of the White House Council of Economic Advisers, joins the podcast to discuss the state of the U.S. economy, including the labor market, inflation, housing and recession risks. Full episode transcript. For more from Jared Bernstein, follow him on Twitter @econjared46. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.

Questions or Comments, please email us at [email protected]. We would love to hear from you.    To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Most recently leading a data engineering team at Perpay, Sarah has built and managed data platforms end to end by working closely with internal engineering, product, and operational teams. She recently left her role to pursue a wide variety of endeavors, including writing on her Substack (https://sarahsnewsletter.substack.com/). In this conversation with Tristan and Julia, Sarah dives into how configuration-as-code can automate away data work, why you might want to consider adding a data lake to your architecture, and how those looking to build a self-serve data culture can look to self-serve frozen yogurt shops for inspiration. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.
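The configuration-as-code idea Sarah mentions can be sketched in a few lines: declare the repetitive data work as data, then generate the boilerplate from it. This is a toy illustration only (the YAML schema, table names, and helper function are invented, not from the episode):

```python
# Toy configuration-as-code sketch: pipelines declared as YAML, boilerplate
# generated from the declarations. All names are invented for illustration.
import yaml  # assumes PyYAML is installed

CONFIG = """
sources:
  - name: orders
    schedule: hourly
    columns: [id, amount, created_at]
  - name: customers
    schedule: daily
    columns: [id, email, region]
"""

def build_ingest_sql(source: dict) -> str:
    # Generate the repetitive ingestion SQL from the declaration instead of
    # hand-writing one script per table.
    cols = ", ".join(source["columns"])
    return (
        f"CREATE TABLE IF NOT EXISTS raw_{source['name']} AS "
        f"SELECT {cols} FROM external_{source['name']};"
    )

for source in yaml.safe_load(CONFIG)["sources"]:
    print(f"-- {source['name']} ({source['schedule']})")
    print(build_ingest_sql(source))
```

Adding a new table then becomes a two-line config change rather than a new script, which is the sense in which configuration-as-code "automates away" routine data work.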

Microsoft Power BI Performance Best Practices

"Microsoft Power BI Performance Best Practices" is a thorough guide to mastering efficiently operating Power BI solutions. This book walks you through optimizing every layer of a Power BI project, from data transformations to architecture, equipping you with the ability to create robust and scalable analytics solutions. What this Book will help me do Understand how to set realistic performance goals for Power BI projects and implement ongoing performance monitoring. Apply effective architectural and configuration strategies to improve Power BI solution efficiency. Learn practices for constructing and optimizing data models and implementing Row-Level Security effectively. Utilize tools like DAX Studio and VertiPaq Analyzer to detect and resolve common performance bottlenecks. Gain deep knowledge of Power BI Premium and techniques for handling large-scale data solutions using Azure. Author(s) Bhavik Merchant is a recognized expert in business intelligence and analytics solutions. With extensive experience in designing and implementing Power BI solutions across industries, he brings a pragmatic approach to solving performance issues in Power BI. Bhavik's writing style reflects his passion for teaching, ensuring readers gain practical knowledge they can directly apply to their work. Who is it for? This book is designed for data analysts, BI developers, and data professionals who have foundational knowledge of Power BI and aim to elevate their skills to construct high-performance analytics solutions. It is particularly suited to individuals seeking guidance on best practices and tools for optimizing Power BI applications.

In this episode of SaaS Scaled, we’re talking to Daniel Saks. Daniel is the president and co-founder of AppDirect, a platform that allows businesses to access all the tools and capabilities needed to thrive in a rapidly evolving digital world. Daniel talks about how AppDirect got started, the problems it solves, and the story so far. We talk about the growth of the digital economy in recent decades and the changes that Daniel has noticed over time. We talk about the rise of SaaS companies, and what the future holds as some companies move from direct to indirect selling, and single-channel to multi-channel. Daniel shares some of the various factors that could bring down the cost of sales for SaaS companies. Finally, Daniel talks about his own podcast and shares one of his favorite books. This episode is brought to you by Qrvey: the tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com. Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

When we talk about data and AI ethics, we typically view this through a privacy lens. That is, someone’s personal data has either been compromised and ended up in the wrong hands, or personal data is used to manipulate or create adverse outcomes for individuals or minority groups. These factors are still fundamental to AI ethics, but there is now also a big focus on the broader social impact of AI, including human rights, data privacy and using AI for good. Enter the concept of data pollution. The data pollution paradigm describes how the use and intentional or unintentional sharing of personal data can create social harm – not just private harm affecting only the individuals included in the dataset. To understand the concept of data pollution and its impact on individual privacy and society as a whole, I recently spoke to Gianclaudio Malgieri. Gianclaudio is Associate Professor of Law and Technology at the Augmented Law Institute of EDHEC Business School (Lille, France), Co-Director of the Brussels Privacy Hub, lecturer in IP and Data Protection and an expert in privacy, data protection, intellectual property, law and technology, EU law and human rights. In this episode of Leaders of Analytics, we discuss:

The evolution of data and AI ethics over the last 20 years

Why data protection is so important to the future of our society as we know it

What data pollution is and why we should care about it

What we can do to create data sustainability

What business leaders, legislators and legal professionals can do to deal with AI sustainability issues, and much more.

Gianclaudio's website: https://www.gianclaudiomalgieri.eu/

Gianclaudio on LinkedIn: https://www.linkedin.com/in/gianclaudio-malgieri-410718a1/

Brussels Privacy Hub: https://brusselsprivacyhub.eu/