talk-data.com

Topic: Analytics
Tags: data_analysis, insights, metrics
4552 activities tagged

Activity Trend: peak of 398 activities/quarter (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

podcast_episode
by Dante DeAntonio (Moody's Analytics), Dr. Erica Groshen (Cornell University—ILR and Research Fellow at the Upjohn Institute for Employment Research), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Former Bureau of Labor Statistics Commissioner Erica Groshen joins Mark, Cris, and Dante to cover a wide range of topics, including a somber discussion about the recent firing of the current BLS commissioner. Erica provides key insights into the role that BLS commissioners play in the day-to-day publication of economic data, as well as the longer-term challenges facing BLS and other federal statistical agencies. She also weighs in on the recent revisions to employment data that have garnered much attention and provides a thorough explanation of why revisions happen and the tradeoff between timeliness and accuracy.
Guests: Dr. Erica Groshen, Senior Economic Advisor at Cornell University—ILR and Research Fellow at the Upjohn Institute for Employment Research, and Dante DeAntonio, Senior Director of Economic Research, Moody's Analytics
Hosts: Mark Zandi – Chief Economist, Moody's Analytics, Cris deRitis – Deputy Chief Economist, Moody's Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody's Analytics
Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.


Why is meaningful innovation so hard in insurance, and what can be done about it? In this episode of Hub & Spoken, host Jason Foster is joined by John Turner, a global underwriting leader and expert in life and health insurance, to explore the innovation imperative in one of the world's most traditional (and risk-averse) industries. They unpack the complex cultural, structural, and regulatory challenges that make change difficult in insurance, from siloed teams and outdated processes to over-engineered tech solutions that miss the mark. But they also spotlight the opportunities: from automation that enhances the customer journey to cross-functional collaboration that drives real transformation.
💡 Key talking points include:
• Why conservative cultures make innovation harder — but not impossible
• The hidden flaws in traditional underwriting and risk selection
• How data, automation and behaviour change can unlock new growth
• The clash between insurers and insurtechs — and how to bridge it
• What truly customer-led innovation looks like
Whether you're in insurance, financial services, or just trying to drive change in a legacy-heavy environment, this episode is packed with ideas, reflections, and real-world experience you can learn from.
Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.

Brought to You By:
• WorkOS — The modern identity platform for B2B SaaS.
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar — Code quality and code security for ALL code.
In this episode of The Pragmatic Engineer, I sit down with Peter Walker, Head of Insights at Carta, to break down how venture capital and startups themselves are changing. We go deep on the numbers: why fewer companies are getting funded despite record VC investment levels, how hiring has shifted dramatically since 2021, and why solo founders are on the rise even though most VCs still prefer teams. We also unpack the growing emphasis on ARR per FTE, what actually happens in bridge and down rounds, and why the time between fundraising rounds has stretched far beyond the old 18-month cycle. We cover what all this means for engineers: what to ask before joining a startup, how to interpret valuation trends, and what kind of advisor roles startups are actually looking for. If you work at a startup, are considering joining one, or just want a clearer picture of how venture-backed companies operate today, this episode is for you.
Timestamps
(00:00) Intro
(01:21) How venture capital works and the goal of VC-backed startups
(03:10) Venture vs. non-venture backed businesses
(05:59) Why venture-backed companies prioritize growth over profitability
(09:46) A look at the current health of venture capital
(13:19) The hiring slowdown at startups
(16:00) ARR per FTE: The new metric VCs care about
(21:50) Priced seed rounds vs. SAFEs
(24:48) Why some founders are incentivized to raise at high valuations
(29:31) What a bridge round is and why they can signal trouble
(33:15) Down rounds and how optics can make or break startups
(36:47) Why working at startups offers more ownership and learning
(37:47) What the data shows about raising money in the summer
(41:45) The length of time it takes to close a VC deal
(44:29) How AI is reshaping startup formation, team size, and funding trends
(48:11) Why VCs don't like solo founders
(50:06) How employee equity (ESOPs) work
(53:50) Why acquisition payouts are often smaller than employees expect
(55:06) Deep tech vs. software startups
(57:25) Startup advisors: What they do, how much equity they get
(1:02:08) Why time between rounds is increasing and what that means
(1:03:57) Why it's getting harder to get from Seed to Series A
(1:06:47) A case for quitting (sometimes)
(1:11:40) How to evaluate a startup before joining as an engineer
(1:13:22) The skills engineers need to thrive in a startup environment
(1:16:04) Rapid fire round
The Pragmatic Engineer deepdives relevant for this episode:

— See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
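As a quick, illustrative aside on the ARR-per-FTE metric discussed above: it is simply annual recurring revenue divided by full-time employees. A back-of-the-envelope sketch in Python, with made-up numbers and an arbitrary threshold:

```python
# Hypothetical startup: $6M annual recurring revenue, 25 full-time employees.
arr = 6_000_000
fte = 25

arr_per_fte = arr / fte
print(f"ARR per FTE: ${arr_per_fte:,.0f}")   # $240,000

# A rough screen an investor might apply (threshold is illustrative only).
print("capital-efficient by this screen:", arr_per_fte >= 200_000)
```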

In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications—you know, the kind that might actually get used and have the opportunity to create the desired business value everyone seeks! One of the biggest challenges with traditional analytics and ML, and now with LLM-driven AI agents, is getting end users and stakeholders to trust and utilize these data products—especially if we're asking humans in the loop to make changes to their behavior or ways of working.

In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is based on the idea that AI agents should be “in the human loop,” and a control surface (user interface) may in many situations be essential to ensure any automated workers engender trust with their human overlords.  

By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers, stakeholders, and application users seek from agentic AI. 

In this episode, using use cases from insurance claims processing, I introduce the first two of the five control points in the MIRRR framework—Monitor and Interrupt. These control points represent core actions that define how AI agents should operate and interact within human systems:

Monitor – enabling appropriate transparency into AI agent behavior and performance
Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed

…and in a couple weeks, stay tuned for part 2 where I’ll wrap up this first version of my MIRRR framework. 
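To make Monitor and Interrupt concrete, here is a minimal, hypothetical Python sketch of an agent loop exposing both control points. The class, method, and claim names are my own illustration, not code from the episode:

```python
class MonitoredAgent:
    """Illustrative only: an agent loop with Monitor and Interrupt control points."""

    def __init__(self):
        self.log = []          # Monitor: audit trail of everything the agent does
        self.paused = False    # Interrupt: set when a human (or a rule) pauses the agent

    def interrupt(self, reason):
        """Interrupt control point: pause the agent, manually or automatically."""
        self.paused = True
        self.log.append(("interrupt", reason))

    def run(self, claims, process, confidence_floor=0.8):
        queue = list(claims)
        while queue and not self.paused:
            claim = queue.pop(0)
            decision, confidence = process(claim)
            self.log.append((claim, decision, confidence))  # Monitor: record each task
            if confidence < confidence_floor:
                # Automated interrupt: low-confidence work is routed to a human.
                self.interrupt(f"low confidence on {claim!r}")
        return queue   # anything left waits for human review before resuming


# Toy claims processor: the agent is unsure about one claim.
def process_claim(claim):
    return ("approve", 0.95) if claim != "claim-7" else ("approve", 0.42)

agent = MonitoredAgent()
remaining = agent.run(["claim-1", "claim-7", "claim-9"], process_claim)
print(agent.log)        # Monitor: inspect agent behavior and performance
print(remaining)        # ['claim-9'] is left unprocessed: paused before finishing
```

The point of the sketch is that the user interface sits on top of `log` (Monitor) and `interrupt`/`paused` (Interrupt), keeping the agent "in the human loop."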

Highlights / Skip to:

00:34 Introducing the MIRRR UX Framework for designing trustworthy agentic AI applications
01:27 The importance of trust in AI systems and how it is linked to user adoption
03:06 Cultural shifts, AI hype, and growing AI skepticism
04:13 Human-centered design practices for agentic AI
06:48 I discuss how understanding your users' needs does not change with agentic AI, and that trust in agentic applications has direct ties to user adoption and value creation
11:32 Measuring success of agentic applications with UX outcomes
15:26 Introducing the first two of five MIRRR framework control points
16:29 M is for Monitor; understanding the agent's "performance," and the right level of transparency end users need, from individual tasks to aggregate views
20:29 I is for Interrupt; when and why users may need to stop the agent—and what happens next

28:02 Conclusion and next steps

7 essential habits helped me transition into data analytics (even without prior experience), and I'm sharing them in today's episode. If you're transitioning into data analytics, I've also created a free tool to help you monitor and track your progress.
FREE 7 Habits Tracker here: http://datacareerjumpstart.com/7Habits
⚡ Start designing today with Gamma for free ➡ https://landadatajob.com/gamma
Here's your next watch! Stop Doing Random Data Courses - Read These Books Instead: https://youtu.be/Ea9a-OM3Kfw?si=p6C2Vtknztv2ubBb
💌 Join 10k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter
🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com/interviewsimulator

⌚ TIMESTAMPS
00:00 Introduction
00:06 Habit 1: I built a real-world project every single month.
02:23 Showcase your projects with Gamma!
03:18 Habit 2: I read five pages a day.
05:24 Habit 3: I started seeing the real applications of data.
06:32 Habit 4: I started sharing my learnings on LinkedIn.
08:54 Habit 5: I applied for jobs consistently, not just when I felt ready.
09:50 Habit 6: I sent 1 to 3 cold DMs every week.
11:13 Habit 7: I started attending data events every month.
12:38 FREE Habit Tracker

🔗 CONNECT WITH AVERY
🎥 YouTube Channel: https://www.youtube.com/@averysmith
🤝 LinkedIn: https://www.linkedin.com/in/averyjsmith/
📸 Instagram: https://instagram.com/datacareerjumpstart
🎵 TikTok: https://www.tiktok.com/@verydata
💻 Website: https://www.datacareerjumpstart.com/
Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Summary In this episode of the Data Engineering Podcast Andy Warfield talks about the innovative functionalities of S3 Tables and Vectors and their integration into modern data stacks. Andy shares his journey through the tech industry and his role at Amazon, where he collaborates to enhance storage capabilities, discussing the evolution of S3 from a simple storage solution to a sophisticated system supporting advanced data types like tables and vectors crucial for analytics and AI-driven applications. He explains the motivations behind introducing S3 Tables and Vectors, highlighting their role in simplifying data management and enhancing performance for complex workloads, and shares insights into the technical challenges and design considerations involved in developing these features. The conversation explores potential applications of S3 Tables and Vectors in fields like AI, genomics, and media, and discusses future directions for S3's development to further support data-driven innovation.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Tired of data migrations that drag on for months or even years? What if I told you there's a way to cut that timeline by up to 6x while guaranteeing accuracy? Datafold's Migration Agent is the only AI-powered solution that doesn't just translate your code; it validates every single data point to ensure perfect parity between your old and new systems. Whether you're moving from Oracle to Snowflake, migrating stored procedures to dbt, or handling complex multi-system migrations, they deliver production-ready code with a guaranteed timeline and fixed price. Stop burning budget on endless consulting hours. Visit dataengineeringpodcast.com/datafold to book a demo and see how they're turning months-long migration nightmares into week-long success stories.
Your host is Tobias Macey and today I'm interviewing Andy Warfield about S3 Tables and Vectors.
Interview
• Introduction
• How did you get involved in the area of data management?
• Can you describe what your goals are with the Tables and Vector features of S3?
• How did the experience of building S3 Tables inform your work on S3 Vectors?
• There are numerous implementations of vector storage and search. How do you view the role of S3 in the context of that ecosystem?
• The most directly analogous implementation that I'm aware of is the Lance table format. How would you compare the implementation and capabilities of Lance with what you are building with S3 Vectors?
• What opportunity do you see for being able to offer a protocol-compatible implementation similar to the Iceberg compatibility that you provide with S3 Tables?
• Can you describe the technical implementation of the Vectors functionality in S3?
• What are the sources of inspiration that you looked to in designing the service?
• Can you describe some of the ways that S3 Vectors might be integrated into a typical AI application?
• What are the most interesting, innovative, or unexpected ways that you have seen S3 Tables/Vectors used?
• What are the most interesting, unexpected, or challenging lessons that you have learned while working on S3 Tables/Vectors?
• When is S3 the wrong choice for Iceberg or Vector implementations?
• What do you have planned for the future of S3 Tables and Vectors?
Contact Info
• LinkedIn
Parting Question
• From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
• Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
• Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
• If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links
S3 Tables · S3 Vectors · S3 Express · Parquet · Iceberg · Vector Index · Vector Database · pgvector · Embedding Model · Retrieval Augmented Generation · TwelveLabs · Amazon Bedrock · Iceberg REST Catalog · Log-Structured Merge Tree · S3 Metadata · Sentence Transformer · Spark · Trino · Daft
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
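For readers new to the vector-search workload discussed above, here is a minimal, self-contained sketch of brute-force cosine-similarity search in Python. It illustrates what a vector index accelerates; it does not use the S3 Vectors API, and the documents, dimensions, and embeddings are invented:

```python
import numpy as np

# Toy "embedding store": in a real system these vectors would come from an
# embedding model and live in a vector index (e.g. S3 Vectors or pgvector).
docs = ["ultramarathon training", "cloud object storage", "iceberg table format"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(docs), 8))            # hypothetical 8-dim embeddings
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def query(vec, k=2):
    """Return the k nearest documents by cosine similarity (brute force)."""
    vec = vec / np.linalg.norm(vec)
    scores = embeddings @ vec                            # cosine similarity per document
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in top]

print(query(rng.normal(size=8)))
```

A dedicated vector index replaces the exhaustive `embeddings @ vec` scan with an approximate-nearest-neighbor structure so queries stay fast at billions of vectors.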

Fundamentals of Metadata Management

Whether it's to adhere to regulations, access markets by meeting specific standards, or devise data analytics and AI strategies, companies today are busy implementing metadata repositories—metadata tools about the IT, data, information, and knowledge in your company. Until now, most of these repositories have been implemented in isolation from one another, but that practice lies at the core of problems with data management in many companies today. Author Ole Olesen-Bagneux, chief evangelist at Actian, shows you how to masterfully manage your metadata repositories by properly coordinating them. That requires a data discovery team to increase insights for all key players in enterprise data management, from the CIO and CDO to enterprise and data architects. Coordinating these repositories will help you and your organization democratize data and excel at data management. This book shows you how.
• Learn what metadata repositories are and what they do
• Explore which data to represent in these repositories
• Set up a data discovery team to make data searchable
• Learn how to manage and coordinate repositories in a meta grid
• Increase innovation by setting up a functional data marketplace
• Make information security and data protection more robust
• Gain a deeper understanding of your company IT landscape
• Activate real enterprise architecture based on evidence

What does it mean to be agentic? Is there a spectrum of agency? In this episode of The Analytics Engineering Podcast, Tristan Handy talks to Sean Falconer, senior director of AI strategy at Confluent, about AI agents. They discuss what truly makes software "agentic," where agents are successfully being deployed, and how to conceptualize and build agents within enterprise infrastructure. Sean shares practical ideas about the changing trends in AI, the role of foundation models, and why agents may be better suited to businesses than to consumers. This episode will give you a clear, practical picture of how AI agents can change businesses, rather than leaving them as a vague marketing buzzword. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.

podcast_episode
by Dante DeAntonio (Moody's Analytics), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

The Inside Economics team turned lugubrious in this week's episode. Given this week's data dump showing that inflation is uncomfortably high and accelerating, and that the job market and broader economy are struggling, it's hard not to be. They also consider what it all means for the Fed, which is in an increasingly difficult position, and the prospects that the economy will fall off the narrow tightrope it is on, into recession.
Guest: Dante DeAntonio, Senior Director of Economic Research, Moody's Analytics
Hosts: Mark Zandi – Chief Economist, Moody's Analytics, Cris deRitis – Deputy Chief Economist, Moody's Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody's Analytics
Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

How can data storytelling improve health outcomes and save lives? The Lung Health Dashboard offers one example.

Today's episode explores how effective data storytelling connects the public with life-saving research, using the Lung Health Dashboard as an example. This project is a collaboration between GovEx and the Johns Hopkins BREATHE Center, which promotes the science and medicine of lung health by interfacing with the community. The dashboard seeks to overcome the challenges of data communication through dynamic, "scrollytelling" visuals that present research findings to viewers from a relatable perspective.

We're joined by Meredith McCormack, Director of the Pulmonary & Critical Care Medicine Division of Johns Hopkins Medicine and Director of the BREATHE Center; Kirsten Koehler, a professor in the Department of Environmental Health and Engineering at the Bloomberg School of Public Health and Deputy Director of the BREATHE Center; and Mary Conway Vaughan, Deputy Director of Research and Analytics here at GovEx.

• Learn more about the BREATHE Center
• View the Lung Health Dashboard
• Learn more about GovEx
• Fill out our listener survey

Thinking about transitioning into analytics but worried about starting over? Andrew Madson has been in your shoes. Once a compliance executive, Andrew successfully pivoted into analytics and is now a professor of analytics at five universities. In this episode, we dive deep into what it really takes to make a career shift when you're already successful. Don't miss this conversation packed with real insights for professionals considering a move into analytics!
What You'll Learn:
• How to handle the fear of leaving a high-paying career for something new
• Why transitioning is different from starting fresh—and how to use your experience as a superpower
• Bootcamp vs. Master's—what's worth your time and money?
• The must-have skills and topics when evaluating an analytics education
To hear even more from Andrew, you can check out his YouTube channel. Follow Andrew on LinkedIn!
Register for free to be part of the next live session: https://bit.ly/3XB3A8b
Follow us on Socials: LinkedIn · YouTube · Instagram (Mavens of Data) · Instagram (Maven Analytics) · TikTok · Facebook · Medium · X/Twitter

Jumpstart Snowflake: A Step-by-Step Guide to Modern Cloud Analytics

This book is your guide to the modern market of data analytics platforms and the benefits of using Snowflake, the data warehouse built for the cloud. As organizations increasingly rely on modern cloud data platforms, the core of any analytics framework—the data warehouse—is more important than ever. This updated 2nd edition ensures you are ready to make the most of the industry's leading data warehouse. This book will onboard you to Snowflake and present best practices for deploying and using the Snowflake data warehouse. The book also covers modern analytics architecture, integration with leading analytics software such as Matillion ETL, Tableau, and Databricks, and migration scenarios for on-premises legacy data warehouses. This new edition includes expanded coverage of SnowPark for developing complex data applications, an introduction to managing large datasets with Apache Iceberg tables, and instructions for creating interactive data applications using Streamlit, ensuring readers are equipped with the latest advancements in Snowflake's capabilities.
What You Will Learn
• Master key functionalities of Snowflake
• Set up security and access with clusters
• Bulk load data into Snowflake using the COPY command
• Migrate from a legacy data warehouse to Snowflake
• Integrate the Snowflake data platform with modern business intelligence (BI) and data integration tools
• Manage large datasets with Apache Iceberg tables
• Implement continuous data loading with Snowpipe and Dynamic Tables
Who This Book Is For
Data professionals, business analysts, IT administrators, and existing or potential Snowflake users
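As a flavor of the bulk-load pattern the book covers, here is a minimal sketch using the snowflake-connector-python package: stage a local file, then load it with COPY INTO. The credentials, file path, and table name are placeholders, not examples from the book:

```python
import snowflake.connector

# Connect to Snowflake (all connection values below are placeholders).
conn = snowflake.connector.connect(
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    account="YOUR_ACCOUNT",
    warehouse="COMPUTE_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)

cur = conn.cursor()
try:
    # Upload a local CSV to the table's internal stage (@%SALES).
    cur.execute("PUT file:///tmp/sales.csv @%SALES")
    # Bulk load everything in the stage into the SALES table.
    cur.execute("""
        COPY INTO SALES
        FROM @%SALES
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
finally:
    cur.close()
    conn.close()
```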

Data Hackers News is on the air!! The hottest topics of the week, with the top news in Data, AI, and Technology, which you can also find in our weekly newsletter, now on the Data Hackers Podcast!! Press play and listen to this week's Data Hackers News! To stay on top of everything happening in the data space, subscribe to the weekly newsletter: https://www.datahackers.news/
Links: Data Hackers Challenge 2025 registration · Zoho live session: Data-Driven Decisions: Applying Machine Learning with Zoho Analytics
Meet our Data Hackers News commentators: Monique Femme, Paulo Vasconcellos
Stories/topics discussed:
Other Data Hackers channels: Site · LinkedIn · Instagram · TikTok · YouTube

Microsoft Fabric Analytics Engineer Associate Certification Companion: Preparation for DP-600 Microsoft Certification

As organizations increasingly leverage Microsoft Fabric to unify their data engineering, analytics, and governance strategies, the role of the Fabric Analytics Engineer has become more crucial than ever. This book equips readers with the knowledge and hands-on skills required to excel in this domain and pass the DP-600 certification exam confidently. This book covers the entire certification syllabus with clarity and depth, beginning with an overview of Microsoft Fabric. You will gain an understanding of the platform's architecture and how it integrates with data and AI workloads to provide a unified analytics solution. You will then delve into implementing a data warehouse in Microsoft Fabric, exploring techniques to ingest, transform, and store data efficiently. Next, you will learn how to work with semantic models in Microsoft Fabric, enabling you to create intuitive, meaningful data representations for visualization and reporting. Then, you will focus on administration and governance in Microsoft Fabric, emphasizing best practices for security, compliance, and efficient management of analytics solutions. Lastly, you will find detailed practice tests and exam strategies along with supplementary materials to reinforce key concepts. After reading the book, you will have the background and capability to learn the skills and concepts necessary both to pass the DP-600 exam and become a confident Fabric Analytics Engineer.
What You Will Learn
• A complete understanding of all DP-600 certification exam objectives and requirements
• Key concepts and terminology related to Microsoft Fabric analytics
• Step-by-step preparation for successfully passing the DP-600 certification exam
• Insights into exam structure, question patterns, and strategies for tackling challenging sections
• Confidence in demonstrating skills validated by the Microsoft Certified: Fabric Analytics Engineer Associate credential
Who This Book Is For
Data engineers, analysts, and professionals with some experience in data engineering or analytics, seeking to expand their knowledge of Microsoft Fabric

Statistics Every Programmer Needs

Put statistics into practice with Python! Data-driven decisions rely on statistics. Statistics Every Programmer Needs introduces the statistical and quantitative methods that will help you go beyond "gut feeling" for tasks like predicting stock prices or assessing quality control, with examples using the rich tools of the Python ecosystem.
Statistics Every Programmer Needs will teach you how to:
• Apply foundational and advanced statistical techniques
• Build predictive models and simulations
• Optimize decisions under constraints
• Interpret and validate results with statistical rigor
• Implement quantitative methods using Python
In this hands-on guide, stats expert Gary Sutton blends the theory behind these statistical techniques with practical Python-based applications, offering structured, reproducible, and defensible methods for tackling complex decisions. Well-annotated and reusable Python code listings illustrate each method, with examples you can follow to practice your new skills.
About the Technology
Whether you're analyzing application performance metrics, creating relevant dashboards and reports, or immersing yourself in a numbers-heavy coding project, every programmer needs to know how to turn raw data into actionable insight. Statistics and quantitative analysis are the essential tools every programmer needs to clarify uncertainty, optimize outcomes, and make informed choices.
About the Book
Statistics Every Programmer Needs teaches you how to apply statistics to the everyday problems you'll face as a software developer. Each chapter is a new tutorial. You'll predict ultramarathon times using linear regression, forecast stock prices with time series models, analyze system reliability using Markov chains, and much more. The book emphasizes a balance between theory and hands-on Python implementation, with annotated code and real-world examples to ensure practical understanding and adaptability across industries.
What's Inside
• Probability basics and distributions
• Random variables
• Regression
• Decision trees and random forests
• Time series analysis
• Linear programming
• Monte Carlo and Markov methods
• and much more
About the Reader
Examples are in Python.
About the Author
Gary Sutton is a business intelligence and analytics leader and the author of Statistics Slam Dunk: Statistical analysis with R on real NBA data.
Quotes
"A well-organized tour of the statistical, machine learning and optimization tools every data science programmer needs." – Peter Bruce, author of Statistics for Data Science and Analytics
"Turns statistics from a stumbling block into a superpower. Clear, relevant, and written with a coder's mindset!" – Mahima Bansod, LogicMonitor
"Essential! Stats and modeling with an emphasis on real-world system design." – Anupam Samanta, Google
"A great blend of theory and practice." – Ariel Andres, Scotia Global Asset Management
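As a taste of the kind of exercise the book describes (my own sketch, not code from the book), here is a simple linear regression predicting ultramarathon finish times from training volume, using NumPy:

```python
import numpy as np

# Hypothetical data: weekly training miles vs. ultramarathon finish time (hours).
miles = np.array([30, 40, 50, 60, 70, 80], dtype=float)
hours = np.array([14.1, 12.9, 12.2, 11.4, 10.9, 10.5])

# Fit a simple linear model hours ~ a * miles + b by least squares.
a, b = np.polyfit(miles, hours, deg=1)
predicted = a * miles + b

# R^2 as a basic check of fit quality.
ss_res = np.sum((hours - predicted) ** 2)
ss_tot = np.sum((hours - hours.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"hours = {a:.3f} * miles + {b:.2f}, R^2 = {r_squared:.3f}")
```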

Summary In this episode of the Data Engineering Podcast Akshay Agrawal from Marimo discusses the innovative new Python notebook environment, which offers a reactive execution model, full Python integration, and built-in UI elements to enhance the interactive computing experience. He discusses the challenges of traditional Jupyter notebooks, such as hidden states and lack of interactivity, and how Marimo addresses these issues with features like reactive execution and Python-native file formats. Akshay also explores the broader landscape of programmatic notebooks, comparing Marimo to other tools like Jupyter, Streamlit, and Hex, highlighting its unique approach to creating data apps directly from notebooks and eliminating the need for separate app development. The conversation delves into the technical architecture of Marimo, its community-driven development, and future plans, including a commercial offering and enhanced AI integration, emphasizing Marimo's role in bridging the gap between data exploration and production-ready applications.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Tired of data migrations that drag on for months or even years? What if I told you there's a way to cut that timeline by up to 6x while guaranteeing accuracy? Datafold's Migration Agent is the only AI-powered solution that doesn't just translate your code; it validates every single data point to ensure perfect parity between your old and new systems. Whether you're moving from Oracle to Snowflake, migrating stored procedures to dbt, or handling complex multi-system migrations, they deliver production-ready code with a guaranteed timeline and fixed price. Stop burning budget on endless consulting hours. Visit dataengineeringpodcast.com/datafold to book a demo and see how they're turning months-long migration nightmares into week-long success stories.
Your host is Tobias Macey and today I'm interviewing Akshay Agrawal about Marimo, a reusable and reproducible Python notebook environment.
Interview
• Introduction
• How did you get involved in the area of data management?
• Can you describe what Marimo is and the story behind it?
• What are the core problems and use cases that you are focused on addressing with Marimo?
• What are you explicitly not trying to solve for with Marimo?
• Programmatic notebooks have been around for decades now. Jupyter was largely responsible for making them popular outside of academia. How have the applications of notebooks changed in recent years?
• What are the limitations that have been most challenging to address in production contexts?
• Jupyter has long had support for multi-language notebooks/notebook kernels. What is your opinion on the utility of that feature as a core concern of the notebook system?
• Beyond notebooks, Streamlit and Hex have become quite popular for publishing the results of notebook-style analysis. How would you characterize the feature set of Marimo for those use cases?
• For a typical data team that is working across data pipelines, business analytics, ML/AI engineering, etc., how do you see Marimo applied within and across those contexts?
• One of the common difficulties with notebooks is that they are largely a single-player experience. They may connect into a shared compute cluster for scaling up execution (e.g. Ray, Dask, etc.). How does Marimo address the situation where a data platform team wants to offer notebooks as a service to reduce the friction to getting started with analyzing data in a warehouse/lakehouse context?
• How are you seeing teams integrate Marimo with orchestrators (e.g. Dagster, Airflow, Prefect)?
• What are some of the most interesting or complex engineering challenges that you have had to address while building and evolving Marimo?
• What are the most interesting, innovative, or unexpected ways that you have seen Marimo used?
• What are the most interesting, unexpected, or challenging lessons that you have learned while working on Marimo?
• When is Marimo the wrong choice?
• What do you have planned for the future of Marimo?
Contact Info
• LinkedIn
Parting Question
• From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
• Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
• Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
• If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links
Marimo · Jupyter · IPython · Streamlit · Podcast.init Episode · Vector Embeddings · Dimensionality Reduction · Kaggle · Pytest · PEP 723 script dependency metadata · MatLab · Visicalc · Mathematica · RMarkdown · RShiny · Elixir Livebook · Databricks Notebooks · Papermill · Pluto - Julia Notebook · Hex · Directed Acyclic Graph (DAG) · Sumble (Kaggle founder Anthony Goldbloom's startup) · Ray · Dask · Jupytext · nbdev · DuckDB Podcast Episode · Iceberg · Superset · jupyter-marimo-proxy · JupyterHub · Binder · Nix · AnyWidget · Jupyter Widgets · Matplotlib · Altair · Plotly · DataFusion · Polars · MotherDuck
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
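To illustrate the reactive execution model discussed in this episode, here is a minimal sketch in marimo's Python-native file format: cells are plain functions, dependencies are inferred from the names they use, and changing the slider re-runs only the dependent cell. This is my own sketch, so treat the details as indicative rather than canonical:

```python
import marimo

app = marimo.App()

@app.cell
def _():
    import marimo as mo
    # A UI element: moving the slider re-runs only the cells that read it.
    n = mo.ui.slider(1, 20, value=5, label="n")
    n
    return mo, n

@app.cell
def _(n):
    # Reactive cell: it reads `n`, so marimo re-executes it when the slider moves.
    squares = [i**2 for i in range(n.value)]
    squares
    return (squares,)

if __name__ == "__main__":
    app.run()
```

Because the notebook is just Python, it can be versioned, tested, and run as a script or served as an app, which is the "reusable and reproducible" angle of the conversation.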

Healthcare AI is rapidly evolving beyond simple diagnostic tools to comprehensive systems that can analyze and predict patient outcomes. With the rise of multimodal AI models that can process everything from medical images to patient records and genetic information, we're entering an era where AI could fundamentally transform how healthcare decisions are made. But how do we ensure these systems maintain patient privacy while still leveraging vast amounts of medical data? What are the technical challenges in building AI that can reason across different types of medical information? And how do we balance the promise of AI-assisted healthcare with the critical role of human medical professionals?
Professor Aldo Faisal is Chair in AI & Neuroscience at Imperial College London, with joint appointments in Bioengineering and Computing, and also holds the Chair in Digital Health at the University of Bayreuth. He is the Founding Director of the UKRI Centre for Doctoral Training in AI for Healthcare and leads the Brain & Behaviour Lab and Behaviour Analytics Lab at Imperial's Data Science Institute. His research integrates machine learning, neuroscience, and human behaviour to develop AI technologies for healthcare. He is among the few engineers globally leading their own clinical trials, with work focused on digital biomarkers and AI-based medical interventions. Aldo serves as Associate Editor for Nature Scientific Data and PLOS Computational Biology, and has chaired major conferences like KDD, NIPS, and IEEE BSN. His work has earned multiple awards, including the $50,000 Toyota Mobility Foundation Prize, and is regularly featured in global media outlets.
In the episode, Richie and Aldo explore the advancements in AI for healthcare, including AI's role in diagnostics and operational improvements, the ambitious Nightingale AI project, challenges in handling diverse medical data, privacy concerns, the future of AI-assisted medical decision-making, and much more.
Links Mentioned in the Show:
• Aldo's Publications
• Connect with Aldo
• Project: What is Your Heart Rate Telling You?
• Related Episode: Using Data to Optimize Costs in Healthcare with Travis Dalton and Jocelyn Jiang, President/CEO & VP of Data & Decision Science at MultiPlan
• Rewatch RADAR AI
New to DataCamp?
• Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for business

podcast_episode
by Matt Colyar (Moody's Analytics), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

The 2025 U.S. economy leaves no shortage of topics to explore. This week, the Inside Economics crew tries to touch them all. Mark and Cris, joined by Matt Colyar, discuss growing challenges to Fed independence, recent tariff agreements, financial market exuberance, and a U.S. housing market under significant stress. Finally, the team answers several listener questions and offers their latest recession probabilities and expectations for next week's slew of important data.
Read the full housing research paper here: https://www.economy.com/bringing-the-housing-shortage-into-sharper-focus
Hosts: Mark Zandi – Chief Economist, Moody's Analytics, Cris deRitis – Deputy Chief Economist, Moody's Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody's Analytics
Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.


Thinking about swapping your 9-to-5 for client work, but worried that a long German-style notice period will kill your chances? In this live interview, seven-year data-freelance veteran Dimitri walks through how he took his freelance career to the next level.

About the Speaker: Dimitri Visnadi is an independent data consultant with a focus on data strategy. He has been consulting for companies leading the marketing data space, such as Unilever, Ferrero, Heineken, and Red Bull.

He has lived and worked in six countries across Europe, in both corporate and startup organizations. He was part of the data departments at Hewlett-Packard (HP) and at a Google-partnered consulting firm, where he worked on data products and strategy.

Having received a Master's in Business Analytics with Computer Science from University College London and a Bachelor's in Business Administration from John Cabot University, Dimitri still has close ties to academia and holds a mentor position in entrepreneurship at both institutions.
🕒 TIMECODES
00:00 Dimitri's journey from corporate to freelance data specialist
05:41 Job tenure trends, tech career shifts, and freelance types
10:50 Freelancing challenges, success, and finding clients
17:33 Freelance market trends and Dimitri's job board
23:51 Starting points, top freelance skills, and market insights
32:48 Building a lifestyle business: scaling and work-life balance
45:30 Data Freelancer course and marketing for freelancers
48:33 Subscription services and managing client relationships
56:47 Pricing models and transitioning advice
1:01:02 Notice periods, networking, and risks in the freelancing transition
🔗 CONNECT WITH DataTalksClub
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/...
Check other upcoming events - https://lu.ma/dtc-events
LinkedIn - / datatalks-club
Twitter - / datatalksclub
Website - https://datatalks.club/
🔗 CONNECT WITH DIMITRI
LinkedIn - https://www.linkedin.com/in/visnadi/