talk-data.com

Topic

Analytics

data_analysis insights metrics

4552

tagged

Activity Trend

398 peak/qtr
2020-Q1 2026-Q2

Activities

4552 activities · Newest first

To be clear - I'm not saying that analytics and data engineering are a fad. I'm not saying that data teams are doomed to fade away, or that the old fundamentals of data modeling are wrong, or that the urge to quantify everything is a mistake. I'm saying that things seem pretty good, right now. But, you know. As Charles Schwab constantly says, past performance is no guarantee of future results. So someone else might say all of that in the future - because, as John Maynard Keynes said, in the long run we are all dead.

For years, data engineering was a story of predictable "pipelines": move data from point A to point B. But AI just hit the reset button on our entire field. Now, we're all staring into the void, wondering what's next. The fundamentals haven't changed: data governance, data management, and data modeling still present challenges. Everything else is up for grabs. This talk will cut through the noise and explore the future of data engineering in an AI-driven world. We'll examine how team structures will evolve, why agentic workflows and real-time systems are becoming non-negotiable, and how our focus must shift from building dashboards and analytics to architecting for automated action. The reset button has been pushed. It's time for us to invent the future of our industry.

talk
by Holden Karau (Fight Health Insurance)

In this talk, the somewhat biased Apache Spark PMC member Holden will explore the times when using Spark is more likely to lead to disappointment and pages than success and promotions. We'll, of course, look at places where Spark can excel, but also explore heuristics like "if it fits in Excel, double-check whether you need Spark." By using Spark only when it's truly beneficial, you can demonstrate that elusive "thought leadership" that always seems to be required for the next level of promotion. We'll explore how some of Spark's largest disadvantages are changing, but also which ones are likely to stick around -- allowing you to seem like you have a magic tech eight-ball next time someone asks you to design your analytics strategy. Come for a place to sit after lunch and stay for the OOM therapy.
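The "fits in Excel" heuristic from the abstract can be sketched as a quick pre-flight check. This is a minimal illustration with assumed thresholds (Excel's ~1M-row worksheet limit is real; the 64 GB single-machine cutoff is an arbitrary stand-in), not Holden's actual decision rule:

```python
# Rough engine-selection heuristic. Thresholds are illustrative
# assumptions, not recommendations.
EXCEL_ROW_LIMIT = 1_048_576          # rows per Excel worksheet
SINGLE_MACHINE_BYTES = 64 * 1024**3  # assume ~64 GB of usable RAM

def pick_engine(row_count: int, size_bytes: int) -> str:
    """Suggest a processing tool for a dataset of the given size."""
    if row_count <= EXCEL_ROW_LIMIT:
        return "spreadsheet"  # it literally fits in Excel
    if size_bytes <= SINGLE_MACHINE_BYTES:
        return "single-node (pandas/DuckDB)"  # one machine is enough
    return "spark"  # a genuinely distributed workload

print(pick_engine(500_000, 50 * 1024**2))       # spreadsheet
print(pick_engine(10_000_000, 8 * 1024**3))     # single-node (pandas/DuckDB)
print(pick_engine(5_000_000_000, 2 * 1024**4))  # spark
```

The point of the heuristic is the ordering: reach for the smallest tool that fits, and reserve Spark for data that genuinely will not fit on one machine.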

Brought to You By:

•⁠ Statsig — The unified platform for flags, analytics, experiments, and more. Companies like Graphite, Notion, and Brex rely on Statsig to measure the impact of the features they ship. Get a 30-day enterprise trial here.
•⁠ Linear — The system for modern product development. Linear is a heavy user of Swift: they just redesigned their native iOS app using their own take on Apple's Liquid Glass design language. The new app is about speed and performance, just like Linear is. Check it out.

Chris Lattner is one of the most influential engineers of the past two decades. He created the LLVM compiler infrastructure and the Swift programming language, and Swift opened iOS development to a broader group of engineers. With Mojo, he's now aiming to do the same for AI, by lowering the barrier to programming AI applications. I sat down with Chris in San Francisco to talk language design, lessons on designing Swift and Mojo, and, of course, compilers. It's hard to find someone who is as enthusiastic and knowledgeable about compilers as Chris is! We also discussed why experts often resist change even when current tools slow them down, what he learned about AI and hardware from his time across both large and small engineering teams, and why compiler engineering remains one of the best ways to understand how software really works.

Timestamps:
(00:00) Intro
(02:35) Compilers in the early 2000s
(04:48) Why Chris built LLVM
(08:24) GCC vs. LLVM
(09:47) LLVM at Apple
(19:25) How Chris got support to go open source at Apple
(20:28) The story of Swift
(24:32) The process for designing a language
(31:00) Learnings from launching Swift
(35:48) Swift Playgrounds: making coding accessible
(40:23) What Swift solved and the technical debt it created
(47:28) AI learnings from Google and Tesla
(51:23) SiFive: learning about hardware engineering
(52:24) Mojo's origin story
(57:15) Modular's bet on a two-level stack
(1:01:49) Compiler shortcomings
(1:09:11) Getting started with Mojo
(1:15:44) How big is Modular, as a company?
(1:19:00) AI coding tools the Modular team uses
(1:22:59) What kind of software engineers Modular hires
(1:25:22) A programming language for LLMs? No thanks
(1:29:06) Why you should study and understand compilers

The Pragmatic Engineer deepdives relevant for this episode:
•⁠ AI Engineering in the real world
•⁠ The AI Engineering stack
•⁠ Uber's crazy YOLO app rewrite, from the front seat
•⁠ Python, Go, Rust, TypeScript and AI with Armin Ronacher
•⁠ Microsoft's developer tools roots

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Start with a dataset in MotherDuck and build a production-ready analytics app using Omni’s semantic model and APIs. We’ll cover practical data modeling techniques, share lessons learned from building AI features, and walk through how to give AI the context it needs to answer questions accurately. You’ll leave with a working app and the skills to build your next one.

Everyone's trying to make LLMs "accurate." But the real challenge isn't accuracy — it's context. We'll explore why traditional approaches like evals suites or synthetic question sets fall short, and how successful AI systems are built instead through compounding context over time. Hex enables a new workflow for conversational analytics that grows smarter with every interaction. With Hex's Notebook Agent and Threads, business users define the questions that matter while data teams refine, audit, and operationalize them into durable, trusted workflows. In this model, "tests" aren't written in isolation by data teams — they're defined by the business and operationalized through data workflows. The result is a living system of context — not a static set of prompts or tests — that evolves alongside your organization. Join us for a candid discussion on what's working in production AI systems, and get hands-on building context-aware analytical workflows in Hex!

What if your job hunt could run like a data system? In this episode, I share the story of how I used three AI agents — Researcher, Writer, and Reviewer — to rebuild my job search from the ground up. These agents read job descriptions, tailor resumes, and even critique tone and clarity — saving hours every week. But this episode isn’t just about automation. It’s about agency. I’ll talk about rejection, burnout, and the mindset shift that changed everything: treating every rejection as a data point, not a defeat. Whether you’re in tech, analytics, or just tired of the job search grind — this one’s for you.

🔹 Learn how I automated resume tailoring with GPT-4
🔹 Understand how to design AI systems that protect your mental energy
🔹 Discover why “efficiency” means doing less of what drains you
🔹 Hear the emotional story behind building these agents from scratch

Join the Discussion (comments hub): https://mukundansankar.substack.com/notes

Tools I use for my Podcast and Affiliate Partners:
Recording Partner: Riverside → Sign up here (affiliate)
Host Your Podcast: RSS.com (affiliate)
Research Tools: Sider.ai (affiliate)
Sourcetable AI: Join Here (affiliate)

🔗 Connect with Me:
Free Email Newsletter
Website: Data & AI with Mukundan
GitHub: https://github.com/mukund14
Twitter/X: @sankarmukund475
LinkedIn: Mukundan Sankar
YouTube: Subscribe

Help us become the #1 Data Podcast by leaving a rating & review! We are 67 reviews away!

I wouldn't try to become a data scientist next year. Here are 4 reasons why and what I'd do instead.

👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa

⌚ TIMESTAMPS
00:32 - Reason 1 not to be a data scientist
03:22 - Reason 2 not to be a data scientist
04:55 - Reason 3 not to be a data scientist
07:33 - Reason 4 not to be a data scientist
11:28 - What to do instead

🍿 OTHER EPISODES MENTIONED
Data Analyst Roadmap: https://datacareerpodcast.com/episode/136-how-i-would-become-a-data-analyst-in-2025-if-i-had-to-start-over-again
Get Paid to Learn Data: https://datacareerpodcast.com/episode/137-get-paid-1000s-to-master-data-analytics-skills-in-2025
Get Your Master's Paid For (Thomas): https://datacareerpodcast.com/episode/128-meet-the-math-teacher-who-landed-a-data-job-in-60-days-thomas-gresco
Get Your Master's Paid For (Rachael): https://datacareerpodcast.com/episode/125-how-she-landed-a-business-intelligence-analyst-job-in-less-than-100-days-w-rachael-finch
My review of Georgia Tech's Master's: https://datacareerpodcast.com/episode/38-masters-in-data-analytics-from-georgia-tech-is-it-worth-it

💌 Join 30k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter
🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com//interviewsimulator

🔗 CONNECT WITH AVERY
🎥 YouTube Channel
🤝 LinkedIn
📸 Instagram
🎵 TikTok
💻 Website

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Data quality and AI reliability are two sides of the same coin in today's technology landscape. Organizations rushing to implement AI solutions often discover that their underlying data infrastructure isn't prepared for these new demands. But what specific data quality controls are needed to support successful AI implementations? How do you monitor unstructured data that feeds into your AI systems? When hallucinations occur, is it really the model at fault, or is your data the true culprit? Understanding the relationship between data quality and AI performance is becoming essential knowledge for professionals looking to build trustworthy AI systems. Shane Murray is a seasoned data and analytics executive with extensive experience leading digital transformation and data strategy across global media and technology organizations. He currently serves as Senior Vice President of Digital Platform Analytics at Versant Media, where he oversees the development and optimization of analytics capabilities that drive audience engagement and business growth. In addition to his corporate leadership role, he is a founding member of InvestInData, an angel investor collective of data leaders supporting early-stage startups advancing innovation in data and AI. Prior to joining Versant Media, Shane spent over three years at Monte Carlo, where he helped shape AI product strategy and customer success initiatives as Field CTO. Earlier, he spent nearly a decade at The New York Times, culminating as SVP of Data & Insights, where he was instrumental in scaling the company’s data platforms and analytics functions during its digital transformation. His earlier career includes senior analytics roles at Accenture Interactive, Memetrics, and Woolcott Research. Based in New York, Shane continues to be an active voice in the data community, blending strategic vision with deep technical expertise to advance the role of data in modern business. 
In the episode, Richie and Shane explore AI disasters and success stories, the concept of being AI-ready, essential roles and skills for AI projects, data quality's impact on AI, and much more.

Links Mentioned in the Show:
Versant Media
Connect with Shane
Course: Responsible AI Practices
Related Episode: Scaling Data Quality in the Age of Generative AI with Barr Moses, CEO of Monte Carlo Data, Prukalpa Sankar, Cofounder at Atlan, and George Fraser, CEO at Fivetran
Rewatch RADAR AI

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Chris Lafakis, Marisa DiNatale (Moody's Analytics)

Fellow Moody's colleague Chris Lafakis joins Mark, Marisa, and Cris as they discuss current economic trends and Chris's recent study on the macroeconomic consequences of hurricanes. Mark starts the conversation by sharing his questions about the latest data on layoffs and how AI is influencing the economy. The team members share their different perspectives before shifting the discussion to the economic toll of Hurricane Melissa and how storms can affect regional economies. Guest: Chris Lafakis – Director of Economic Research, Moody's Analytics. For Chris's research on hurricanes and their economic impacts, click here: https://www.economy.com/the-macroeconomic-consequences-of-a-category-5-miami-hurricane Hosts: Mark Zandi – Chief Economist, Moody’s Analytics, Cris deRitis – Deputy Chief Economist, Moody’s Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody’s Analytics. Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

How does a rapidly scaling, collaboration-centric company like Miro empower teams to make faster, evidence-based decisions? By leveraging Snowflake as the foundation of its data ecosystem. This session explores how Miro integrates hundreds of data sources with substantial data volumes into a single, unified platform and manages data transformations at scale. Most of all, learn how Miro is scaling business outcomes through effortless data sharing and timely analytics.

Boels Rental is one of Europe’s leading providers of equipment and tool hire. The Data & Analytics program sought to unify data on Snowflake, standardize analytics, and establish a cohesive framework for data-driven decision-making.

In this session, Roy Louvenberg and Ralph Knoops of Boels Rental will share how the team leveraged Fivetran to rapidly and securely connect diverse data sources into a single central platform—empowering business operations, driving insights, and maximizing ROI.

Join this session to discover:

• Why fresh, reliable data is essential for analytics and business processes at Boels Rental
• How Fivetran enables seamless integration from a wide range of sources, including SAP S/4HANA, Db2, and SQL Server

In this session, discover how organizations are extracting actionable insights from text, documents, images and audio — all in Snowflake Cortex AI. This session reveals practical techniques for building integrated multimodal analytics pipelines using Cortex AI SQL functions and Document AI. Learn how to orchestrate complex, multi-step data analysis across previously siloed data types — simply, with SQL.

Seats are limited to 16 participants to ensure a focused, interactive discussion. Registration on request. 

This roundtable will explore how top financial institutions are leveraging Snowflake for data engineering, analytics, and the next wave of AI workloads; the latest Snowflake innovations in AI; and best practices for accelerating regulatory reporting (DORA, ESG, BCBS) and harnessing Snowflake's marketplace of 800+ FSI data providers.

Join Toyota Motor Europe to discover their journey towards a fully operationalized Data Mesh with dbt and Snowflake.

TME (Toyota Motor Europe), one of the biggest automobile manufacturers, oversees the wholesale sales and marketing of Toyota and Lexus vehicles in Europe. This session will showcase how dbt Cloud and Snowflake are supporting their data strategy.

They will elaborate on the challenges faced along the way and how their platform supports their future vision, e.g. enabling advanced real-time analytics, scaling while maintaining governance and best practices, and building a strong data foundation to launch their AI/ML initiatives.

In this episode of Hub & Spoken, Jason Foster, CEO and Founder of Cynozure, speaks with Roberto Maranca, data & digital transformation expert and author of Data Excellence. They explore what it really means to build a 'data fit' organisation, one that treats data capability like physical fitness: understanding where you are, training for where you want to be, and making improvement a daily routine. Drawing from ancient philosophy and modern business, Roberto explains how concepts from Socrates and Aristotle can help leaders rethink culture, value and human responsibility in an AI-driven world.

Together, they discuss how organisations can:
• Shift from seeing data as a tech issue to a leadership mindset
• Build collective intelligence and cultural readiness
• Stay human in the age of intelligent machines

Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.

Brought to You By:

•⁠ Statsig — The unified platform for flags, analytics, experiments, and more.
•⁠ Linear — The system for modern product development.

Addy Osmani is Head of Chrome Developer Experience at Google, where he leads teams focused on improving performance, tooling, and the overall developer experience for building on the web. If you’ve ever opened Chrome’s Developer Tools, you’ve definitely used features Addy has built. He’s also the author of several books, including his latest, Beyond Vibe Coding, which explores how AI is changing software development.

In this episode of The Pragmatic Engineer, I sit down with Addy to discuss how AI is reshaping software engineering workflows, the tradeoffs between speed and quality, and why understanding generated code remains critical. We dive into his article The 70% Problem, which explains why AI tools accelerate development but struggle with the final 30% of software quality—and why this last 30% is tackled easily by software engineers who understand how the system actually works.

Timestamps:
(00:00) Intro
(02:17) Vibe coding vs. AI-assisted engineering
(06:07) How Addy uses AI tools
(13:10) Addy’s learnings about applying AI for development
(18:47) Addy’s favorite tools
(22:15) The 70% Problem
(28:15) Tactics for efficient LLM usage
(32:58) How AI tools evolved
(34:29) The case for keeping expectations low and control high
(38:05) Autonomous agents and working with them
(42:49) How the EM and PM role changes with AI
(47:14) The rise of new roles and shifts in developer education
(48:11) The importance of critical thinking when working with AI
(54:08) LLMs as a tool for learning
(1:03:50) Rapid questions

The Pragmatic Engineer deepdives relevant for this episode:
•⁠ Vibe Coding as a software engineer
•⁠ How AI-assisted coding will change software engineering: hard truths
•⁠ AI Engineering in the real world
•⁠ The AI Engineering stack
•⁠ How Claude Code is built

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Miami CDO Cheriene Floyd shares how Generative AI is shifting the way cities think about their data.

A Chief Data Officer’s role in cities is to turn data into a strategic asset, enabling insights that can be leveraged for resident impact. How is this responsibility changing in the age of generative AI?

We’re joined today by Cheriene Floyd to discuss the shift in how CDOs are making data work for their residents. Floyd discusses her path from serving as a strategic planning and performance manager in the City of Miami to becoming the city’s first Chief Data Officer. During her ten years of service as a CDO, she has come to view the role as upholding three key pillars: data governance, analytics, and capacity-building, helping departments connect the dots between disparate datasets to see the bigger picture.

As AI changes our relationship to data, it further highlights the adage, “garbage in, garbage out.” Floyd discusses how broad awareness of this truth has manifested in greater buy-in among city staff to leverage data to solve problems, while private sector AI adoption has shifted residents’ expectations when seeking public services. Consequently, the task of shepherding public data becomes even more important, and she offers recommendations from her own experiences to meet these challenges.

Learn more about GovEx!

On today's Promoted Episode of Experiencing Data, I’m talking with Lucas Thelosen, CEO of Gravity and creator of Orion, an AI analyst transforming how data teams work. Lucas was head of PS for Looker, and eventually became Head of Product for Google’s Data and AI Cloud prior to starting his own data product company. We dig into how his team built Orion, the challenge of keeping AI accurate and trustworthy when doing analytical work, and how they’re thinking about the balance of human control with automation when their product acts as a force multiplier for human analysts.

In addition to talking about the product, we also talk about how Gravity arrived at specific enough use cases for this technology that a market would be willing to pay for, and how they’re thinking about pricing in today’s more “outcomes-based” environment. 

Incidentally, one thing I didn’t know when I first agreed to consider having Gravity and Lucas on my show was that Lucas has been a long-time proponent of data product management and operating with a product mindset. In this episode, he shares the “ah-hah” moment where things clicked for him around building data products in this manner. Lucas shares how pivotal this moment was for him, and how it helped accelerate his career from Looker to Google and now Gravity.

If you’re leading a data team, are a forward-thinking CDO, or are interested in commercializing your own analytics/AI product, my chat with Lucas should inspire you!

Highlights / Skip to:

Lucas’s breakthrough came when he embraced a data product management mindset (02:43)
How Lucas thinks about Gravity as being the instrumentalists in an orchestra, conducted by the user (04:31)
Finding product-market fit by solving for a common analytics pain point (08:11)
Analytics product and dashboard adoption challenges: why dashboards die and thinking of analytics as changing the business gradually (22:25)
What outcome-based pricing means for AI and analytics (32:08)
The challenge of defining guardrails and ethics for AI-based analytics products [just in case somebody wants to “fudge the numbers”] (46:03)
Lucas’s closing thoughts about what AI is unlocking for analysts and how to position your career for the future (48:35)

Special Bonus for DPLC Community Members

Are you a member of the Data Product Leadership Community? After our chat, I invited Lucas to come give a talk about his journey of moving from “data” to “product” and adopting a producty mindset for analytics and AI work. He was more than happy to oblige. Watch for this in late 2025/early 2026 on our monthly webinar and group discussion calendar.

Note: today’s episode is one of my rare Promoted Episodes. Please help support the show by visiting Gravity’s links below:

Quotes from Today’s Episode

“The whole point of data and analytics is to help the business evolve. When your reports make people ask new questions, that’s a win. If the conversations today sound different than they did three months ago, it means you’ve done your job, you’ve helped move the business forward.” — Lucas

“Accuracy is everything. The moment you lose trust, the business, the use case, it's all over. Earning that trust back takes a long time, so we made accuracy our number one design pillar from day one.” — Lucas 

“Language models have changed the game in terms of scale. Suddenly, we’re facing all these new kinds of problems, not just in AI, but in the old-school software sense too. Things like privacy, scalability, and figuring out who’s responsible.” — Brian

“Most people building analytics products have never been analysts, and that’s a huge disadvantage. If data doesn’t drive action, you’ve missed the mark. That’s why so many dashboards die quickly.” — Lucas

“Re: collecting feedback so you know if your UX is good: I generally agree that qualitative feedback is the best place to start, not analytics [on your analytics!]. Especially in UX, analytics measure usage aspects of the product, not the subjective human experience. Experience is a collection of feelings and perceptions about how something went.” — Brian

Links

Gravity: https://www.bygravity.com
LinkedIn: https://www.linkedin.com/in/thelosen/
Email Lucas and team: [email protected]