AI agents are rapidly evolving, from early efforts to simulate human cognition and social behavior to sophisticated multi-agent systems that can act as practical collaborators in everyday work. This talk covers advances in multi-agent frameworks that simulate social reasoning, cooperation, and emergent communication, and discusses how these capabilities are being translated into workplace applications supporting brainstorming, narrative development, and research design.
talk-data.com
Topic
AI/ML
Artificial Intelligence/Machine Learning
9014
tagged
Activity Trend
Top Events
Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. • Linear – The system for modern product development. — Addy Osmani is Head of Chrome Developer Experience at Google, where he leads teams focused on improving performance, tooling, and the overall developer experience for building on the web. If you’ve ever opened Chrome’s Developer Tools bar, you’ve definitely used features Addy has built. He’s also the author of several books, including his latest, Beyond Vibe Coding, which explores how AI is changing software development. In this episode of The Pragmatic Engineer, I sit down with Addy to discuss how AI is reshaping software engineering workflows, the tradeoffs between speed and quality, and why understanding generated code remains critical. We dive into his article The 70% Problem, which explains why AI tools accelerate development but struggle with the final 30% of software quality—and why this last 30% is tackled easily by software engineers who understand how the system actually works. — Timestamps (00:00) Intro (02:17) Vibe coding vs. AI-assisted engineering (06:07) How Addy uses AI tools (13:10) Addy’s learnings about applying AI for development (18:47) Addy’s favorite tools (22:15) The 70% Problem (28:15) Tactics for efficient LLM usage (32:58) How AI tools evolved (34:29) The case for keeping expectations low and control high (38:05) Autonomous agents and working with them (42:49) How the EM and PM role changes with AI (47:14) The rise of new roles and shifts in developer education (48:11) The importance of critical thinking when working with AI (54:08) LLMs as a tool for learning (1:03:50) Rapid questions — The Pragmatic Engineer deepdives relevant for this episode: • Vibe Coding as a software engineer • How AI-assisted coding will change software engineering: hard truths • AI Engineering in the real world • The AI Engineering stack • How Claude Code is built — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
--- Miami CDO Cheriene Floyd shares how Generative AI is shifting the way cities think about their data.
--- A Chief Data Officer’s role in cities is to turn data into a strategic asset, enabling insights that can be leveraged for resident impact. How is this responsibility changing in the age of generative AI?
--- We’re joined today by Cheriene Floyd to discuss the shift in how CDOs are making data work for their residents. Floyd discusses her path from serving as a strategic planning and performance manager in the City of Miami to becoming the city’s first Chief Data Officer. During her ten years of service as a CDO, she has come to view the role as upholding three key pillars: data governance, analytics, and capacity-building, helping departments connect the dots between disparate datasets to see the bigger picture.
--- As AI changes our relationship to data, it further highlights the adage, “garbage in, garbage out.” Floyd discusses how broad awareness of this truth has manifested in greater buy-in among city staff to leverage data to solve problems, while private sector AI adoption has shifted residents’ expectations when seeking public services. Consequently, the task of shepherding public data becomes even more important, and she offers recommendations from her own experiences to meet these challenges.
--- Learn more about GovEx!
Send us a text Exploring the intersection of creativity and innovation, Geoff Thatcher, Founder and CCO Creative Principals, shares his insights on how AI is revolutionizing live experiences. From personalized exhibits to universal storytelling, Geoff delves into the possibilities and pitfalls of harnessing AI to elevate our connections with others. 01:39 Introducing Geoff Thatcher 02:48 Thinking about AI Differently 13:47 What Does it Mean to "Create" Now? 19:32 Augmented Intelligence 22:27 Death By PowerPoint 23:57 Five Rules for Using AI 24:47 Difference between Better and Easier? 30:16 Don't Let AI Steal Moments of Inspiration 33:18 Always Use the Most Reliable Source 34:57 Use AI to Tell Stories 37:36 The Worry of Getting Lazy 39:48 Humanizing AI! LinkedIn: https://www.linkedin.com/in/geoffthatcher/ Website: https://www.creativeprincipals.com/ Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Jeremiah Lowin, founder of Prefect , returns to the show to discuss the seismic shift in the data and AI landscape since our last conversation a few years ago. He shares the wild origin story of FastMCP, a project he started to create a more "Pythonic" wrapper for Anthropic's Model Context Protocol (MCP).
Jeremiah explains how this side project was incorporated into Anthropic's official SDK and then exploded to over a million downloads a day after MCP gained support from OpenAI and Google. He clarifies why this is an complementary expansion for Prefect, not a pivot , and provides a simple analogy for MCP as the "USB-C for AI agents". Most surprisingly, Jeremiah reveals that the primary adoption of MCP isn't for external products, but internally by data teams who are using it to finally fulfill the promise of the self-serve semantic layer and create a governable, "LLM-free zone" for AI tools.
The potential of machine learning today is extraordinary, yet many aspiring developers and tech professionals find themselves daunted by its complexity. Whether you're looking to enhance your skill set and apply machine learning to real-world projects or are simply curious about how AI systems function, this book is your jumping-off place. With an approachable yet deeply informative style, author Aurélien Géron delivers the ultimate introductory guide to machine learning and deep learning. Drawing on the Hugging Face ecosystem, with a focus on clear explanations and real-world examples, the book takes you through cutting-edge tools like Scikit-Learn and PyTorch—from basic regression techniques to advanced neural networks. Whether you're a student, professional, or hobbyist, you'll gain the skills to build intelligent systems. Understand ML basics, including concepts like overfitting and hyperparameter tuning Complete an end-to-end ML project using scikit-Learn, covering everything from data exploration to model evaluation Learn techniques for unsupervised learning, such as clustering and anomaly detection Build advanced architectures like transformers and diffusion models with PyTorch Harness the power of pretrained models—including LLMs—and learn to fine-tune them Train autonomous agents using reinforcement learning
On today's Promoted Episode of Experiencing Data, I’m talking with Lucas Thelosen, CEO of Gravity and creator of Orion, an AI analyst transforming how data teams work. Lucas was head of PS for Looker, and eventually became Head of Product for Google’s Data and AI Cloud prior to starting his own data product company. We dig into how his team built Orion, the challenge of keeping AI accurate and trustworthy when doing analytical work, and how they’re thinking about the balance of human control with automation when their product acts as a force multiplier for human analysts.
In addition to talking about the product, we also talk about how Gravity arrived at specific enough use cases for this technology that a market would be willing to pay for, and how they’re thinking about pricing in today’s more “outcomes-based” environment.
Incidentally, one thing I didn’t know when I first agreed to consider having Gravity and Lucas on my show was that Lucas has been a long-time proponent of data product management and operating with a product mindset. In this episode, he shares the “ah-hah” moment where things clicked for him around building data products in this manner. Lucas shares how pivotal this moment was for him, and how it helped accelerate his career from Looker to Google and now Gravity.
If you’re leading a data team, you’re a forward-thinking CDO, or you’re interested in commercializing your own analytics/AI product, my chat with Lucas should inspire you!
Highlights/ Skip to:
Lucas’s breakthrough came when he embraced a data product management mindset (02:43) How Lucas thinks about Gravity as being the instrumentalists in an orchestra, conducted by the user (4:31) Finding product-market fit by solving for a common analytics pain point (8:11) Analytics product and dashboard adoption challenges: why dashboards die and thinking of analytics as changing the business gradually (22:25) What outcome-based pricing means for AI and analytics (32:08) The challenge of defining guardrails and ethics for AI-based analytics products [just in case somebody wants to “fudge the numbers”] (46:03) Lucas’ closing thoughts about what AI is unlocking for analysts and how to position your career for the future (48:35)
Special Bonus for DPLC Community Members Are you a member of the Data Product Leadership Community? After our chat, I invited Lucas to come give a talk about his journey of moving from “data” to “product” and adopting a producty mindset for analytics and AI work. He was more than happy to oblige. Watch for this in late 2025/early 2026 on our monthly webinar and group discussion calendar.
Note: today’s episode is one of my rare Promoted Episodes. Please help support the show by visiting Gravity’s links below:
Quotes from Today’s Episode “The whole point of data and analytics is to help the business evolve. When your reports make people ask new questions, that’s a win. If the conversations today sound different than they did three months ago, it means you’ve done your job, you’ve helped move the business forward.” — Lucas
“Accuracy is everything. The moment you lose trust, the business, the use case, it's all over. Earning that trust back takes a long time, so we made accuracy our number one design pillar from day one.” — Lucas
“Language models have changed the game in terms of scale. Suddenly, we’re facing all these new kinds of problems, not just in AI, but in the old-school software sense too. Things like privacy, scalability, and figuring out who’s responsible.” — Brian
“Most people building analytics products have never been analysts, and that’s a huge disadvantage. If data doesn’t drive action, you’ve missed the mark. That’s why so many dashboards die quickly.” — Lucas
“Re: collecting feedback so you know if your UX is good: I generally agree that qualitative feedback is the best place to start, not analytics [on your analytics!] Especially in UX, analytics measure usage aspects of the product, not the subject human experience. Experience is a collection of feelings and perceptions about how something went.” — Brian
Links
Gravity: https://www.bygravity.com LinkedIn: https://www.linkedin.com/in/thelosen/ Email Lucas and team: [email protected]
What happens when an AI starts asking better questions than you? In this 60-minute episode, I share the real story behind “The AI That Thinks Like an Analyst” — a Streamlit + GPT-4 project that changed the way I see data, curiosity, and creativity. This isn’t a technical tutorial. It’s a journey into the mind of a data professional learning to think deeper — and how building this AI taught me the most human lesson of all: how to stay curious. We’ll explore: Why the hardest part of analysis isn’t code — it’s curiosity.How I built a privacy-first Streamlit app that generates questions instead of answers.What AI can teach us about slowing down, observing, and thinking like explorers.The moment I realized data analysis and self-reflection are the same skill.If you’ve ever felt stuck staring at your data, unsure what to ask next — this episode is for you. 📖 Read the full story: https://mukundansankar.substack.com/p/the-no-upload-ai-analyst-v4-secure Join the Discussion (comments hub): https://mukundansankar.substack.com/notesTools I use for my Podcast and Affiliate PartnersRecording Partner: Riverside → Sign up here (affiliate)Host Your Podcast: RSS.com (affiliate )Research Tools: Sider.ai (affiliate)Sourcetable AI: Join Here(affiliate)🔗 Connect with Me:Free Email NewsletterWebsite: Data & AI with MukundanGitHub: https://github.com/mukund14Twitter/X: @sankarmukund475LinkedIn: Mukundan SankarYouTube: Subscribe
In this talk, Thaddée will dive into actionable strategies for managing data the right way and techniques to improve output quality in AI-based systems.
Date: 2025-10-28. AI safety discourse often splits into immediate harm vs catastrophic risk framings. In this keynote, the speaker argues that the two research streams will benefit from increased cross-talk and a greater number of synergistic projects. A zero-sum framing on attention and resources between the two communities is incorrect and does not serve either side's goals. Recent theoretical work unifies risk pathways between the two fields and suggests concrete synergies and opportunities for future collaboration. The talk discusses how shared research and monitoring infrastructure, such as UK AISI Inspect, can benefit both areas; how methodological approaches from human behavioral science can be ported into AI behavioral science applied to existential risk research; and how technical solutions from catastrophic risk research can be applied to mitigate immediate societal harms.
AI safety discourse often splits into immediate harm vs catastrophic risk framings. In this keynote, I argue that the two research streams will benefit from increased cross-talk and a greater number of synergistic projects. A zero-sum framing on attention and resources between the two communities is incorrect and does not serve either side's goals. Recent theoretical work, including on accumulative existential risk, unifies risk pathways between the two fields. Building on this, I suggest concrete synergies that are already in place - as well as opportunities for future collaboration. I will discuss how shared research and monitoring infrastructure, such as UK AISI Inspect, can benefit both areas; how methodological approaches from human behavioral science, currently used in immediate harms research, can be ported into AI behavioral science applied to existential risk research; and how technical solutions from catastrophic risk research can be applied to mitigate immediate societal harms. We have a shared goal of building a better, safer future for everyone. Let's work together!
AI safety discourse often splits into immediate harm vs catastrophic risk framings. In this keynote, I argue that the two research streams will benefit from increased cross-talk and a greater number of synergistic projects. A zero-sum framing on attention and resources between the two communities is incorrect and does not serve either side's goals. Recent theoretical work, including on accumulative existential risk, unifies risk pathways between the two fields. Building on this, I suggest concrete synergies that are already in place - as well as opportunities for future collaboration.
I will discuss how shared research and monitoring infrastructure, such as UK AISI Inspect, can benefit both areas; how methodological approaches from human behavioral science, currently used in immediate harms research, can be ported into AI behavioral science applied to existential risk research; and how technical solutions from catastrophic risk research can be applied to mitigate immediate societal harms. We have a shared goal of building a better, safer future for everyone. Let's work together!