talk-data.com

Topic: GenAI (Generative AI)
Tags: ai, machine_learning, llm
12 tagged activities

Activity Trend: 192 peak/qtr (2020-Q1 to 2026-Q1)

Activities
Filtering by: Joe Reis

Sujay Dutta and Sidd Rajagopal, authors of "Data as the Fourth Pillar," join the show to make the compelling case that for C-suite leaders obsessed with AI, data must be elevated to the same level as people, process, and technology. They provide a practical playbook for Chief Data Officers (CDOs) to escape the "cost center" trap by focusing on the "demand side" (business value) instead of just the "supply side" (technology). They also introduce frameworks like "Data Intensity" and "Total Addressable Value (TAV)" for data. We also tackle the reality of AI "slopware" and the "Great Pacific garbage patch" of junk data, explaining how to build the critical "context" (or "Data Intelligence Layer") that most GenAI projects are missing. Finally, they explain why the CDO must report directly to the CEO to play "offense," not defense.

Face To Face
by Shachar Meir, Guy Fighel (Hetz Ventures), Rob Hulme, Sarah Levy (Euno), Harry Gollop (Cognify Search), Joe Reis (DeepLearning.AI)

Practicing analytics well takes more than just tools and tech. It requires data modeling practices that unify and empower all teams within analytics, from engineers to analysts. This is especially true as AI becomes a part of analytics. Without a governed data model that provides consistent data interpretation, AI tools are left to guess. Join panelists Joe Reis, Sarah Levy, Harry Gollop, Rob Hulme, Shachar Meir, and Guy Fighel, as they share battle-tested advice on overcoming conflicting definitions and accurately mapping business intent to data, reports and dashboards at scale. This panel is for data & analytics engineers seeking a clear framework to capture business logic across layers, and for data leaders focused on building a reliable foundation for Gen AI.

What are the hidden dangers lurking beneath the surface of vibe-coded apps and hyped-up CEO promises? And what is Influence Ops?

I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky.

We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term.

Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized, globally scaled Influence Ops to manipulate public opinion and democracy itself.

Find Disesdi Susanna Cox:
Substack: https://disesdi.substack.com/
Socials (LinkedIn, X, etc.): @Disesdi

KEY MOMENTS:
00:26 - Who is Disesdi Susanna Cox?
03:52 - What are Red, Blue, and Purple Teams in Security?
07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash
12:32 - How GenAI Security is Different (and Worse) than Classical ML
14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the "T" Dating App
18:34 - The Unethical Problem with "Vibe Coding"
24:32 - "Vibe Companies": The Gaslighting from CEOs About AI
30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing
33:13 - Deconstructing the "Woke AI" Panic
44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops
52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn

podcast_episode
by Vijay Yadav (Center for Mathematical Sciences at Merck), Joe Reis (DeepLearning.AI)

Vijay Yadav (Director of Data Science at Merck) joins me to chat about a very interesting project he launched at Merck involving LLMs in production. A big part of this discussion is how to make data ready for generative AI.

This is a great example of an LLM-native use case in production, which is still rare right now. There's a lot to learn from here. Enjoy!

LinkedIn: https://www.linkedin.com/in/vijay-yadav-ds/

Kishore Aradhya and I both teach, and we agree that today's landscape makes it very difficult to determine what and how to teach. Against the backdrop of generative AI, we discuss the role of universities in teaching tech and data, the role of a teacher, how to teach data, and much more.

DSPy - https://github.com/stanfordnlp/dspy

Wendy Turner-Williams joins me to chat about her new project and community, The Association.ai, unleashing generative AI in organizations, starting and building a community, and much more.

LinkedIn: https://www.linkedin.com/in/wendy-turner-williams-8b66039/

The Association: https://theassociation.ai/

ChatGPT was the iPhone moment for AI, and things are moving insanely quickly. What do generative AI models mean for us, especially children, who are arguably the last of the Pre-AI generation? I dive into some thoughts this week about how we need to work alongside the machines, the impact of generative AI on kids, and so on. Buckle up. We are in for a very interesting next few years as we sort out where AI fits into our day-to-day lives.

#data #datascience #dataengineering #chatgpt #ai


If you like this show, give it a 5-star rating on your favorite podcast platform.

Purchase Fundamentals of Data Engineering at your favorite bookseller.

Check out my substack: https://joereis.substack.com/

Ryan Dolley and I chat about why BI needs to evolve, moving beyond dashboards, the impact of generative AI on analytics, SuperDataBros, and more.

#data #analytics #businessintelligence #datascience
