We go inside Mediahuis to see how a small GenAI team is transforming newsroom workflows without losing editorial judgment. From RAG search to headline suggestions and text‑to‑video assists, this episode shares what works, what doesn't, and how adoption spreads across brands. You'll hear about:
- Ten priority use cases shipped across the group
- Headline and summary suggestions that boost clarity and speed
- RAG‑powered search turning archives into instant context
- Text‑to‑video tools that free up local video teams
- The hurdles of adoption, quality, and scaling prototypes into production
Their playbook blends engineering discipline with editorial empathy: use rules where you can, prompt carefully when you must, and always keep journalists in the loop. We also cover policies, guardrails, AI literacy, and how to survive model churn with reusable templates and grounded tests. The result: a practical path to AI in media — protecting judgment, raising quality, and scaling tools without losing each brand's voice. 🎧 If this sparks ideas for your newsroom or product team, follow the show, share with a colleague, and leave a quick review with your favorite takeaway.
In this episode of Data Topics, Ben speaks with Kim Smets, VP Data & AI at Telenet, about his journey from early machine learning work to leading enterprise-wide AI transformation at Telenet. Kim shares how he built a central data & AI team, shifted from fragmented reporting to product thinking, and embedded governance that actually works. They discuss the importance of simplicity, storytelling, and sustainable practices in making AI easy, relevant, and famous across the business. From GenAI exploration to real-world deployment, this episode is packed with practical insights on scaling AI with purpose.
Summary
In this episode of the Data Engineering Podcast, Ariel Pohoryles, head of product marketing for Boomi's data management offerings, talks about a recent survey of 300 data leaders on how organizations are investing in data to scale AI. He shares a paradox uncovered in the research: while 77% of leaders trust the data feeding their AI systems, only 50% trust their organization's data overall. Ariel explains why truly productionizing AI demands broader, continuously refreshed data with stronger automation and governance, and highlights the challenges posed by unstructured data and vector stores. The conversation covers the need to shift from manual reviews to automated pipelines, the resurgence of metadata and master data management, and the importance of guardrails, traceability, and agent governance. Ariel also predicts a growing convergence between data teams and application integration teams and advises leaders to focus on high-value use cases, aggressive pipeline automation, and cataloging and governing the coming sprawl of AI agents, all while using AI to accelerate data engineering itself.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI engineering, streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Composable data infrastructure is great, until you spend all of your time gluing it together. Bruin is an open source framework, driven from the command line, that makes integration a breeze. Write Python and SQL to handle the business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. Bruin allows you to build end-to-end data workflows using AI, has connectors for hundreds of platforms, and helps data teams deliver faster. Teams that use Bruin need less engineering effort to process data and benefit from a fully integrated data platform. Go to dataengineeringpodcast.com/bruin today to get started. And for dbt Cloud customers, they'll give you $1,000 credit to migrate to Bruin Cloud.
Your host is Tobias Macey and today I'm interviewing Ariel Pohoryles about data management investments that organizations are making to enable them to scale AI implementations.
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by describing the motivation and scope of your recent survey on data management investments for AI across your respondents?
- What are the key takeaways that were most significant to you?
- The survey reveals a fascinating paradox: 77% of leaders trust the data used by their AI systems, yet only half trust their organization's overall data quality. For our data engineering audience, what does this suggest about how companies are currently sourcing data for AI? Does it imply they are using narrow, manually-curated "golden datasets," and what are the technical challenges and risks of that approach as they try to scale?
- The report highlights a heavy reliance on manual data quality processes, with one expert noting companies feel it's "not reliable to fully automate validation" for external or customer data. At the same time, maturity in "Automated tools for data integration and cleansing" is low, at only 42%. What specific technical hurdles or organizational inertia are preventing teams from adopting more automation in their data quality and integration pipelines?
- There was a significant point made that with generative AI, "biases can scale much faster," making automated governance essential. From a data engineering perspective, how does the data management strategy need to evolve to support generative AI versus traditional ML models? What new types of data quality checks, lineage tracking, or monitoring for feedback loops are required when the model itself is generating new content based on its own outputs?
- The report champions a "centralized data management platform" as the "connective tissue" for reliable AI. How do you see scale and data maturity impacting the realities of that effort?
- How do architectural patterns in the shape of cloud warehouses, lakehouses, data mesh, data products, etc. factor into that need for centralized/unified platforms?
- A surprising finding was that a third of respondents have not fully grasped the risk of significant inaccuracies in their AI models if they fail to prioritize data management. In your experience, what are the biggest blind spots for data and analytics leaders?
- Looking at the maturity charts, companies rate themselves highly on "Developing a data management strategy" (65%) but lag significantly in areas like "Automated tools for data integration and cleansing" (42%) and "Conducting bias-detection audits" (24%). If you were advising a data engineering team lead based on these findings, what would you tell them to prioritize in the next 6-12 months to bridge the gap between strategy and a truly scalable, trustworthy data foundation for AI?
- The report states that 83% of companies expect to integrate more data sources for their AI in the next year. For a data engineer on the ground, what is the most important capability they need to build into their platform to handle this influx?
- What are the most interesting, innovative, or unexpected ways that you have seen teams addressing the new and accelerated data needs for AI applications?
- What are some of the noteworthy trends or predictions that you have for the near-term future of the impact that AI is having or will have on data teams and systems?
Contact Info
- LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links
- Boomi
- Data Management
- Integration & Automation Demo
- Agentstudio
- Data Connector Agent Webinar
- Survey Results
- Data Governance
- Shadow IT
- Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sujay Dutta and Sidd Rajagopal, authors of "Data as the Fourth Pillar," join the show to make the compelling case that for C-suite leaders obsessed with AI, data must be elevated to the same level as people, process, and technology. They provide a practical playbook for Chief Data Officers (CDOs) to escape the "cost center" trap by focusing on the "demand side" (business value) instead of just the "supply side" (technology). They also introduce frameworks like "Data Intensity" and "Total Addressable Value (TAV)" for data. We also tackle the reality of AI "slopware" and the "Great Pacific garbage patch" of junk data, explaining how to build the critical "context" (or "Data Intelligence Layer") that most GenAI projects are missing. Finally, they explain why the CDO must report directly to the CEO to play "offense," not defense.
Data quality and AI reliability are two sides of the same coin in today's technology landscape. Organizations rushing to implement AI solutions often discover that their underlying data infrastructure isn't prepared for these new demands. But what specific data quality controls are needed to support successful AI implementations? How do you monitor unstructured data that feeds into your AI systems? When hallucinations occur, is it really the model at fault, or is your data the true culprit? Understanding the relationship between data quality and AI performance is becoming essential knowledge for professionals looking to build trustworthy AI systems. Shane Murray is a seasoned data and analytics executive with extensive experience leading digital transformation and data strategy across global media and technology organizations. He currently serves as Senior Vice President of Digital Platform Analytics at Versant Media, where he oversees the development and optimization of analytics capabilities that drive audience engagement and business growth. In addition to his corporate leadership role, he is a founding member of InvestInData, an angel investor collective of data leaders supporting early-stage startups advancing innovation in data and AI. Prior to joining Versant Media, Shane spent over three years at Monte Carlo, where he helped shape AI product strategy and customer success initiatives as Field CTO. Earlier, he spent nearly a decade at The New York Times, culminating as SVP of Data & Insights, where he was instrumental in scaling the company’s data platforms and analytics functions during its digital transformation. His earlier career includes senior analytics roles at Accenture Interactive, Memetrics, and Woolcott Research. Based in New York, Shane continues to be an active voice in the data community, blending strategic vision with deep technical expertise to advance the role of data in modern business. 
In the episode, Richie and Shane explore AI disasters and success stories, the concept of being AI-ready, essential roles and skills for AI projects, data quality's impact on AI, and much more.
Links Mentioned in the Show:
- Versant Media
- Connect with Shane
- Course: Responsible AI Practices
- Related Episode: Scaling Data Quality in the Age of Generative AI with Barr Moses, CEO of Monte Carlo Data, Prukalpa Sankar, Cofounder at Atlan, and George Fraser, CEO at Fivetran
- Rewatch RADAR AI
New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
--- Miami CDO Cheriene Floyd shares how Generative AI is shifting the way cities think about their data.
--- A Chief Data Officer’s role in cities is to turn data into a strategic asset, enabling insights that can be leveraged for resident impact. How is this responsibility changing in the age of generative AI?
--- We’re joined today by Cheriene Floyd to discuss the shift in how CDOs are making data work for their residents. Floyd discusses her path from serving as a strategic planning and performance manager in the City of Miami to becoming the city’s first Chief Data Officer. During her ten years of service as a CDO, she has come to view the role as upholding three key pillars: data governance, analytics, and capacity-building, helping departments connect the dots between disparate datasets to see the bigger picture.
--- As AI changes our relationship to data, it further highlights the adage, “garbage in, garbage out.” Floyd discusses how broad awareness of this truth has manifested in greater buy-in among city staff to leverage data to solve problems, while private sector AI adoption has shifted residents’ expectations when seeking public services. Consequently, the task of shepherding public data becomes even more important, and she offers recommendations from her own experiences to meet these challenges.
--- Learn more about GovEx!
The promise of AI in enterprise settings is enormous, but so are the privacy and security challenges. How do you harness AI's capabilities while keeping sensitive data protected within your organization's boundaries? Private AI—using your own models, data, and infrastructure—offers a solution, but implementation isn't straightforward. What governance frameworks need to be in place? How do you evaluate non-deterministic AI systems? When should you build in-house versus leveraging cloud services? As data and software teams evolve in this new landscape, understanding the technical requirements and workflow changes is essential for organizations looking to maintain control over their AI destiny. Manasi Vartak is Chief AI Architect and VP of Product Management (AI Platform) at Cloudera. She is a product and AI leader with more than a decade of experience at the intersection of AI infrastructure, enterprise software, and go-to-market strategy. At Cloudera, she leads product and engineering teams building low-code and high-code generative AI platforms, driving the company’s enterprise AI strategy and enabling trusted AI adoption across global organizations. Before joining Cloudera through its acquisition of Verta, Manasi was the founder and CEO of Verta, where she transformed her MIT research into enterprise-ready ML infrastructure. She scaled the company to multi-million ARR, serving Fortune 500 clients in finance, insurance, and capital markets, and led the launch of enterprise MLOps and GenAI products used in mission-critical workloads. Manasi earned her PhD in Computer Science from MIT, where she pioneered model management systems such as ModelDB — foundational work that influenced the development of tools like MLflow. Earlier in her career, she held research and engineering roles at Twitter, Facebook, Google, and Microsoft. 
In the episode, Richie and Manasi explore AI's role in financial services, the challenges of AI adoption in enterprises, the importance of data governance, the evolving skills needed for AI development, the future of AI agents, and much more.
Links Mentioned in the Show:
- Cloudera
- Cloudera Evolve Conference
- Cloudera Agent Studio
- Connect with Manasi
- Course: Introduction to AI Agents
- Related Episode: RAG 2.0 and The New Era of RAG Agents with Douwe Kiela, CEO at Contextual AI & Adjunct Professor at Stanford University
- Rewatch RADAR AI
New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
Adnan Hodzic, Lead Engineer and GenAI Delivery Lead at ING, joined Yuliia to discuss how ING successfully scaled generative AI from experimentation to enterprise production. With over 60 GenAI applications now running in production across the bank, Adnan explains ING's pragmatic approach: building internal AI platforms that balance innovation speed with regulatory compliance, treating European banking regulations as features rather than constraints, and fostering a culture where 300+ experiments can safely run while only the best reach production. He discusses the critical role of their Prompt Flow Studio in democratizing AI development, why customer success teams saw immediate productivity gains, how ING structures AI governance without killing innovation, and his perspective on the hype cycle versus real enterprise value.
Adnan's blog: https://foolcontrol.org
Adnan's YouTube channel: https://www.youtube.com/AdnanHodzic
LinkedIn: https://linkedin.com/in/AdnanHodzic
Twitter/X: https://twitter.com/fooctrl
The role of data analysts is evolving, not disappearing. With generative AI transforming the industry, many wonder if their analytical skills will soon become obsolete. But how is the relationship between human expertise and AI tools really changing? While AI excels at coding, debugging, and automating repetitive tasks, it struggles with understanding complex business problems and domain-specific challenges. What skills should today's data professionals focus on to remain relevant? How can you leverage AI as a partner rather than viewing it as a replacement? The balance between technical expertise and business acumen has never been more critical in navigating this changing landscape. Mo Chen is a Data & Analytics Manager with over seven years of experience in financial and banking data. Currently at NatWest Group, Mo leads initiatives that enhance data management, automate reporting, and improve decision-making across the organization. After earning an MSc in Finance & Economics from the University of St Andrews, Mo launched a career in risk and credit portfolio management before transitioning into analytics. Blending economics, finance, and data engineering, Mo is skilled at turning large-scale financial data into actionable insight that supports efficiency and strategic planning. Beyond corporate life, Mo has become a passionate educator and community-builder. On YouTube, Mo hosts a fast-growing channel (185K+ subscribers, with millions of views) where he breaks down complex analytics concepts into bite-sized, actionable lessons. In the episode, Richie and Mo explore the evolving role of data analysts, the impact of AI on coding and debugging, the importance of domain knowledge for career switchers, effective communication strategies in data analysis, and much more. 
Links Mentioned in the Show:
- Mo's Website - Build a Data Portfolio Website
- Mo's YouTube Channel
- Connect with Mo
- Get Certified as a Data Analyst
- Related Episode: Career Skills for Data Professionals with Wes Kao, Co-Founder of Maven
- Rewatch RADAR AI
New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
In this episode, we talked with Ranjitha Kulkarni, a machine learning engineer with a rich career spanning Microsoft, Dropbox, and now NeuBird AI. Ranjitha shares her journey into ML and NLP, her work building recommendation systems, early AI agents, and cutting-edge LLM-powered products. She offers insights into designing reliable AI systems in the new era of generative AI and agents, and how context engineering and dynamic planning shape the future of AI products.
TIMECODES
00:00 Career journey and early curiosity
04:25 Speech recognition at Microsoft
05:52 Recommendation systems and early agents at Dropbox
07:44 Joining NeuBird AI
12:01 Defining agents and LLM orchestration
16:11 Agent planning strategies
18:23 Agent implementation approaches
22:50 Context engineering essentials
30:27 RAG evolution in agent systems
37:39 RAG vs agent use cases
40:30 Dynamic planning in AI assistants
43:00 AI productivity tools at Dropbox
46:00 Evaluating AI agents
53:20 Reliable tool usage challenges
58:17 Future of agents in engineering
Connect with Ranjitha
- LinkedIn - https://www.linkedin.com/in/ranjitha-gurunath-kulkarni
Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub: https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/
Have you ever thought about how generative AI is transforming the way large companies build digital products? In this episode, we talk with the Grupo Boticário team to understand how the company is combining technology and innovation to shape the future of beauty. We explore how GenAI has been driving the development of digital products and amplifying the work of analysts, product teams, and engineering with new tools. We go behind the scenes of the Semana de IA GB (GB AI Week), the lessons it brought to the business, and how GenAI is helping teams gain efficiency and depth in their analyses. If you want to understand how one of the country's largest beauty companies is shaping its product and engineering culture for the future, this episode is for you! Remember, you can find all the podcasts of the Data Hackers community on Spotify, iTunes, Google Podcast, Castbox, and many other platforms.
Guests:
- Bruno Fuzetti Penso - Senior Platform Manager
- Thayana Borba - Senior Manager of Digital Products
- João Alves De Oliveira Neto - Senior Manager of Data Products
Our Data Hackers panel:
- Paulo Vasconcellos — Co-founder of Data Hackers and Principal Data Scientist at Hotmart
- Monique Femme — Head of Community Management at Data Hackers
Grupo Boticário channels:
- GB LinkedIn
- GB jobs page
- GB Instagram
References:
- Development Platform (Alquimia)
- https://github.com/customer-stories/grupoboticario
- https://medium.com/gbtech/plataforma-do-desenvolvimento-grupo-botic%C3%A1rio-61b1aaddbc9b
- https://medium.com/gbtech/opentelemetry-na-nova-plataforma-de-integra%C3%A7%C3%A3o-350e744b6a5f
- https://aws.amazon.com/pt/solutions/case-studies/grupo-boticario-summit/
Replay Episode: Python, Anaconda, and the AI Frontier with Peter Wang. Peter Wang — Chief AI & Innovation Officer and Co-founder of Anaconda — is back on Making Data Simple! Known for shaping the open-source ecosystem and making Python a powerhouse, Peter dives into Anaconda's new AI incubator, the future of GenAI, and why Python isn't just "still a thing"… it's the thing. From branding and security to leadership and philosophy, this episode is a wild ride through the biggest opportunities (and risks) shaping AI today.
Timestamps:
01:27 Meet Peter Wang
05:10 Python or R?
05:51 Anaconda's Differentiation
07:08 Why the Name Anaconda
08:24 The AI Incubator
11:40 GenAI
14:39 Enter Python
16:08 Anaconda Commercial Services
18:40 Security
20:57 Common Points of Failure
22:53 Branding
24:50 watsonx Partnership
28:40 AI Risks
34:13 Getting Philosophical
36:13 China
44:52 Leadership Style
LinkedIn: linkedin.com/in/pzwang
Website: https://www.linkedin.com/company/anacondainc/, https://www.anaconda.com/
Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
--- Oliver Wise has been a data leader in local and federal government, as well as the private sector, and as GovEx’s new Executive Director, he’s betting on cities to lead the way to Gen AI-driven innovation.
--- Learn more about GovEx --- Fill out our listener survey
Summary
In this crossover episode of the AI Engineering Podcast, host Tobias Macey interviews Brijesh Tripathi, CEO of Flex AI, about revolutionizing AI engineering by removing DevOps burdens through "workload as a service". Brijesh shares his expertise from leading AI/HPC architecture at Intel and deploying supercomputers like Aurora, highlighting how access friction and idle infrastructure slow progress. Join them as they discuss Flex AI's innovative approach to simplifying heterogeneous compute, standardizing on consistent Kubernetes layers, and abstracting inference across various accelerators, allowing teams to iterate faster without wrestling with drivers, libraries, or cloud-by-cloud differences. Brijesh also shares insights into Flex AI's strategies for lifting utilization, protecting real-time workloads, and spanning the full lifecycle from fine-tuning to autoscaled inference, all while keeping complexity at bay.
Preamble
I hope you enjoy this crossover episode of the AI Engineering Podcast, another show that I run to act as your guide to the fast-moving world of building scalable and maintainable AI systems. As generative AI models have grown more powerful and are being applied to a broader range of use cases, the lines between data and AI engineering are becoming increasingly blurry. The responsibilities of data teams are being extended into the realm of context engineering, as well as designing and supporting new infrastructure elements that serve the needs of agentic applications. This episode is an example of the types of work that are not easily categorized into one or the other camp.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI engineering, streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Brijesh Tripathi about FlexAI, a platform offering a service-oriented abstraction for AI workloads.
Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what FlexAI is and the story behind it?
- What are some examples of the ways that infrastructure challenges contribute to friction in developing and operating AI applications?
- How do those challenges contribute to issues when scaling new applications/businesses that are founded on AI?
- There are numerous managed services and deployable operational elements for operationalizing AI systems. What are some of the main pitfalls that teams need to be aware of when determining how much of that infrastructure to own themselves?
- Orchestration is a key element of managing the data and model lifecycles of these applications. How does your approach of "workload as a service" help to mitigate some of the complexities in the overall maintenance of that workload?
- Can you describe the design and architecture of the FlexAI platform?
- How has the implementation evolved from when you first started working on it?
- For someone who is going to build on top of FlexAI, what are the primary interfaces and concepts that they need to be aware of?
- Can you describe the workflow of going from problem to deployment for an AI workload using FlexAI?
- One of the perennial challenges of making a well-integrated platform is that there are inevitably pre-existing workloads that don't map cleanly onto the assumptions of the vendor. What are the affordances and escape hatches that you have built in to allow partial/incremental adoption of your service?
- What are the elements of AI workloads and applications that you are explicitly not trying to solve for?
- What are the most interesting, innovative, or unexpected ways that you have seen FlexAI used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on FlexAI?
- When is FlexAI the wrong choice?
- What do you have planned for the future of FlexAI?
Contact Info
- LinkedIn
Parting Question
- From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
Links
- Flex AI
- Aurora Super Computer
- CoreWeave
- Kubernetes
- CUDA
- ROCm
- Tensor Processing Unit (TPU)
- PyTorch
- Triton
- Trainium
- ASIC == Application Specific Integrated Circuit
- SOC == System On a Chip
- Loveable
- FlexAI Blueprints
- Tenstorrent
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
In this episode, we talk with Michael Lanham, an AI and software innovator with over two decades of experience spanning game development, fintech, oil and gas, and agricultural tech. Michael shares his journey from building neural network-based games and evolutionary algorithms to writing influential books on AI agents and deep learning. He offers insights into the evolving AI landscape, practical uses of AI agents, and the future of generative AI in gaming and beyond.
TIMECODES
00:00 Michael Lanham's career journey and AI agent books
05:45 Publishing journey: AR, Pokémon Go, sound design, and reinforcement learning
10:00 Evolution of AI: evolutionary algorithms, deep learning, and agents
13:33 Evolutionary algorithms in prompt engineering and LLMs
18:13 AI agent books second edition and practical applications
20:57 AI agent workflows: minimalism, task breakdown, and collaboration
26:25 Collaboration and orchestration among AI agents
31:24 Tools and reasoning servers for agent communication
35:17 AI agents in game development and generative AI impact
38:57 Future of generative AI in gaming and immersive content
41:42 Coding agents, new LLMs, and local deployment
45:40 AI model trends and data scientist career advice
53:36 Cognitive testing, evaluation, and monitoring in AI
58:50 Publishing details and closing remarks
Connect with Micheal
LinkedIn - https://www.linkedin.com/in/micheal-lanham-189693123/

Connect with DataTalks.Club:
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/...
Check other upcoming events - https://lu.ma/dtc-events
GitHub - https://github.com/DataTalksClub
LinkedIn - / datatalks-club
Twitter - / datatalksclub
Website - https://datatalks.club/
What if AI could tap into live operational data — without ETL or RAG? In this episode, Deepti Srivastava, founder of Snow Leopard, reveals how her company is transforming enterprise data access with intelligent data retrieval, semantic intelligence, and a governance-first approach. Tune in for a fresh perspective on the future of AI and the startup journey behind it.
We explore how companies are revolutionizing their data access and AI strategies. Deepti Srivastava, founder of Snow Leopard, shares her insights on bridging the gap between live operational data and generative AI — and how it’s changing the game for enterprises worldwide. We dive into Snow Leopard’s innovative approach to data retrieval, semantic intelligence, and governance-first architecture.

04:54 Meeting Deepti Srivastava
14:06 AI with No ETL, No RAG
17:11 Snow Leopard's Intelligent Data Fetching
19:00 Live Query Challenges
21:01 Snow Leopard's Secret Sauce
22:14 Latency
23:48 Schema Changes
25:02 Use Cases
26:06 Snow Leopard's Roadmap
29:16 Getting Started
33:30 The Startup Journey
34:12 A Woman in Technology
36:03 The Contrarian View

🔗 LinkedIn: https://www.linkedin.com/in/thedeepti/
🔗 Website: https://www.snowleopard.ai/

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next.

The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Financial institutions are racing to harness the power of AI, but the path to implementation is filled with challenges. From feature engineering to model deployment, the technical complexities of AI adoption in finance require careful navigation of both technological and regulatory landscapes. How do you build AI systems that satisfy strict compliance requirements while still delivering business value? What skills should teams prioritize as AI tools become more accessible through natural language interfaces? With the pressure to reduce model development time from months to days, how can organizations maintain proper governance while still moving at the speed modern business demands?

Vijay is a seasoned analytics, product, and technology executive. As EVP of Global Solutions & Analytics at Experian, he runs the department that creates Experian's Ascend financial AI platform. Promoted multiple times in eight years, Vijay now leads a team of more than 70 at Experian. He is one of the youngest execs at Experian, believing strongly in understanding and accepting risk. He has built and run data, engineering, and IT teams, and created market-leading products.

In the episode, Richie and Vijay explore the impact of generative AI on the finance industry, the development of Experian's Ascend platform, the challenges of fraud prevention, education and compliance in AI deployment, and much more.

Links Mentioned in the Show:
Experian
Experian Ascend
Connect with Vijay
Course: Implementing AI Solutions in Business
Related Episode: How Generative AI is Transforming Finance with Andrew Reiskind, CDO at Mastercard
Rewatch RADAR AI
New to DataCamp?
Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
Brought to You By:

• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that allow engineering teams to measure the impact of their work. This toolkit is so valuable to so many teams that OpenAI, which was a huge user of Statsig, decided to acquire the company; the news was announced last week. Talk about validation! Check out Statsig.

• Linear – The system for modern product development. Here’s an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself.

—

The Pragmatic Engineer Podcast is back with the Fall 2025 season. Expect new episodes to be published on most Wednesdays going forward.

Code Complete is one of the most enduring books on software engineering. Steve McConnell wrote the 900-page handbook just five years into his career, capturing what he wished he’d known when starting out. Decades later, the lessons remain relevant, and Code Complete remains a best-seller.

In this episode, we talk about what has aged well, what needed updating in the second edition, and the broader career principles Steve has developed along the way. From his “career pyramid” model to his critique of “lily pad hopping,” and why periods of working in fast-paced, all-in environments can be so rewarding, the emphasis throughout is on taking ownership of your career and making deliberate choices.

We also discuss:

• Top-down vs. bottom-up design and why most engineers default to one approach
• Why rewriting code multiple times makes it better
• How taking a year off to write Code Complete crystallized key lessons
• The 3 areas software designers need to understand, and why focusing only on technology may be the most limiting
• And much more!

Steve rarely gives interviews, so I hope you enjoy this conversation, which we recorded in Seattle.
—

Timestamps

(00:00) Intro
(01:31) How and why Steve wrote Code Complete
(08:08) What code construction is and how it differs from software development
(11:12) Top-down vs. bottom-up design approach
(14:46) Why design documents frustrate some engineers
(16:50) The case for rewriting everything three times
(20:15) Steve’s career before and after Code Complete
(27:47) Steve’s career advice
(44:38) Three areas software designers need to understand
(48:07) Advice when becoming a manager, as a developer
(53:02) The importance of managing your energy
(57:07) Early Microsoft and why startups are a culture of intense focus
(1:04:14) What changed in the second edition of Code Complete
(1:10:50) AI’s impact on software development: Steve’s take
(1:17:45) Code reviews and GenAI
(1:19:58) Why engineers are becoming more full-stack
(1:21:40) Could AI be the exception to “no silver bullets?”
(1:26:31) Steve’s advice for engineers on building a meaningful career

—

The Pragmatic Engineer deepdives relevant for this episode:

• What changed in 50 years of computing
• The past and future of modern backend practices
• The Philosophy of Software Design – with John Ousterhout
• AI tools for software engineers, but without the hype – with Simon Willison (co-creator of Django)
• TDD, AI agents and coding – with Kent Beck

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
The manufacturing floor is undergoing a technological revolution with industrial AI at its center. From predictive maintenance to quality control, AI is transforming how products are designed, produced, and maintained. But implementing these technologies isn't just about installing sensors and software—it's about empowering your workforce to embrace new tools and processes. How do you overcome AI hesitancy among experienced workers? What skills should your team develop to make the most of these new capabilities? And with limited resources, how do you prioritize which AI applications will deliver the greatest impact for your specific manufacturing challenges? The answers might be simpler than you think.

Barbara Humpton is President and CEO of Siemens Corporation, responsible for strategy and engagement in Siemens’ largest market. Under her leadership, Siemens USA operates across all 50 states and Puerto Rico with 45,000 employees and generated $21.1 billion in revenue in fiscal year 2024. She champions the role of technology in expanding what’s humanly possible and is a strong advocate for workforce development, mentorship, and building sustainable work-life integration. Previously, she was President and CEO of Siemens Government Technologies, leading delivery of Siemens’ products and services to U.S. federal agencies. Before joining Siemens in 2011, she held senior roles at Booz Allen Hamilton and Lockheed Martin, where she oversaw programs in national security, biometrics, border protection, and critical infrastructure, including the FBI’s Next Generation Identification and TSA’s Transportation Workers’ Identification Credential.

Olympia Brikis is a seasoned technology and business leader with over a decade of experience in AI research. As the Technology and Engineering Director for Siemens' Industrial AI Research in the U.S., she leads AI strategy, technology roadmapping, and R&D for next-gen AI products.
Olympia has a strong track record in developing Generative AI products that integrate industrial and digital ecosystems, driving real-world business impact. She is a recognized thought leader with numerous patents and peer-reviewed publications in AI for manufacturing, predictive analytics, and digital twins. Olympia actively engages with executives, policymakers, and AI practitioners on AI's role in enterprise strategy and workforce transformation. With a background in Computer Science from LMU Munich and an MBA from Wharton, she bridges AI research, product strategy, and enterprise adoption, mentoring the next generation of AI leaders.

In the episode, Richie, Barbara, and Olympia explore the transformative power of AI in manufacturing, from predictive maintenance to digital twins, the role of industrial AI in enhancing productivity, the importance of empowering workers with new technology, real-world applications, overcoming AI hesitancy, and much more.

Links Mentioned in the Show:
Siemens Industrial AI Suite
Connect with Barbara and Olympia
Course: Implementing AI Solutions in Business
Related Episode: Master Your Inner Game to Avoid Burnout with Klaus Kleinfeld, Former CEO at Alcoa and Siemens
Rewatch RADAR AI where...
The line between human work and AI capabilities is blurring in today's business environment. AI agents are now handling autonomous tasks across customer support, data management, and sales prospecting with increasing sophistication. But how do you effectively integrate these agents into your existing workflows? What's the right approach to training and evaluating AI team members? With data quality being the foundation of successful AI implementation, how can you ensure your systems have the unified context they need while maintaining proper governance and privacy controls?

Karen Ng is the Head of Product at HubSpot, where she leads product strategy, design, and partnerships with the mission of helping millions of organizations grow better. Since joining in 2022, she has driven innovation across Smart CRM, Operations Hub, Breeze Intelligence, and the developer ecosystem, with a focus on unifying structured and unstructured data to make AI truly useful for businesses. Known for leading with clarity and “AI speed,” she pushes HubSpot to stay ahead of disruption and empower customers to thrive.

Previously, Karen held senior product leadership roles at Common Room, Google, and Microsoft. At Common Room, she built the product and data science teams from the ground up, while at Google she directed Android’s product frameworks like Jetpack and Jetpack Compose. During more than a decade at Microsoft, she helped shape the company’s .NET strategy and launched the Roslyn compiler platform. Recognized as a Product 50 Winner and recipient of the PM Award for Technical Strategist, she also advises and invests in high-growth technology companies.

In the episode, Richie and Karen explore the evolving role of AI agents in sales, marketing, and support, the distinction between chatbots, co-pilots, and autonomous agents, the importance of data quality and context, the concept of hybrid teams, the future of AI-driven business processes, and much more.
Links Mentioned in the Show:
Hubspot Breeze Agents
Connect with Karen
Webinar: Pricing & Monetizing Your AI Products with Sam Lee, VP of Pricing Strategy & Product Operations at HubSpot
Related Episode: Enterprise AI Agents with Jun Qian, VP of Generative AI Services at Oracle
Rewatch RADAR AI

New to DataCamp?
Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business