talk-data.com

Topic: GenAI (Generative AI)

Tags: ai, machine_learning, llm

1517 tagged

Activity Trend: 192 peak/qtr, 2020-Q1 to 2026-Q1

Activities

1517 activities · Newest first

What are the hidden dangers lurking beneath the surface of vibe coded apps and hyped-up CEO promises? And what is Influence Ops? I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky. We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term. Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized globally scaled Influence Ops to manipulate public opinion and democracy itself.

Find Disesdi Susanna Cox:
Substack: https://disesdi.substack.com/
Socials (LinkedIn, X, etc.): @Disesdi

KEY MOMENTS:
00:26 - Who is Disesdi Susanna Cox?
03:52 - What are Red, Blue, and Purple Teams in Security?
07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash
12:32 - How GenAI Security is Different (and Worse) than Classical ML
14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the "T" Dating App
18:34 - The Unethical Problem with "Vibe Coding"
24:32 - "Vibe Companies": The Gaslighting from CEOs About AI
30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing
33:13 - Deconstructing the "Woke AI" Panic
44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops
52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn

Machine Learning and AI for Absolute Beginners

Explore AI and Machine Learning fundamentals, tools, and applications in this beginner-friendly guide. Learn to build models in Python and understand AI ethics.

Key Features:
Covers AI fundamentals, Machine Learning, and Python model-building
Provides a clear, step-by-step guide to learning AI techniques
Explains ethical considerations and the future role of AI in society

Book Description:
This book is an ideal starting point for anyone interested in Artificial Intelligence and Machine Learning. It begins with the foundational principles of AI, offering a deep dive into its history, building blocks, and the stages of development. Readers will explore key AI concepts and gradually transition to practical applications, starting with machine learning algorithms such as linear regression and k-nearest neighbors. Through step-by-step Python tutorials, the book helps readers build and implement models with hands-on experience. As the book progresses, readers will dive into advanced AI topics like deep learning, natural language processing (NLP), and generative AI. Topics such as recommender systems and computer vision demonstrate the real-world applications of AI technologies. Ethical considerations and privacy concerns are also addressed, providing insight into the societal impact of these technologies. By the end of the book, readers will have a solid understanding of both the theory and practice of AI and Machine Learning. The final chapters provide resources for continued learning, ensuring that readers can continue to grow their AI expertise beyond the book.

What you will learn:
Understand key AI and ML concepts and how they work together
Build and apply machine learning models from scratch
Use Python to implement AI techniques and improve model performance
Explore essential AI tools and frameworks used in the industry
Learn the importance of data and data preparation in AI development
Grasp the ethical considerations and the future of AI in work

Who this book is for:
This book is ideal for beginners with no prior knowledge of AI or Machine Learning. It is tailored to those who wish to dive into these topics but are not yet familiar with the terminology or techniques. There are no prerequisites, though basic programming knowledge can be helpful. The book caters to a wide audience, from students and hobbyists to professionals seeking to transition into AI roles. Readers should be enthusiastic about learning and exploring AI applications for the future.
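To make the blurb concrete, here is a minimal sketch of the kind of model-building the book describes: linear regression and k-nearest neighbors in Python with scikit-learn. The synthetic data and parameter choices are illustrative assumptions, not taken from the book.

```python
# A toy sketch (assumed, not from the book) of the two algorithms named above.
from sklearn.datasets import make_regression, make_classification
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Linear regression on a synthetic regression problem.
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("Regression R^2:", reg.score(X_test, y_test))

# k-nearest neighbors on a synthetic classification problem.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("kNN accuracy:", knn.score(X_test, y_test))
```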

Combining LLMs with enterprise knowledge bases is creating powerful new agents that can transform business operations. These systems are dramatically improving on traditional chatbots by understanding context, following conversations naturally, and accessing up-to-date information. But how do you effectively manage the knowledge that powers these agents? What governance structures need to be in place before deployment? And as we look toward a future with physical AI and robotics, what fundamental computing challenges must we solve to ensure these technologies enhance rather than complicate our lives?

Jun Qian is an accomplished technology leader with extensive experience in artificial intelligence and machine learning. Currently serving as Vice President of Generative AI Services at Oracle since May 2020, Jun founded and leads the Engineering and Science group, focusing on the creation and enhancement of Generative AI services and AI Agents. Previously held roles include Vice President of AI Science and Development at Oracle, Head of AI and Machine Learning at Sift, and Principal Group Engineering Manager at Microsoft, where Jun co-founded Microsoft Power Virtual Agents. Jun's career also includes significant contributions as the Founding Manager of Amazon Machine Learning at AWS and as a Principal Investigator at Verizon.

In the episode, Richie and Jun explore the evolution of AI agents, the unique features of ChatGPT, the challenges and advancements in chatbot technology, the importance of data management and security in AI, and the future of AI in computing and robotics, and much more.

Links Mentioned in the Show:
Oracle
Connect with Jun
Course: Introduction to AI Agents
Jun at DataCamp RADAR
Related Episode: A Framework for GenAI App and Agent Development with Jerry Liu, CEO at LlamaIndex
Rewatch RADAR AI

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
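As a rough illustration of the pattern discussed in this episode (not Oracle's implementation), the sketch below grounds an LLM answer in a small knowledge base: retrieve the most relevant entries, then hand them to the model as context. The retrieval here is plain TF-IDF, and call_llm is a hypothetical placeholder for whichever model endpoint you actually use.

```python
# Minimal retrieval-then-generate sketch; knowledge_base contents and call_llm are assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise customers.",
    "Password resets require verification via the registered email.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base entries most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [knowledge_base[i] for i in top]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; here it just echoes the prompt."""
    return prompt

question = "How long does a refund take?"
context = "\n".join(retrieve(question))
print(call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```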

What happens when AI hype collides with enterprise reality? Tim Leers, Global Generative & Agentic AI Lead at Dataroots, pulls back the curtain on what's actually working—and what's not—in enterprise AI deployment today.

We begin by examining why companies like Klarna publicly announced replacing customer service teams with AI, only to quietly backtrack months later when quality suffered. This pattern of inflated expectations followed by reality checks has become common, creating what Tim calls "AI theater" – impressive demos with minimal business impact.

The conversation tackles the often misunderstood concept of "agentic AI." Rather than viewing it as a specific technology, Tim frames agency as fundamentally about delegated authority – the ability to trust AI systems with meaningful responsibilities. However, this delegation requires contextual intelligence—providing the right data at the right time—which most organizations struggle to implement effectively.

"Models are commodities, data is your moat," Tim explains, arguing that proprietary business context will remain the key differentiator even as AI models continue advancing. This perspective challenges the conventional wisdom that focuses primarily on model capabilities rather than data infrastructure.

Perhaps most valuably, Tim outlines three pillars for successful enterprise AI: contextual intelligence, continuous improvement (designing systems that evolve with changing business contexts), and human-AI collaboration. This framework shifts focus from technology deployment to sustainable business value creation.

The discussion concludes with eight practical lessons for organizations implementing generative AI, from avoiding the temptation to build proprietary models to recognizing that teaching employees to prompt effectively isn't sufficient for enterprise-wide adoption. Each lesson reinforces a central theme: successful AI implementation requires designing for change rather than building rigid systems that quickly become obsolete.

Whether you're a technical leader evaluating vendor claims or a business executive trying to separate AI reality from fantasy, this episode provides the practical guidance needed to move beyond the hype cycle toward meaningful implementation.

Data science continues to evolve in the age of AI, but is it still the 'sexiest job of the 21st century'? While generative AI has transformed the landscape, it hasn't replaced data scientists—instead, it's created more demand for their skills. Data professionals now incorporate AI into their workflows to boost efficiency, analyze data faster, and communicate insights more effectively. But with these technological advances come questions: How should you adapt your skills to stay relevant? What's the right balance between traditional data science techniques and new AI capabilities? And as roles like analytics engineer and machine learning engineer emerge, how do you position yourself for success in this rapidly changing field?

Dawn Choo is the Co-Founder of Interview Master, a platform designed to streamline technical interview preparation. With a foundation in data science, financial analysis, and product strategy, she brings a cross-disciplinary lens to building data-driven tools that improve hiring outcomes. Her career spans roles at leading tech firms, including ClassDojo, Patreon, and Instagram, where she delivered insights to support product development and user engagement. Earlier, Dawn held analytical and engineering positions at Amazon and Bank of America, focusing on business intelligence, financial modeling, and risk analysis. She began her career at Facebook as a marketing analyst and continues to be a visible figure in the data science community—offering practical guidance to job seekers navigating technical interviews and career transitions.

In the episode, Richie and Dawn explore the evolving role of data scientists in the age of AI, the impact of generative AI on workflows, the importance of foundational skills, and the nuances of the hiring process in data science. They also discuss the integration of AI in products and the future of personalized AI models, and much more.

Links Mentioned in the Show:
Interview Master
Connect with Dawn
Dawn’s Newsletter: Ask Data Dawn
Get Certified: AI Engineer for Data Scientists Associate Certification
Related Episode: How To Get Hired As A Data Or AI Engineer with Deepak Goyal, CEO & Founder at Azurelib Academy
Rewatch RADAR AI

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

Old problems. New tools. In this episode, we show how you can use Generative AI (GenAI) to help you analyze unstructured data and boost your productivity. We brought together Felipe Fiamozzini and Lara Marinelli, specialists from Bain & Company who live this work day to day, to explore the challenges that existed before GenAI arrived, the recommended methods and frameworks, and how the data analysis cycle is being adapted with these new technologies. We also discuss the role of leadership in this transformation and share practical tips for anyone starting out in data who wants to develop GenAI skills. Join us to learn how to extract value from unstructured data with the help of GenAI! As a reminder, you can find all the Data Hackers community podcasts on Spotify, iTunes, Google Podcast, Castbox, and many other platforms.

Guests:
Felipe Fiamozzini - Partner at Bain & Company focused on data and AI
Lara Marinelli - Machine Learning Engineering Manager

Our Data Hackers Panel:
Paulo Vasconcellos — Co-founder of Data Hackers and Principal Data Scientist at Hotmart.
Gabriel Lages - Co-founder of Data Hackers and Data & Analytics Sr. Director at Hotmart.

The structured data that powers business decisions is more complex than the sequences processed by traditional AI models. Enterprise databases with their interconnected tables of customers, products, and transactions form intricate graphs that contain valuable predictive signals. But how can we effectively extract insights from these complex relationships without extensive manual feature engineering? Graph transformers are revolutionizing this space by treating databases as networks and learning directly from raw data. What if you could build models in hours instead of months while achieving better accuracy? How might this technology change the role of data scientists, allowing them to focus on business impact rather than data preparation? Could this be the missing piece that brings the AI revolution to predictive modeling?

Jure Leskovec is a Professor of Computer Science at Stanford University, where he is affiliated with the Stanford AI Lab, the Machine Learning Group, and the Center for Research on Foundation Models. Previously, he served as Chief Scientist at Pinterest and held a research role at the Chan Zuckerberg Biohub. He is also a co-founder of Kumo.AI, a machine learning startup. Leskovec has contributed significantly to the development of Graph Neural Networks and co-authored PyG, a widely-used library in the field. Research from his lab has supported public health efforts during the COVID-19 pandemic and informed product development at companies including Facebook, Pinterest, Uber, YouTube, and Amazon. His work has received several recognitions, including the Microsoft Research Faculty Fellowship (2011), the Okawa Research Award (2012), the Alfred P. Sloan Fellowship (2012), the Lagrange Prize (2015), and the ICDM Research Contributions Award (2019). His research spans social networks, machine learning, data mining, and computational biomedicine, with a focus on drug discovery. He has received 12 best paper awards and five 10-year Test of Time awards at leading academic conferences.

In the episode, Richie and Jure explore the need for a foundation model for enterprise data, the limitations of current AI models in predictive tasks, the potential of graph transformers for business data, and the transformative impact of relational foundation models on machine learning workflows, and much more.

Links Mentioned in the Show:
Jure’s Publications
Kumo AI
Connect with Jure
Course - Transformer Models with PyTorch
Related Episode: High Performance Generative AI Applications with Ram Sriharsha, CTO at Pinecone
Rewatch RADAR AI

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
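For a sense of how a relational database can be treated as a network, here is a minimal sketch using PyG (torch_geometric), the library mentioned above: customer and product tables become node types, and a transactions table becomes the edges between them. The table names, sizes, and random features are hypothetical, not from the episode.

```python
# Toy sketch: relational tables -> heterogeneous graph with PyG's HeteroData.
import torch
from torch_geometric.data import HeteroData

data = HeteroData()

# Hypothetical tables: one feature row per customer and per product.
data["customer"].x = torch.randn(1000, 16)
data["product"].x = torch.randn(200, 16)

# A transactions table becomes edges: each row links a customer to a product
# via foreign keys, so the database schema itself defines the graph structure.
customer_id = torch.randint(0, 1000, (5000,))
product_id = torch.randint(0, 200, (5000,))
data["customer", "bought", "product"].edge_index = torch.stack([customer_id, product_id])

# A heterogeneous GNN (e.g. GraphSAGE converted with torch_geometric.nn.to_hetero)
# can then learn predictive signals directly from this structure instead of from
# hand-engineered aggregate features.
print(data)
```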

What does AI transformation really look like inside a 180-year-old company? In this episode of Data Unchained, we are joined by Younes Hairej, founder and CEO of Aokumo Inc, a trailblazing company helping enterprises in Japan and beyond bridge the gap between business intent and AI execution. From deploying autonomous AI agents that eliminate the need for dashboards and YAML, to revitalizing siloed, analog systems in manufacturing, Younes shares what it takes to modernize legacy infrastructure without starting over. Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US

#ArtificialIntelligence #EnterpriseAI #AITransformation #Kubernetes #DevOps #GenAI #DigitalTransformation #OpenSourceAI #DataInfrastructure #BusinessInnovation #AIInJapan #LegacyModernization #MetadataStrategy #AIOrchestration #CloudNative #AIAutomation #DataGovernance #MLOps #IntelligentAgents #TechLeadership

Hosted on Acast. See acast.com/privacy for more information.

She’s the legal powerhouse behind IBM’s AI ethics strategy — and she makes law fun. In this encore episode, we revisit a fan favorite: Christina Montgomery, formerly IBM’s Chief Privacy and Trust Officer, now Chief Privacy and Trust Officer, GM. From guarding the gates of generative AI risk to advising on global regulation, Christina gives us a front-row seat to what’s now, what’s next, and what needs rethinking when it comes to trust, synthetic data, and the future of AI law.

📍 Timestamps:
• 01:00 Christina Montgomery!
• 04:36 My Daughter and the Bar
• 08:36 Chief Privacy and Trust Officer
• 11:37 Keeping IBM Out of Trouble
• 13:34 Client Conversations
• 16:23 Where to Be Bullish and Bearish
• 20:52 The Risks of LLMs
• 24:21 NIST and AI Alliance
• 28:26 AI Regulation
• 36:13 Synthetic Data
• 38:00 Misconceptions
• 40:07 Worries
• 41:27 The Path to AI
• 43:13 Aspiring Lawyers

🔗 Christina on LinkedIn
🌐 IBM AI Ethics

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Generative AI for Software Development

In just a few short years, AI has transformed software development, and snazzy new tools continue to arrive, with no let-up in sight. How, as a software engineer, product builder, or CTO, do you keep up? This practical book is the result of Sergio Pereira's mission to test every AI tool he could find and provide practitioners with much-needed guidance through the commotion. Generative AI for Software Development focuses on AI tool comparisons, practical workflows, and real-world case studies, with each chapter encompassing critical evaluations of the tools, their use cases, and their limitations. While product reviews are always relevant, the book goes further and delivers to readers a coherent framework for evaluating the tools and workflows of the future, which will continue to complicate the work of software development.

Learn how code generation and autocompletion assistants are reshaping the developer experience
Discover a consistent method for rating code-generation tools based on real-world coding challenges
Explore the GenAI tools transforming UI/UX design and frontend development
Learn how AI is streamlining code reviews and bug detection
Review tools that are simplifying software testing and QA
Explore AI for documentation and technical writing
Understand how modern LLMs have redefined what chatbots can do

Supported by Our Partners:
• WorkOS — The modern identity platform for B2B SaaS.
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar — Code quality and code security for ALL code.

Steve Yegge is known for his writing and “rants”, including the famous “Google Platforms Rant” and the evergreen “Get that job at Google” post. He spent 7 years at Amazon and 13 at Google, as well as some time at Grab before briefly retiring from tech. Now out of retirement, he’s building AI developer tools at Sourcegraph—drawn back by the excitement of working with LLMs. He’s currently writing the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond.

In this episode of The Pragmatic Engineer, I sat down with Steve in Seattle to talk about why Google consistently failed at building platforms, why AI coding feels easy but is hard to master, and why a new role, the AI Fixer, is emerging. We also dig into why he’s so energized by today’s AI tools, and how they’re changing the way software gets built.

We also discuss:
• The “interview anti-loop” at Google and the problems with interviews
• An inside look at how Amazon operated in the early days before microservices
• What Steve liked about working at Grab
• Reflecting on the Google platforms rant and why Steve thinks Google is still terrible at building platforms
• Why Steve came out of retirement
• The emerging role of the “AI Fixer” in engineering teams
• How AI-assisted coding is deceptively simple, but extremely difficult to steer
• Steve’s advice for using AI coding tools and overcoming common challenges
• Predictions about the future of developer productivity
• A case for AI creating a real meritocracy
• And much more!

Timestamps:
(00:00) Intro
(04:55) An explanation of the interview anti-loop at Google and the shortcomings of interviews
(07:44) Work trials and why entry-level jobs aren’t posted for big tech companies
(09:50) An overview of the difficult process of landing a job as a software engineer
(15:48) Steve’s thoughts on Grab and why he loved it
(20:22) Insights from the Google platforms rant that was picked up by TechCrunch
(27:44) The impact of the Google platforms rant
(29:40) What Steve discovered about print ads not working for Google
(31:48) What went wrong with Google+ and Wave
(35:04) How Amazon has changed and what Google is doing wrong
(42:50) Why Steve came out of retirement
(45:16) Insights from “the death of the junior developer” and the impact of AI
(53:20) The new role Steve predicts will emerge
(54:52) Changing business cycles
(56:08) Steve’s new book about vibe coding and Gergely’s experience
(59:24) Reasons people struggle with AI tools
(1:02:36) What will developer productivity look like in the future
(1:05:10) The cost of using coding agents
(1:07:08) Steve’s advice for vibe coding
(1:09:42) How Steve used AI tools to work on his game Wyvern
(1:15:00) Why Steve thinks there will actually be more jobs for developers
(1:18:29) A comparison between game engines and AI tools
(1:21:13) Why you need to learn AI now
(1:30:08) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• The full circle of developer productivity with Steve Yegge
• Inside Amazon’s engineering culture
• Vibe coding as a software engineer
• AI engineering in the real world
• The AI Engineering stack
• Inside Sourcegraph’s engineering culture

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe