The AI landscape is evolving at breakneck speed, with new capabilities emerging quarterly that redefine what's possible. For professionals across industries, this creates a constant need to reassess workflows and skills. How do you stay relevant when the technology keeps leapfrogging itself? What happens to traditional roles when AI can increasingly handle complex tasks that once required specialized expertise? With product-market fit becoming a moving target and new positions like forward-deployed engineers emerging, understanding how to navigate this shifting terrain is crucial. The winners won't just be those who adopt AI—but those who can continuously adapt as it evolves. Tomasz Tunguz is a General Partner at Theory Ventures, a $235m early-stage venture capital firm. He blogs at tomtunguz.com & co-authored Winning with Data. He has worked or works with Looker, Kustomer, Monte Carlo, Dremio, Omni, Hex, Spot, Arbitrum, Sui & many others. He was previously the product manager for Google's social media monetization team, including the Google-MySpace partnership, and managed the launches of AdSense into six new markets in Europe and Asia. Before Google, Tunguz developed systems for the Department of Homeland Security at Appian Corporation. In the episode, Richie and Tom explore the rapid investment in AI, the evolution of AI models like Gemini 3, the role of AI agents in productivity, the shifting job market, the impact of AI on customer success and product management, and much more. Links Mentioned in the Show: Theory Ventures • Connect with Tom • Tom's Blog • Gavin Baker on Medium • AI-Native Course: Intro to AI for Work • Related Episode: Data & AI Trends in 2024, with Tom Tunguz, General Partner at Theory Ventures • Rewatch RADAR AI. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
Topic: Cyber Security (214 items tagged)
Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. Statsig are helping make the first-ever Pragmatic Summit a reality. Join me and 400 other top engineers and leaders on 11 February, in San Francisco for a special one-day event. Reserve your spot here. • Linear — The system for modern product development. Engineering teams today move much faster, thanks to AI. Because of this, coordination increasingly becomes a problem. This is where Linear helps fast-moving teams stay focused. Check out Linear. — As software engineers, what should we know about writing secure code? Johannes Dahse is the VP of Code Security at Sonar and a security expert with 20 years of industry experience. In today’s episode of The Pragmatic Engineer, he joins me to talk about what security teams actually do, what developers should own, and where real-world risk enters modern codebases. We cover dependency risk, software composition analysis, CVEs, dynamic testing, and how everyday development practices affect security outcomes. Johannes also explains where AI meaningfully helps, where it introduces new failure modes, and why understanding the code you write and ship remains the most reliable defense. If you build and ship software, this episode is a practical guide to thinking about code security under real-world engineering constraints. — Timestamps (00:00) Intro (02:31) What is penetration testing? (06:23) Who owns code security: devs or security teams? (14:42) What is code security? (17:10) Code security basics for devs (21:35) Advanced security challenges (24:36) SCA testing (25:26) The CVE Program (29:39) The State of Code Security report (32:02) Code quality vs security (35:20) Dev machines as a security vulnerability (37:29) Common security tools (42:50) Dynamic security tools (45:01) AI security reviews: what are the limits? (47:51) AI-generated code risks (49:21) More code: more vulnerabilities (51:44) AI’s impact on code security (58:32) Common misconceptions of the security industry (1:03:05) When is security “good enough?” (1:05:40) Johannes’s favorite programming language — The Pragmatic Engineer deepdives relevant for this episode: • What is Security Engineering? • Mishandled security vulnerability in Next.js • Okta Schooled on Its Security Practices — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
In this episode of Data Unchained we sit down with Ellison Anne Williams, Founder and CEO of Enveil, to explore one of the most important questions in modern technology: how do you protect your data from AI? In this conversation, Ellison Anne breaks down how data can be used securely across environments you do not own, trust, or control, why AI models silently leak sensitive information, and how encrypted computation is transforming the future of AI, cybersecurity, finance, and national security. Learn how privacy-enhancing technologies are reshaping enterprise data protection, how secure AI evaluation works, and what technologies organizations must adopt to stay ahead of the next wave of cyber risk. Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US Hosted on Acast. See acast.com/privacy for more information.
Today, we’re joined by Chris McHenry, Chief Product Officer at Aviatrix, a cloud-native network security company. We talk about: • Prerequisites to driving operational efficiency with agentic AI • Bridging the gap between security & engineering so organizations can go fast & be secure • What’s required in order for agentic AI to create a magical moment • With cloud powering so much of our society, the need to get security right • The security challenges introduced by agentic AI apps, including new attack vectors
In this episode, Conor and Bryce record live from NDC TechTown in Norway! We interview Vittorio Romeo and JF Bastien about C++, training, their talks and more! Link to Episode 259 on Website. Discuss this episode, leave a comment, or ask a question (on GitHub). Socials: ADSP: The Podcast: Twitter • Conor Hoekstra: Twitter | BlueSky | Mastodon • Bryce Adelstein Lelbach: Twitter. About the Guests: Vittorio is a passionate C++ expert with over a decade of professional and personal experience. His expertise covers library development, high-performance financial backends, game development, open-source contributions, and active participation in ISO C++ standardization. He is the coauthor of "Embracing Modern C++ Safely" and is a speaker at over 25 international conferences. JF Bastien has worked on hardware, compilers, security, performance, web browsers, and airplanes. As chair of the C++ language evolution working group and co-designer of WebAssembly, his contributions have helped shape modern software development. Show Notes: Date Recorded: 2025-09-24 • Date Released: 2025-11-07 • camomilla by Vittorio Romeo • romeo.training • Roku rostd • ADSP Episode 136: 🇬🇧 C++ On Sea Live 🇬🇧 CppCast, TLB HIT & Two's Complement! • TLB.hit • JAX • OpenXLA • [LATTE '22] Chris Leary: X-istentialism: Supercomputers, Silicon Atoms, and the Science Between! • Guest Lecture - XLS (Chris Leary) • Project Denver • Intel pays NVIDIA $1.5B • NDC TechTown JF Talk • (char)0 = 0; - What Does the C++ Programmer Intend With This Code? - JF Bastien - C++ on Sea 2023 • Keynote: Safety and Security: The Future of C++ - JF Bastien - CppNow 2023 • All the Safeties: Safety in C++ - Sean Parent - CppNow 2023 • NDC TechTown Vittorio Romeo Talk • More Speed & Simplicity: Practical Data-Oriented Design in C++ - Vittorio Romeo - CppCon 2025 • CppCon 2014: Mike Acton "Data-Oriented Design and C++" • Intro Song Info: Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic Creative Commons — Attribution 3.0 Unported — CC BY 3.0 Free Download / Stream: http://bit.ly/l-miss-you Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
In this episode of Data Skeptic's Recommender Systems series, Kyle sits down with Aditya Chichani, a senior machine learning engineer at Walmart, to explore the darker side of recommendation algorithms. The conversation centers on shilling attacks—a form of manipulation where malicious actors create multiple fake profiles to game recommender systems, either to promote specific items or sabotage competitors. Aditya, who researched these attacks during his undergraduate studies at SPIT before completing his master's in computer science with a data science specialization at UC Berkeley, explains how these vulnerabilities emerge particularly in collaborative filtering systems. From promoting a friend's ska band on Spotify to inflating product ratings on e-commerce platforms, shilling attacks represent a significant threat in an industry where approximately 4% of reviews are fake, translating to $800 billion in annual sales in the US alone. The discussion delves deep into collaborative filtering, explaining both user-user and item-item approaches that create similarity matrices to predict user preferences. However, these systems face various shilling attacks of increasing sophistication: random attacks use minimal information with average ratings, while segmented attacks strategically target popular items (like Taylor Swift albums) to build credibility before promoting target items. Bandwagon attacks focus on highly popular items to connect with genuine users, and average attacks leverage item rating knowledge to appear authentic. User-user collaborative filtering proves particularly vulnerable, requiring as few as 500 fake profiles to impact recommendations, while item-item filtering demands significantly more resources. Aditya addresses detection through machine learning techniques that analyze behavioral patterns using methods like PCA to identify profiles with unusually high correlation and suspicious rating consistency. However, this remains an evolving challenge as attackers adapt strategies, now using large language models to generate more authentic-seeming fake reviews. His research with the MovieLens dataset tested detection algorithms against synthetic attacks, highlighting how these concerns extend to modern e-commerce systems. While companies rarely share attack and detection data publicly to avoid giving attackers advantages, academic research continues advancing both offensive and defensive strategies in recommender systems security.
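To make the mechanics above concrete, here is a minimal sketch of a user-user collaborative filter and a profile-injection (shilling) attack in Python. Everything in it is invented for illustration (the tiny ratings matrix, the fake profiles, the target item); it is not code from the episode or from Aditya's research, and real recommenders work on far larger, sparser matrices with more robust similarity measures and detection logic.

```python
# Minimal illustration of user-user collaborative filtering and a shilling
# (profile-injection) attack. All data here is made up for demonstration.
import numpy as np

def predict_rating(ratings, target_user, target_item):
    """Predict one user's rating for one item from similar users' ratings."""
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[target_user])
    sims = ratings @ ratings[target_user] / np.where(norms == 0, 1, norms)
    sims[target_user] = 0                      # ignore self-similarity
    rated = ratings[:, target_item] > 0        # only users who rated the item
    weights = sims * rated
    if weights.sum() == 0:
        return 0.0
    return float(weights @ ratings[:, target_item] / weights.sum())

# Rows = users, columns = items; 0 means "not rated", ratings are 1-5.
genuine = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [1, 0, 5, 4, 2],
    [0, 1, 4, 5, 2],
], dtype=float)

TARGET_USER, TARGET_ITEM = 0, 4  # predict user 0's rating for the last item
print("before attack:", round(predict_rating(genuine, TARGET_USER, TARGET_ITEM), 2))

# Shilling attack: fake profiles mimic popular-item ratings (to look similar
# to genuine users) and give the target item the maximum score.
fake_profile = np.array([5, 4, 0, 0, 5], dtype=float)
attacked = np.vstack([genuine, np.tile(fake_profile, (3, 1))])
print("after attack: ", round(predict_rating(attacked, TARGET_USER, TARGET_ITEM), 2))
```

Running it shows the predicted rating for the target item jumping from 2.0 to about 4.55 after injecting only three fake profiles, which is the kind of vulnerability in user-user filtering that the detection techniques discussed in the episode (for example PCA over rating profiles) aim to catch.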
The promise of AI in enterprise settings is enormous, but so are the privacy and security challenges. How do you harness AI's capabilities while keeping sensitive data protected within your organization's boundaries? Private AI—using your own models, data, and infrastructure—offers a solution, but implementation isn't straightforward. What governance frameworks need to be in place? How do you evaluate non-deterministic AI systems? When should you build in-house versus leveraging cloud services? As data and software teams evolve in this new landscape, understanding the technical requirements and workflow changes is essential for organizations looking to maintain control over their AI destiny. Manasi Vartak is Chief AI Architect and VP of Product Management (AI Platform) at Cloudera. She is a product and AI leader with more than a decade of experience at the intersection of AI infrastructure, enterprise software, and go-to-market strategy. At Cloudera, she leads product and engineering teams building low-code and high-code generative AI platforms, driving the company’s enterprise AI strategy and enabling trusted AI adoption across global organizations. Before joining Cloudera through its acquisition of Verta, Manasi was the founder and CEO of Verta, where she transformed her MIT research into enterprise-ready ML infrastructure. She scaled the company to multi-million ARR, serving Fortune 500 clients in finance, insurance, and capital markets, and led the launch of enterprise MLOps and GenAI products used in mission-critical workloads. Manasi earned her PhD in Computer Science from MIT, where she pioneered model management systems such as ModelDB — foundational work that influenced the development of tools like MLflow. Earlier in her career, she held research and engineering roles at Twitter, Facebook, Google, and Microsoft. In the episode, Richie and Manasi explore AI's role in financial services, the challenges of AI adoption in enterprises, the importance of data governance, the evolving skills needed for AI development, the future of AI agents, and much more. Links Mentioned in the Show: Cloudera • Cloudera Evolve Conference • Cloudera Agent Studio • Connect with Manasi • Course: Introduction to AI Agents • Related Episode: RAG 2.0 and The New Era of RAG Agents with Douwe Kiela, CEO at Contextual AI & Adjunct Professor at Stanford University • Rewatch RADAR AI. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
Summary In this episode of the Data Engineering Podcast Matt Topper, president of UberEther, talks about the complex challenge of identity, credentials, and access control in modern data platforms. With the shift to composable ecosystems, integration burdens have exploded, fracturing governance and auditability across warehouses, lakes, files, vector stores, and streaming systems. Matt shares practical solutions, including propagating user identity via JWTs, externalizing policy with engines like OPA/Rego and Cedar, and using database proxies for native row/column security. He also explores catalog-driven governance, lineage-based label propagation, and OpenTDF for binding policies to data objects. The conversation covers machine-to-machine access, short-lived credentials, workload identity, and constraining access by interface choke points, as well as lessons from Zanzibar-style policy models and the human side of enforcement. Matt emphasizes the need for trust composition - unifying provenance, policy, and identity context - to answer questions about data access, usage, and intent across the entire data path.
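The pattern Matt describes, carrying the end user's identity (for example as JWT claims) down to the query layer and turning an externalized policy decision into row and column constraints, is easier to picture with a toy sketch. The snippet below is a plain-Python illustration of the shape only: the claim names, the "analyst" role, and the filtering rules are all invented, and a real deployment would verify a signed JWT and delegate the decision to a policy engine such as OPA/Rego or Cedar rather than hard-coding it.

```python
# Toy illustration of identity-aware row/column filtering at the data layer.
# Claims, roles, and rules are invented; a real system verifies a signed JWT
# and asks a policy engine (e.g. OPA/Rego or Cedar) for the decision.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Decision:
    allowed_columns: Set[str]
    row_filter: Callable[[dict], bool]

def authorize(claims: dict) -> Decision:
    """Turn identity claims into a column mask and a row-level predicate."""
    if "analyst" in claims.get("roles", []):
        # Analysts: no raw PII columns, and only rows from their own region.
        return Decision(
            allowed_columns={"order_id", "amount", "region"},
            row_filter=lambda row: row["region"] == claims.get("region"),
        )
    # Default: deny everything.
    return Decision(allowed_columns=set(), row_filter=lambda row: False)

def query(rows, claims):
    """Apply the decision while serving the query, not after the fact."""
    decision = authorize(claims)
    return [
        {k: v for k, v in row.items() if k in decision.allowed_columns}
        for row in rows
        if decision.row_filter(row)
    ]

orders = [
    {"order_id": 1, "amount": 120.0, "region": "EU", "customer_email": "a@example.com"},
    {"order_id": 2, "amount": 80.0, "region": "US", "customer_email": "b@example.com"},
]
# In practice these claims come from a verified JWT on the request.
claims = {"sub": "user-42", "roles": ["analyst"], "region": "EU"}
print(query(orders, claims))  # [{'order_id': 1, 'amount': 120.0, 'region': 'EU'}]
```

The point of the shape is that the decision about which columns and rows are visible is computed from identity context before data leaves the platform, which is what makes access enforceable and auditable along the data path rather than only at the application edge.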
Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management. Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI Engineering, Streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect. Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details. Composable data infrastructure is great, until you spend all of your time gluing it together. Bruin is an open source framework, driven from the command line, that makes integration a breeze. Write Python and SQL to handle the business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. Bruin allows you to build end-to-end data workflows using AI, has connectors for hundreds of platforms, and helps data teams deliver faster. Teams that use Bruin need less engineering effort to process data and benefit from a fully integrated data platform. Go to dataengineeringpodcast.com/bruin today to get started. And for dbt Cloud customers, they'll give you $1,000 credit to migrate to Bruin Cloud. Your host is Tobias Macey and today I'm interviewing Matt Topper about the challenges of managing identity and access controls in the context of data systems. Interview: Introduction • How did you get involved in the area of data management? • The data ecosystem is a uniquely challenging space for creating and enforcing technical controls for identity and access control. What are the key considerations for designing a strategy for addressing those challenges? • For data access the off-the-shelf options are typically on either extreme of too coarse or too granular in their capabilities. What do you see as the major factors that contribute to that situation? • Data governance policies are often used as the primary means of identifying what data can be accessed by whom, but translating that into enforceable constraints is often left as a secondary exercise.
• How can we as an industry make that a more manageable and sustainable practice? • How can the audit trails that are generated by data systems be used to inform the technical controls for identity and access? • How can the foundational technologies of our data platforms be improved to make identity and authz a more composable primitive? • How does the introduction of streaming/real-time data ingest and delivery complicate the challenges of security controls? • What are the most interesting, innovative, or unexpected ways that you have seen data teams address ICAM? • What are the most interesting, unexpected, or challenging lessons that you have learned while working on ICAM? • What are the aspects of ICAM in data systems that you are paying close attention to? • What are your predictions for the industry adoption or enforcement of those controls? Contact Info: LinkedIn. Parting Question: From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. Links: UberEther • JWT == JSON Web Token • OPA == Open Policy Agent • Rego • PingIdentity • Okta • Microsoft Entra • SAML == Security Assertion Markup Language • OAuth • OIDC == OpenID Connect • IDP == Identity Provider • Kubernetes • Istio • Amazon CEDAR policy language • AWS IAM • PII == Personally Identifiable Information • CISO == Chief Information Security Officer • OpenTDF • OpenFGA • Google Zanzibar • Risk Management Framework • Model Context Protocol • Google Data Project • TPM == Trusted Platform Module • PKI == Public Key Infrastructure • Passkeys • DuckLake (Podcast Episode) • Accumulo • JDBC • OpenBao • Hashicorp Vault • LDAP. The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Summary In this episode Kate Shaw, Senior Product Manager for Data and SLIM at SnapLogic, talks about the hidden and compounding costs of maintaining legacy systems—and practical strategies for modernization. She unpacks how “legacy” is less about age and more about when a system becomes a risk: blocking innovation, consuming excess IT time, and creating opportunity costs. Kate explores technical debt, vendor lock-in, lost context from employee turnover, and the slippery notion of “if it ain’t broke,” especially when data correctness and lineage are unclear. She digs into governance, observability, and data quality as foundations for trustworthy analytics and AI, and why exit strategies for system retirement should be planned from day one. The discussion covers composable architectures to avoid monoliths and big-bang migrations, how to bridge valuable systems into AI initiatives without lock-in, and why clear success criteria matter for AI projects. Kate shares lessons from the field on discovery, documentation gaps, parallel run strategies, and using integration as the connective tissue to unlock data for modern, cloud-native and AI-enabled use cases. She closes with guidance on planning migrations, defining measurable outcomes, ensuring lineage and compliance, and building for swap-ability so teams can evolve systems incrementally instead of living with a “bowl of spaghetti.”
Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management. Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI Engineering, Streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect. Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details. Your host is Tobias Macey and today I'm interviewing Kate Shaw about the true costs of maintaining legacy systems. Interview: Introduction • How did you get involved in the area of data management? • What are your criteria for when a given system or service transitions to being "legacy"? • In order for any service to survive long enough to become "legacy" it must be serving its purpose and providing value. What are the common factors that prompt teams to deprecate or migrate systems? • What are the sources of monetary cost related to maintaining legacy systems while they remain operational? • Beyond monetary cost, economics also has a concept of "opportunity cost". What are some of the ways that manifests in data teams who are maintaining or migrating from legacy systems? • How does that loss of productivity impact the broader organization? • How does the process of migration contribute to issues around data accuracy, reliability, etc. as well as contributing to potential compromises of security and compliance? • Once a system has been replaced, it needs to be retired. What are some of the costs associated with removing a system from service? • What are the most interesting, innovative, or unexpected ways that you have seen teams address the costs of legacy systems and their retirement? • What are the most interesting, unexpected, or challenging lessons that you have learned while working on legacy systems migration? • When is deprecation/migration the wrong choice? • How have evolutionary architecture patterns helped to mitigate the costs of system retirement? Contact Info: LinkedIn. Parting Question: From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used.
The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. Links: SnapLogic • SLIM == SnapLogic Intelligent Modernizer • Opportunity Cost • Sunk Cost Fallacy • Data Governance • Evolutionary Architecture. The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
We're joined by Douwe Kiela, CEO of Contextual.ai and pioneer in RAG research. From deploying AI agents at Fortune 500 companies to shedding light on data privacy and security, Douwe shares his expertise and insights on how to make data simple, effective, and secure. 00:46 Introducing Douwe Kiela 01:37 RAG - Here to Stay or Go? 06:59 LLMs with Context 08:20 Making AI Successful 10:34 Why Contextual AI? 17:18 LLM versus SLMs 20:28 Speed over Perfection 22:07 Hallucinations 26:02 Making AI Easy to Consume 28:50 Defining an Agent 32:53 Reaching Contextual AI 33:14 The Contrarian View 34:37 The Risks of AI 36:53 For Fun
LinkedIn: linkedin.com/in/douwekiela Website: https://contextual.ai/ Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Replay Episode: Python, Anaconda, and the AI Frontier with Peter Wang Peter Wang — Chief AI & Innovation Officer and Co-founder of Anaconda — is back on Making Data Simple! Known for shaping the open-source ecosystem and making Python a powerhouse, Peter dives into Anaconda’s new AI incubator, the future of GenAI, and why Python isn’t just “still a thing”… it’s the thing. From branding and security to leadership and philosophy, this episode is a wild ride through the biggest opportunities (and risks) shaping AI today. Timestamps: 01:27 Meet Peter Wang 05:10 Python or R? 05:51 Anaconda’s Differentiation 07:08 Why the Name Anaconda 08:24 The AI Incubator 11:40 GenAI 14:39 Enter Python 16:08 Anaconda Commercial Services 18:40 Security 20:57 Common Points of Failure 22:53 Branding 24:50 watsonx Partnership 28:40 AI Risks 34:13 Getting Philosophical 36:13 China 44:52 Leadership Style
Linkedin: linkedin.com/in/pzwang Website: https://www.linkedin.com/company/anacondainc/, https://www.anaconda.com/ Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
The manufacturing floor is undergoing a technological revolution with industrial AI at its center. From predictive maintenance to quality control, AI is transforming how products are designed, produced, and maintained. But implementing these technologies isn't just about installing sensors and software—it's about empowering your workforce to embrace new tools and processes. How do you overcome AI hesitancy among experienced workers? What skills should your team develop to make the most of these new capabilities? And with limited resources, how do you prioritize which AI applications will deliver the greatest impact for your specific manufacturing challenges? The answers might be simpler than you think. Barbara Humpton is President and CEO of Siemens Corporation, responsible for strategy and engagement in Siemens’ largest market. Under her leadership, Siemens USA operates across all 50 states and Puerto Rico with 45,000 employees and generated $21.1 billion in revenue in fiscal year 2024. She champions the role of technology in expanding what’s humanly possible and is a strong advocate for workforce development, mentorship, and building sustainable work-life integration. Previously, she was President and CEO of Siemens Government Technologies, leading delivery of Siemens’ products and services to U.S. federal agencies. Before joining Siemens in 2011, she held senior roles at Booz Allen Hamilton and Lockheed Martin, where she oversaw programs in national security, biometrics, border protection, and critical infrastructure, including the FBI’s Next Generation Identification and TSA’s Transportation Workers’ Identification Credential. Olympia Brikis is a seasoned technology and business leader with over a decade of experience in AI research. As the Technology and Engineering Director for Siemens' Industrial AI Research in the U.S., she leads AI strategy, technology roadmapping, and R&D for next-gen AI products. Olympia has a strong track record in developing Generative AI products that integrate industrial and digital ecosystems, driving real-world business impact. She is a recognized thought leader with numerous patents and peer-reviewed publications in AI for manufacturing, predictive analytics, and digital twins. Olympia actively engages with executives, policymakers, and AI practitioners on AI's role in enterprise strategy and workforce transformation. With a background in Computer Science from LMU Munich and an MBA from Wharton, she bridges AI research, product strategy, and enterprise adoption, mentoring the next generation of AI leaders. In the episode, Richie, Barbara, and Olympia explore the transformative power of AI in manufacturing, from predictive maintenance to digital twins, the role of industrial AI in enhancing productivity, the importance of empowering workers with new technology, real-world applications, overcoming AI hesitancy, and much more. Links Mentioned in the Show: Siemens Industrial AI Suite • Connect with Barbara and Olympia • Course: Implementing AI Solutions in Business • Related Episode: Master Your Inner Game to Avoid Burnout with Klaus Kleinfeld, Former CEO at Alcoa and Siemens • Rewatch RADAR AI where...
What are the hidden dangers lurking beneath the surface of vibe-coded apps and hyped-up CEO promises? And what is Influence Ops? I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky. We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term. Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized, globally scaled Influence Ops to manipulate public opinion and democracy itself. Find Disesdi Susanna Cox: Substack: https://disesdi.substack.com/ • Socials (LinkedIn, X, etc.): @Disesdi KEY MOMENTS: 00:26 - Who is Disesdi Susanna Cox? 03:52 - What are Red, Blue, and Purple Teams in Security? 07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash 12:32 - How GenAI Security is Different (and Worse) than Classical ML 14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the "T" Dating App 18:34 - The Unethical Problem with "Vibe Coding" 24:32 - "Vibe Companies": The Gaslighting from CEOs About AI 30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing 33:13 - Deconstructing the "Woke AI" Panic 44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops 52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn
Combining LLMs with enterprise knowledge bases is creating powerful new agents that can transform business operations. These systems are dramatically improving on traditional chatbots by understanding context, following conversations naturally, and accessing up-to-date information. But how do you effectively manage the knowledge that powers these agents? What governance structures need to be in place before deployment? And as we look toward a future with physical AI and robotics, what fundamental computing challenges must we solve to ensure these technologies enhance rather than complicate our lives? Jun Qian is an accomplished technology leader with extensive experience in artificial intelligence and machine learning. Currently serving as Vice President of Generative AI Services at Oracle since May 2020, Jun founded and leads the Engineering and Science group, focusing on the creation and enhancement of Generative AI services and AI Agents. Previously held roles include Vice President of AI Science and Development at Oracle, Head of AI and Machine Learning at Sift, and Principal Group Engineering Manager at Microsoft, where Jun co-founded Microsoft Power Virtual Agents. Jun's career also includes significant contributions as the Founding Manager of Amazon Machine Learning at AWS and as a Principal Investigator at Verizon. In the episode, Richie and Jun explore the evolution of AI agents, the unique features of ChatGPT, the challenges and advancements in chatbot technology, the importance of data management and security in AI, and the future of AI in computing and robotics, and much more. Links Mentioned in the Show: Oracle • Connect with Jun • Course: Introduction to AI Agents • Jun at DataCamp RADAR • Related Episode: A Framework for GenAI App and Agent Development with Jerry Liu, CEO at LlamaIndex • Rewatch RADAR AI. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business
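The pattern behind the agents described above, grounding an LLM's answers in an enterprise knowledge base, is retrieval-augmented generation. The sketch below is a deliberately tiny, model-free illustration of just the retrieval and prompt-assembly step: the documents, the scoring, and the function names are invented, and a production system would use an embedding model plus a vector store for retrieval and then call an LLM with the assembled prompt.

```python
# Minimal, model-free sketch of the retrieval step behind a knowledge-base
# agent. Documents and scoring are invented; real systems use embeddings and
# a vector store, then pass the retrieved context to an LLM.
import re

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise customers can enable SSO via the admin console.",
    "Agents must cite the knowledge-base article used in each answer.",
]

def tokenize(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, top_k: int = 2) -> list:
    """Rank documents by naive word overlap with the question."""
    q = tokenize(question)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to the model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take for enterprise customers?"))
```

Most of the governance questions raised in the episode live in what is allowed into the knowledge base and who may retrieve from it, which is why knowledge management and access policy come before the model itself.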
Brought to You By: • WorkOS — The modern identity platform for B2B SaaS. • Statsig — The unified platform for flags, analytics, experiments, and more. • Sonar — Code quality and code security for ALL code. — In this episode of The Pragmatic Engineer, I sit down with Peter Walker, Head of Insights at Carta, to break down how venture capital and startups themselves are changing. We go deep on the numbers: why fewer companies are getting funded despite record VC investment levels, how hiring has shifted dramatically since 2021, and why solo founders are on the rise even though most VCs still prefer teams. We also unpack the growing emphasis on ARR per FTE, what actually happens in bridge and down rounds, and why the time between fundraising rounds has stretched far beyond the old 18-month cycle. We cover what all this means for engineers: what to ask before joining a startup, how to interpret valuation trends, and what kind of advisor roles startups are actually looking for. If you work at a startup, are considering joining one, or just want a clearer picture of how venture-backed companies operate today, this episode is for you. — Timestamps (00:00) Intro (01:21) How venture capital works and the goal of VC-backed startups (03:10) Venture vs. non-venture backed businesses (05:59) Why venture-backed companies prioritize growth over profitability (09:46) A look at the current health of venture capital (13:19) The hiring slowdown at startups (16:00) ARR per FTE: The new metric VCs care about (21:50) Priced seed rounds vs. SAFEs (24:48) Why some founders are incentivized to raise at high valuations (29:31) What a bridge round is and why they can signal trouble (33:15) Down rounds and how optics can make or break startups (36:47) Why working at startups offers more ownership and learning (37:47) What the data shows about raising money in the summer (41:45) The length of time it takes to close a VC deal (44:29) How AI is reshaping startup formation, team size, and funding trends (48:11) Why VCs don’t like solo founders (50:06) How employee equity (ESOPs) work (53:50) Why acquisition payouts are often smaller than employees expect (55:06) Deep tech vs. software startups: (57:25) Startup advisors: What they do, how much equity they get (1:02:08) Why time between rounds is increasing and what that means (1:03:57) Why it’s getting harder to get from Seed to Series A (1:06:47) A case for quitting (sometimes) (1:11:40) How to evaluate a startup before joining as an engineer (1:13:22) The skills engineers need to thrive in a startup environment (1:16:04) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode:
— See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Statsig — The unified platform for flags, analytics, experiments, and more. • Sonar — Code quality and code security for ALL code. — Steve Yegge is known for his writing and “rants”, including the famous “Google Platforms Rant” and the evergreen “Get that job at Google” post. He spent 7 years at Amazon and 13 at Google, as well as some time at Grab before briefly retiring from tech. Now out of retirement, he’s building AI developer tools at Sourcegraph—drawn back by the excitement of working with LLMs. He’s currently writing the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond. In this episode of The Pragmatic Engineer, I sat down with Steve in Seattle to talk about why Google consistently failed at building platforms, why AI coding feels easy but is hard to master, and why a new role, the AI Fixer, is emerging. We also dig into why he’s so energized by today’s AI tools, and how they’re changing the way software gets built. We also discuss: • The “interview anti-loop” at Google and the problems with interviews • An inside look at how Amazon operated in the early days before microservices • What Steve liked about working at Grab • Reflecting on the Google platforms rant and why Steve thinks Google is still terrible at building platforms • Why Steve came out of retirement • The emerging role of the “AI Fixer” in engineering teams • How AI-assisted coding is deceptively simple, but extremely difficult to steer • Steve’s advice for using AI coding tools and overcoming common challenges • Predictions about the future of developer productivity • A case for AI creating a real meritocracy • And much more! — Timestamps (00:00) Intro (04:55) An explanation of the interview anti-loop at Google and the shortcomings of interviews (07:44) Work trials and why entry-level jobs aren’t posted for big tech companies (09:50) An overview of the difficult process of landing a job as a software engineer (15:48) Steve’s thoughts on Grab and why he loved it (20:22) Insights from the Google platforms rant that was picked up by TechCrunch (27:44) The impact of the Google platforms rant (29:40) What Steve discovered about print ads not working for Google (31:48) What went wrong with Google+ and Wave (35:04) How Amazon has changed and what Google is doing wrong (42:50) Why Steve came out of retirement (45:16) Insights from “the death of the junior developer” and the impact of AI (53:20) The new role Steve predicts will emerge (54:52) Changing business cycles (56:08) Steve’s new book about vibe coding and Gergely’s experience (59:24) Reasons people struggle with AI tools (1:02:36) What will developer productivity look like in the future (1:05:10) The cost of using coding agents (1:07:08) Steve’s advice for vibe coding (1:09:42) How Steve used AI tools to work on his game Wyvern (1:15:00) Why Steve thinks there will actually be more jobs for developers (1:18:29) A comparison between game engines and AI tools (1:21:13) Why you need to learn AI now (1:30:08) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • The full circle of developer productivity with Steve Yegge • Inside Amazon’s engineering culture • Vibe coding as a software engineer • AI engineering in the real world • The AI Engineering stack • Inside Sourcegraph’s engineering culture— See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and 
marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Statsig — The unified platform for flags, analytics, experiments, and more. • Sonar — Code quality and code security for ALL code. — What happens when a company goes all in on AI? At Shopify, engineers are expected to utilize AI tools, and they’ve been doing so for longer than most. Thanks to early access to models from GitHub Copilot, OpenAI, and Anthropic, the company has had a head start in figuring out what works. In this live episode from LDX3 in London, I spoke with Farhan Thawar, VP of Engineering, about how Shopify is building with AI across the entire stack. We cover the company’s internal LLM proxy, its policy of unlimited token usage, and how interns help push the boundaries of what’s possible. In this episode, we cover: • How Shopify works closely with AI labs • The story behind Shopify’s recent Code Red • How non-engineering teams are using Cursor for vibecoding • Tobi Lütke’s viral memo and Shopify’s expectations around AI • A look inside Shopify’s LLM proxy—used for privacy, token tracking, and more • Why Shopify places no limit on AI token spending • Why AI-first isn’t about reducing headcount—and why Shopify is hiring 1,000 interns • How Shopify’s engineering department operates and what’s changed since adopting AI tooling • Farhan’s advice for integrating AI into your workflow • And much more! — Timestamps (00:00) Intro (02:07) Shopify’s philosophy: “hire smart people and pair with them on problems” (06:22) How Shopify works with top AI labs (08:50) The recent Code Red at Shopify (10:47) How Shopify became early users of GitHub Copilot and their pivot to trying multiple tools (12:49) The surprising ways non-engineering teams at Shopify are using Cursor (14:53) Why you have to understand code to submit a PR at Shopify (16:42) AI tools' impact on SaaS (19:50) Tobi Lütke’s AI memo (21:46) Shopify’s LLM proxy and how they protect their privacy (23:00) How Shopify utilizes MCPs (26:59) Why AI tools aren’t the place to pinch pennies (30:02) Farhan’s projects and favorite AI tools (32:50) Why AI-first isn’t about freezing headcount and the value of hiring interns (36:20) How Shopify’s engineering department operates, including internal tools (40:31) Why Shopify added coding interviews for director-level and above hires (43:40) What has changed since Shopify added AI tooling (44:40) Farhan’s advice for implementing AI tools — The Pragmatic Engineer deepdives relevant for this episode: • How Shopify built its Live Globe for Black Friday • Inside Shopify's leveling split • Real-world engineering challenges: building Cursor • How Anthropic built Artifacts — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Supported by Our Partners • Statsig — The unified platform for flags, analytics, experiments, and more. • Graphite — The AI developer productivity platform. • Augment Code — AI coding assistant that pro engineering teams love — GitHub recently turned 17 years old—but how did it start, how has it evolved, and what does the future look like as AI reshapes developer workflows? In this episode of The Pragmatic Engineer, I’m joined by Thomas Dohmke, CEO of GitHub. Thomas has been a GitHub user for 16 years and an employee for 7. We talk about GitHub’s early architecture, its remote-first operating model, and how the company is navigating AI—from Copilot to agents. We also discuss why GitHub hires junior engineers, how the company handled product-market fit early on, and why being a beloved tool can make shipping harder at times. Other topics we discuss include: • How GitHub’s architecture evolved beyond its original Rails monolith • How GitHub runs as a remote-first company—and why they rarely use email • GitHub’s rigorous approach to security • Why GitHub hires junior engineers • GitHub’s acquisition by Microsoft • The launch of Copilot and how it’s reshaping software development • Why GitHub sees AI agents as tools, not a replacement for engineers • And much more! — Timestamps (00:00) Intro (02:25) GitHub’s modern tech stack (08:11) From cloud-first to hybrid: How GitHub handles infrastructure (13:08) How GitHub’s remote-first culture shapes its operations (18:00) Former and current internal tools including Haystack (21:12) GitHub’s approach to security (24:30) The current size of GitHub, including security and engineering teams (25:03) GitHub’s intern program, and why they are hiring junior engineers (28:27) Why AI isn’t a replacement for junior engineers (34:40) A mini-history of GitHub (39:10) Why GitHub hit product market fit so quickly (43:44) The invention of pull requests (44:50) How GitHub enables offline work (46:21) How monetization has changed at GitHub since the acquisition (48:00) 2014 desktop application releases (52:10) The Microsoft acquisition (1:01:57) Behind the scenes of GitHub’s quiet period (1:06:42) The release of Copilot and its impact (1:14:14) Why GitHub decided to open-source Copilot extensions (1:20:01) AI agents and the myth of disappearing engineering jobs (1:26:36) Closing — The Pragmatic Engineer deepdives relevant for this episode: • AI Engineering in the real world • The AI Engineering stack • How Linux is built with Greg Kroah-Hartman • Stacked Diffs (and why you should know about them) • 50 Years of Microsoft and developer tools — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Supported by Our Partners • Sonar — Code quality and code security for ALL code. • Statsig — The unified platform for flags, analytics, experiments, and more. • Augment Code — AI coding assistant that pro engineering teams love. — Kent Beck is one of the most influential figures in modern software development. Creator of Extreme Programming (XP), co-author of The Agile Manifesto, and a pioneer of Test-Driven Development (TDD), he’s shaped how teams write, test, and think about code. Now, with over five decades of programming experience, Kent is still pushing boundaries—this time with AI coding tools. In this episode of Pragmatic Engineer, I sit down with him to talk about what’s changed, what hasn’t, and why he’s more excited than ever to code. In our conversation, we cover: • Why Kent calls AI tools an “unpredictable genie”—and how he’s using them • Why Kent no longer has an emotional attachment to any specific programming language • The backstory of The Agile Manifesto—and why Kent resisted the word “agile” • An overview of XP (Extreme Programming) and how Grady Booch played a role in the name • Tape-to-tape experiments in Kent’s childhood that laid the groundwork for TDD • Kent’s time at Facebook and how he adapted to its culture and use of feature flags • And much more! — Timestamps (00:00) Intro (02:27) What Kent has been up to since writing Tidy First (06:05) Why AI tools are making coding more fun for Kent and why he compares it to a genie (13:41) Why Kent says languages don’t matter anymore (16:56) Kent’s current project building a small talk server (17:51) How Kent got involved with The Agile Manifesto (23:46) Gergely’s time at JP Morgan, and why Kent didn’t like the word ‘agile’ (26:25) An overview of “extreme programming” (XP) (35:41) Kent’s childhood tape-to-tape experiments that inspired TDD (42:11) Kent’s response to Ousterhout’s criticism of TDD (50:05) Why Kent still uses TDD with his AI stack (54:26) How Facebook operated in 2011 (1:04:10) Facebook in 2011 vs. 2017 (1:12:24) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Today, we’re diving deep with Brigadier General (Ret.) Blaine Holt, also known as “Blaino,” for a no-holds-barred conversation that unpacks the intersection of national security, technology, and the future of AI. Whether you’re an industry veteran, aspiring leader, or just curious about what’s next for AI, national security, and technology, you’ll find major insights—and plenty of food for thought—in this episode.
For more about us: https://linktr.ee/overwatchmissioncritical