talk-data.com

Topic: Large Language Models (LLM)
Tags: nlp, ai, machine_learning
15 activities tagged

Activity Trend: 158 peak/qtr (2020-Q1 to 2026-Q1)

Activities

Showing filtered results (filtering by: Gergely Orosz)

Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. AI-accelerated development isn’t just about shipping faster: it’s about measuring whether what you ship actually delivers value. This is where modern experimentation with Statsig comes in. Check it out. • Linear — The system for modern product development. I had a jaw-dropping experience when I dropped in for the weekly “Quality Wednesdays” meeting at Linear. Every week, every dev fixes at least one quality issue, large or small. Even if it’s a one-pixel misalignment, like this one. I’ve yet to see a team obsess this much about quality. Read more about how Linear does Quality Wednesdays – it’s fascinating! — Martin Fowler is one of the most influential people in software architecture and the broader tech industry. He is the Chief Scientist at Thoughtworks and the author of Refactoring, Patterns of Enterprise Application Architecture, and several other books. He has spent decades shaping how engineers think about design, architecture, and process, and regularly publishes on his blog, MartinFowler.com. In this episode, we discuss how AI is changing software development: the shift from deterministic to non-deterministic coding; where generative models help with legacy code; and the narrow but useful cases for vibe coding. Martin explains why LLM output must be tested rigorously, why refactoring is more important than ever, and how combining AI tools with deterministic techniques may be what engineering teams need. We also revisit the origins of the Agile Manifesto and talk about why, despite rapid changes in tooling and workflows, the skills that make a great engineer remain largely unchanged. — Timestamps (00:00) Intro (01:50) How Martin got into software engineering (07:48) Joining Thoughtworks (10:07) The Thoughtworks Technology Radar (16:45) From Assembly to high-level languages (25:08) Non-determinism (33:38) Vibe coding (39:22) StackOverflow vs.
coding with AI (43:25) Importance of testing with LLMs (50:45) LLMs for enterprise software (56:38) Why Martin wrote Refactoring (1:02:15) Why refactoring is so relevant today (1:06:10) Using LLMs with deterministic tools (1:07:36) Patterns of Enterprise Application Architecture (1:18:26) The Agile Manifesto (1:28:35) How Martin learns about AI (1:34:58) Advice for junior engineers (1:37:44) The state of the tech industry today (1:42:40) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Vibe coding as a software engineer • The AI Engineering stack • AI Engineering in the real world • What changed in 50 years of computing — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. Companies like Graphite, Notion, and Brex rely on Statsig to measure the impact of what they ship. Get a 30-day enterprise trial here. • Linear — The system for modern product development. Linear is a heavy user of Swift: they just redesigned their native iOS app using their own take on Apple’s Liquid Glass design language. The new app is about speed and performance – just like Linear is. Check it out. — Chris Lattner is one of the most influential engineers of the past two decades. He created the LLVM compiler infrastructure and the Swift programming language – and Swift opened iOS development to a broader group of engineers. With Mojo, he’s now aiming to do the same for AI, by lowering the barrier to programming AI applications. I sat down with Chris in San Francisco to talk language design, lessons from designing Swift and Mojo, and – of course! – compilers. It’s hard to find someone who is as enthusiastic and knowledgeable about compilers as Chris is! We also discussed why experts often resist change even when current tools slow them down, what he learned about AI and hardware from his time across both large and small engineering teams, and why compiler engineering remains one of the best ways to understand how software really works. — Timestamps (00:00) Intro (02:35) Compilers in the early 2000s (04:48) Why Chris built LLVM (08:24) GCC vs.
LLVM (09:47) LLVM at Apple  (19:25) How Chris got support to go open source at Apple (20:28) The story of Swift  (24:32) The process for designing a language  (31:00) Learnings from launching Swift  (35:48) Swift Playgrounds: making coding accessible (40:23) What Swift solved and the technical debt it created (47:28) AI learnings from Google and Tesla  (51:23) SiFive: learning about hardware engineering (52:24) Mojo’s origin story (57:15) Modular’s bet on a two-level stack (1:01:49) Compiler shortcomings (1:09:11) Getting started with Mojo  (1:15:44) How big is Modular, as a company? (1:19:00) AI coding tools the Modular team uses  (1:22:59) What kind of software engineers Modular hires  (1:25:22) A programming language for LLMs? No thanks (1:29:06) Why you should study and understand compilers — The Pragmatic Engineer deepdives relevant for this episode: •⁠ AI Engineering in the real world • The AI Engineering stack • Uber's crazy YOLO app rewrite, from the front seat • Python, Go, Rust, TypeScript and AI with Armin Ronacher • Microsoft’s developer tools roots — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].


Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. • Linear — The system for modern product development. — Addy Osmani is Head of Chrome Developer Experience at Google, where he leads teams focused on improving performance, tooling, and the overall developer experience for building on the web. If you’ve ever opened Chrome’s Developer Tools, you’ve definitely used features Addy has built. He’s also the author of several books, including his latest, Beyond Vibe Coding, which explores how AI is changing software development. In this episode of The Pragmatic Engineer, I sit down with Addy to discuss how AI is reshaping software engineering workflows, the tradeoffs between speed and quality, and why understanding generated code remains critical. We dive into his article The 70% Problem, which explains why AI tools accelerate development but struggle with the final 30% of software quality – and why this last 30% is best tackled by software engineers who understand how the system actually works. — Timestamps (00:00) Intro (02:17) Vibe coding vs.
AI-assisted engineering (06:07) How Addy uses AI tools (13:10) Addy’s learnings about applying AI for development (18:47) Addy’s favorite tools (22:15) The 70% Problem (28:15) Tactics for efficient LLM usage (32:58) How AI tools evolved (34:29) The case for keeping expectations low and control high (38:05) Autonomous agents and working with them (42:49) How the EM and PM role changes with AI (47:14) The rise of new roles and shifts in developer education (48:11) The importance of critical thinking when working with AI (54:08) LLMs as a tool for learning (1:03:50) Rapid questions — The Pragmatic Engineer deepdives relevant for this episode: •⁠ Vibe Coding as a software engineer •⁠ How AI-assisted coding will change software engineering: hard truths •⁠ AI Engineering in the real world •⁠ The AI Engineering stack •⁠ How Claude Code is built — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].


Brought to You By: •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. Something interesting is happening with the latest generation of tech giants. Rather than building advanced experimentation tools themselves, companies like Anthropic, Figma, Notion and a bunch of others… are just using Statsig. Statsig has rebuilt this entire suite of data tools that was available at maybe 10 or 15 giants until now. Check out Statsig. •⁠ Linear – The system for modern product development. Linear is just so fast to use – and it enables velocity in product workflows. Companies like Perplexity and OpenAI have already switched over, because simplicity scales. Go ahead and check out Linear and see why it feels like a breeze to use. — What is it really like to be an engineer at Google? In this special deep dive episode, we unpack how engineering at Google actually works. We spent months researching the engineering culture of the search giant, and talked with 20+ current and former Googlers to bring you this deepdive with Elin Nilsson, tech industry researcher for The Pragmatic Engineer and a former Google intern. Google has always been an engineering-driven organization. We talk about its custom stack and tools, the design-doc culture, and the performance and promotion systems that define career growth. We also explore the culture that feels built for engineers: generous perks, a surprisingly light on-call setup often considered the best in the industry, and a deep focus on solving technical problems at scale. If you are thinking about applying to Google or are curious about how the company’s engineering culture has evolved, this episode takes a clear look at what it was like to work at Google in the past versus today, and who is a good fit for today’s Google. 
Jump to interesting parts: (13:50) Tech stack (1:05:08) Performance reviews (GRAD) (2:07:03) The culture of continuously rewriting things — Timestamps (00:00) Intro (01:44) Stats about Google (11:41) The shared culture across Google (13:50) Tech stack (34:33) Internal developer tools and monorepo (43:17) The downsides of having so many internal tools at Google (45:29) Perks (55:37) Engineering roles (1:02:32) Levels at Google  (1:05:08) Performance reviews (GRAD) (1:13:05) Readability (1:16:18) Promotions (1:25:46) Design docs (1:32:30) OKRs (1:44:43) Googlers, Nooglers, ReGooglers (1:57:27) Google Cloud (2:03:49) Internal transfers (2:07:03) Rewrites (2:10:19) Open source (2:14:57) Culture shift (2:31:10) Making the most of Google, as an engineer (2:39:25) Landing a job at Google — The Pragmatic Engineer deepdives relevant for this episode: •⁠ Inside Google’s engineering culture •⁠ Oncall at Google •⁠ Performance calibrations at tech companies •⁠ Promotions and tooling at Google •⁠ How Kubernetes is built •⁠ The man behind the Big Tech comics: Google cartoonist Manu Cornet — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].


Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that allow engineering teams to measure the impact of their work. This toolkit is so valuable to so many teams that OpenAI – a huge user of Statsig – decided to acquire the company, as announced last week. Talk about validation! Check out Statsig. • Linear — The system for modern product development. Here’s an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself. — What does it take to do well at a hyper-growth company? In this episode of The Pragmatic Engineer, I sit down with Charles-Axel Dein, one of the first engineers at Uber, who later hired me there. Since then, he’s gone on to work at CloudKitchens. He’s also been maintaining the popular Professional programming reading list GitHub repo for 15 years, where he collects articles that made him a better programmer. In our conversation, we dig into what it’s really like to work inside companies that grow rapidly in scale and headcount. Charles shares what he’s learned about personal productivity, project management, incidents, and interviewing, plus how to build flexible skills that hold up in fast-moving environments.
Jump to interesting parts: • 10:41 – the reality of working inside a hyperscale company • 41:10 – the traits of high-performing engineers • 1:03:31 – Charles’ advice for getting hired in today’s job market We also discuss: • How to spot the signs of hypergrowth (and when it’s slowing down) • What sets high-performing engineers apart beyond shipping • Charles’s personal productivity tips, favorite reads, and how he uses reading to uplevel his skills • Strategic tips for building your resume and interviewing  • How imposter syndrome is normal, and how leaning into it helps you grow • And much more! If you’re at a fast-growing company, considering joining one, or looking to land your next role, you won’t want to miss this practical advice on hiring, interviewing, productivity, leadership, and career growth. — Timestamps (00:00) Intro (04:04) Early days at Uber as engineer #20 (08:12) CloudKitchens’ similarities with Uber (10:41) The reality of working at a hyperscale company (19:05) Tenancies and how Uber deployed new features (22:14) How CloudKitchens handles incidents (26:57) Hiring during fast-growth (34:09) Avoiding burnout (38:55) The popular Professional programming reading list repo (41:10) The traits of high-performing engineers  (53:22) Project management tactics (1:03:31) How to get hired as a software engineer (1:12:26) How AI is changing hiring (1:19:26) Unexpected ways to thrive in fast-paced environments (1:20:45) Dealing with imposter syndrome  (1:22:48) Book recommendations  (1:27:26) The problem with survival bias  (1:32:44) AI’s impact on software development  (1:42:28) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: •⁠ Software engineers leading projects •⁠ The Platform and Program split at Uber •⁠ Inside Uber’s move to the Cloud •⁠ How Uber built its observability platform •⁠ From Software Engineer to AI Engineer – with Janvi Kalra — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. 
For inquiries about sponsoring the podcast, email [email protected].


Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. Statsig built a complete set of data tools that allow engineering teams to measure the impact of their work. This toolkit is so valuable to so many teams that OpenAI – a huge user of Statsig – decided to acquire the company, as announced last week. Talk about validation! Check out Statsig. • Linear — The system for modern product development. Here’s an interesting story: OpenAI switched to Linear as a way to establish a shared vocabulary between teams. Every project now follows the same lifecycle, uses the same labels, and moves through the same states. Try Linear for yourself. — The Pragmatic Engineer Podcast is back with the Fall 2025 season. Looking ahead, expect new episodes to be published on most Wednesdays. Code Complete is one of the most enduring books on software engineering. Steve McConnell wrote the 900-page handbook just five years into his career, capturing what he wished he’d known when starting out. Decades later, the lessons remain relevant, and Code Complete remains a best-seller. In this episode, we talk about what has aged well, what needed updating in the second edition, and the broader career principles Steve has developed along the way. From his “career pyramid” model to his critique of “lily pad hopping,” and why periods of working in fast-paced, all-in environments can be so rewarding, the emphasis throughout is on taking ownership of your career and making deliberate choices. We also discuss: • Top-down vs. bottom-up design and why most engineers default to one approach • Why rewriting code multiple times makes it better • How taking a year off to write Code Complete crystallized key lessons • The 3 areas software designers need to understand, and why focusing only on technology may be the most limiting • And much more! Steve rarely gives interviews, so I hope you enjoy this conversation, which we recorded in Seattle.
— Timestamps (00:00) Intro (01:31) How and why Steve wrote Code Complete (08:08) What code construction is and how it differs from software development (11:12) Top-down vs. bottom-up design approach (14:46) Why design documents frustrate some engineers (16:50) The case for rewriting everything three times (20:15) Steve’s career before and after Code Complete (27:47) Steve’s career advice (44:38) Three areas software designers need to understand (48:07) Advice when becoming a manager, as a developer (53:02) The importance of managing your energy (57:07) Early Microsoft and why startups are a culture of intense focus (1:04:14) What changed in the second edition of Code Complete  (1:10:50) AI’s impact on software development: Steve’s take (1:17:45) Code reviews and GenAI (1:19:58) Why engineers are becoming more full-stack  (1:21:40) Could AI be the exception to “no silver bullets?” (1:26:31) Steve’s advice for engineers on building a meaningful career — The Pragmatic Engineer deepdives relevant for this episode: • What changed in 50 years of computing • The past and future of modern backend practices • The Philosophy of Software Design – with John Ousterhout • AI tools for software engineers, but without the hype – with Simon Willison (co-creator of Django)  • TDD, AI agents and coding – with Kent Beck — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].


Supported by Our Partners •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. •⁠ Sonar — Code quality and code security for ALL code. — Steve Yegge⁠ is known for his writing and “rants”, including the famous “Google Platforms Rant” and the evergreen “Get that job at Google” post. He spent 7 years at Amazon and 13 at Google, as well as some time at Grab before briefly retiring from tech. Now out of retirement, he’s building AI developer tools at Sourcegraph—drawn back by the excitement of working with LLMs. He’s currently writing the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond. In this episode of The Pragmatic Engineer, I sat down with Steve in Seattle to talk about why Google consistently failed at building platforms, why AI coding feels easy but is hard to master, and why a new role, the AI Fixer, is emerging. We also dig into why he’s so energized by today’s AI tools, and how they’re changing the way software gets built. We also discuss:  • The “interview anti-loop” at Google and the problems with interviews • An inside look at how Amazon operated in the early days before microservices   • What Steve liked about working at Grab • Reflecting on the Google platforms rant and why Steve thinks Google is still terrible at building platforms • Why Steve came out of retirement • The emerging role of the “AI Fixer” in engineering teams • How AI-assisted coding is deceptively simple, but extremely difficult to steer • Steve’s advice for using AI coding tools and overcoming common challenges • Predictions about the future of developer productivity • A case for AI creating a real meritocracy  • And much more! 
— Timestamps (00:00) Intro (04:55) An explanation of the interview anti-loop at Google and the shortcomings of interviews (07:44) Work trials and why entry-level jobs aren’t posted for big tech companies (09:50) An overview of the difficult process of landing a job as a software engineer (15:48) Steve’s thoughts on Grab and why he loved it (20:22) Insights from the Google platforms rant that was picked up by TechCrunch (27:44) The impact of the Google platforms rant (29:40) What Steve discovered about print ads not working for Google  (31:48) What went wrong with Google+ and Wave (35:04) How Amazon has changed and what Google is doing wrong (42:50) Why Steve came out of retirement  (45:16) Insights from “the death of the junior developer” and the impact of AI (53:20) The new role Steve predicts will emerge  (54:52) Changing business cycles (56:08) Steve’s new book about vibe coding and Gergely’s experience  (59:24) Reasons people struggle with AI tools (1:02:36) What will developer productivity look like in the future (1:05:10) The cost of using coding agents  (1:07:08) Steve’s advice for vibe coding (1:09:42) How Steve used AI tools to work on his game Wyvern  (1:15:00) Why Steve thinks there will actually be more jobs for developers  (1:18:29) A comparison between game engines and AI tools (1:21:13) Why you need to learn AI now (1:30:08) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: •⁠ The full circle of developer productivity with Steve Yegge •⁠ Inside Amazon’s engineering culture •⁠ Vibe coding as a software engineer •⁠ AI engineering in the real world •⁠ The AI Engineering stack •⁠ Inside Sourcegraph’s engineering culture— See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].


Supported by Our Partners •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. • Sonar —  Code quality and code security for ALL code.  — What happens when a company goes all in on AI? At Shopify, engineers are expected to utilize AI tools, and they’ve been doing so for longer than most. Thanks to early access to models from GitHub Copilot, OpenAI, and Anthropic, the company has had a head start in figuring out what works. In this live episode from LDX3 in London, I spoke with Farhan Thawar, VP of Engineering, about how Shopify is building with AI across the entire stack. We cover the company’s internal LLM proxy, its policy of unlimited token usage, and how interns help push the boundaries of what’s possible. In this episode, we cover: • How Shopify works closely with AI labs • The story behind Shopify’s recent Code Red • How non-engineering teams are using Cursor for vibecoding • Tobi Lütke’s viral memo and Shopify’s expectations around AI • A look inside Shopify’s LLM proxy—used for privacy, token tracking, and more • Why Shopify places no limit on AI token spending  • Why AI-first isn’t about reducing headcount—and why Shopify is hiring 1,000 interns • How Shopify’s engineering department operates and what’s changed since adopting AI tooling • Farhan’s advice for integrating AI into your workflow • And much more! 
— Timestamps (00:00) Intro (02:07) Shopify’s philosophy: “hire smart people and pair with them on problems” (06:22) How Shopify works with top AI labs (08:50) The recent Code Red at Shopify (10:47) How Shopify became early users of GitHub Copilot and their pivot to trying multiple tools (12:49) The surprising ways non-engineering teams at Shopify are using Cursor (14:53) Why you have to understand code to submit a PR at Shopify (16:42) AI tools' impact on SaaS (19:50) Tobi Lütke’s AI memo (21:46) Shopify’s LLM proxy and how they protect their privacy (23:00) How Shopify utilizes MCPs (26:59) Why AI tools aren’t the place to pinch pennies (30:02) Farhan’s projects and favorite AI tools (32:50) Why AI-first isn’t about freezing headcount and the value of hiring interns (36:20) How Shopify’s engineering department operates, including internal tools (40:31) Why Shopify added coding interviews for director-level and above hires (43:40) What has changed since Shopify added AI tooling (44:40) Farhan’s advice for implementing AI tools — The Pragmatic Engineer deepdives relevant for this episode: • How Shopify built its Live Globe for Black Friday • Inside Shopify's leveling split • Real-world engineering challenges: building Cursor • How Anthropic built Artifacts — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].


Supported by Our Partners • Statsig — The unified platform for flags, analytics, experiments, and more. • Sinch — Connect with customers at every step of their journey. • Cortex — Your Portal to Engineering Excellence. — What does it take to land a job as an AI Engineer – and thrive in the role? In this episode of The Pragmatic Engineer, I’m joined by Janvi Kalra, currently an AI Engineer at OpenAI. Janvi shares how she broke into tech with internships at top companies, landed a full-time software engineering role at Coda, and later taught herself the skills to move into AI Engineering: building projects in her free time, joining hackathons, and ultimately proving herself and earning a spot on Coda’s first AI Engineering team. In our conversation, we dive into the world of AI Engineering and discuss three types of AI companies, how to assess them based on profitability and growth, and practical advice for landing your dream job in the field. We also discuss the following: • How Janvi landed internships at Google and Microsoft, and her tips for interview prepping • A framework for evaluating AI startups • An overview of what an AI Engineer does • A mini curriculum for self-learning AI: practical tools that worked for Janvi • The Coda project that impressed CEO Shishir Mehrotra and sparked Coda Brain • Janvi’s role at OpenAI and how the safety team shapes responsible AI • How OpenAI blends startup speed with big tech scale • Why AI Engineers must be ready to scrap their work and start over • Why today’s engineers need to be product-minded, design-aware, full-stack, and focused on driving business outcomes • And much more!
— Timestamps (00:00) Intro (02:31) How Janvi got her internships at Google and Microsoft (03:35) How Janvi prepared for her coding interviews  (07:11) Janvi’s experience interning at Google (08:59) What Janvi worked on at Microsoft  (11:35) Why Janvi chose to work for a startup after college (15:00) How Janvi picked Coda  (16:58) Janvi’s criteria for picking a startup now  (18:20) How Janvi evaluates ‘customer obsession’  (19:12) Fast—an example of the downside of not doing due diligence (21:38) How Janvi made the jump to Coda’s AI team (25:48) What an AI Engineer does  (27:30) How Janvi developed her AI Engineering skills through hackathons (30:34) Janvi’s favorite AI project at Coda: Workspace Q&A  (37:40) Learnings from interviewing at 46 companies (40:44) Why Janvi decided to get experience working for a model company  (43:17) Questions Janvi asks to determine growth and profitability (45:28) How Janvi got an offer at OpenAI, and an overview of the interview process (49:08) What Janvi does at OpenAI  (51:01) What makes OpenAI unique  (52:30) The shipping process at OpenAI (55:41) Surprising learnings from AI Engineering  (57:50) How AI might impact new graduates  (1:02:19) The impact of AI tools on coding—what is changing, and what remains the same (1:07:51) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: •⁠ AI Engineering in the real world •⁠ The AI Engineering stack •⁠ Building, launching, and scaling ChatGPT Images — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].


Supported by Our Partners • Modal — The cloud platform for building AI applications. • CodeRabbit — Cut code review time and bugs in half. Use the code PRAGMATIC to get one month free. — What happens when LLMs meet real-world codebases? In this episode of The Pragmatic Engineer, I am joined by Varun Mohan, CEO and Co-Founder of Windsurf. Varun talks me through the technical challenges of building an AI-native IDE (Windsurf) – and how these tools are changing the way software gets built. We discuss: • What building self-driving cars taught the Windsurf team about evaluating LLMs • How LLMs trained for text are missing coding capabilities like “fill in the middle” • How Windsurf optimizes for latency • Windsurf’s culture of taking bets and learning from failure • Breakthroughs that led to Cascade (agentic capabilities) • Why the Windsurf team builds its own LLMs • How non-dev employees at Windsurf build custom SaaS apps – with Windsurf! • How Windsurf empowers engineers to focus on more interesting problems • The skills that will remain valuable as AI takes over more of the codebase • And much more!
— Timestamps (00:00) Intro (01:37) How Windsurf tests new models (08:25) Windsurf’s origin story  (13:03) The current size and scope of Windsurf (16:04) The missing capabilities Windsurf uncovered in LLMs when used for coding (20:40) Windsurf’s work with fine-tuning inside companies  (24:00) Challenges developers face with Windsurf and similar tools as codebases scale (27:06) Windsurf’s stack and an explanation of FedRAMP compliance (29:22) How Windsurf protects latency and the problems with local data that remain unsolved (33:40) Windsurf’s processes for indexing code  (37:50) How Windsurf manages data  (40:00) The pros and cons of embedding databases  (42:15) “The split brain situation”—how Windsurf balances present and long-term  (44:10) Why Windsurf embraces failure and the learnings that come from it (46:30) Breakthroughs that fueled Cascade (48:43) The insider’s developer mode that allows Windsurf to dogfood easily  (50:00) Windsurf’s non-developer power user who routinely builds apps in Windsurf (52:40) Which SaaS products won’t likely be replaced (56:20) How engineering processes have changed at Windsurf  (1:00:01) The fatigue that goes along with being a software engineer, and how AI tools can help (1:02:58) Why Windsurf chose to fork VS Code and built a plugin for JetBrains  (1:07:15) Windsurf’s language server  (1:08:30) The current use of MCP and its shortcomings  (1:12:50) How coding used to work in C#, and how MCP may evolve  (1:14:05) Varun’s thoughts on vibe coding and the problems non-developers encounter (1:19:10) The types of engineers who will remain in demand  (1:21:10) How AI will impact the future of software development jobs and the software industry (1:24:52) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • IDEs with GenAI features that Software Engineers love • AI tooling for Software Engineers in 2024: reality check • How AI-assisted coding will change software engineering: hard truths • AI tools for 
software engineers, but without the hype — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Supported by Our Partners • CodeRabbit — Cut code review time and bugs in half. Use the code PRAGMATIC to get one month free. • Modal — The cloud platform for building AI applications.
— How will AI tools change software engineering? Tools like Cursor, Windsurf and Copilot are getting better at autocomplete, generating tests and documentation. But what is changing when it comes to software design? Stanford professor John Ousterhout thinks not much. In fact, he believes that great software design is becoming even more important as AI tools become more capable at generating code.
In this episode of The Pragmatic Engineer, John joins me to talk about why design still matters and how most teams struggle to get it right. We dive into his book A Philosophy of Software Design, unpack the difference between top-down and bottom-up approaches, and explore why some popular advice, like writing short methods or relying heavily on TDD, does not hold up, according to John.
We also explore:
• The differences between working in industry vs. academia
• Why John believes software design will become more important as AI capabilities expand
• The top-down and bottom-up design approaches – and why you should use both
• John’s “design it twice” principle
• Why deep modules are essential for good software design
• Best practices for special cases and exceptions
• The undervalued trait of empathy in design thinking
• Why John advocates for doing some design upfront
• John’s criticisms of the single-responsibility principle and TDD, and why he’s a fan of well-written comments
• And much more!
As a fun fact: when we recorded this podcast, John was busy contributing to the Linux kernel, adding support for the Homa Transport Protocol – a protocol invented by one of his PhD students. John wanted to make this protocol available more widely, and is putting in the work to do so. What a legend! (We previously covered how Linux is built and how to contribute to the Linux kernel.)
— Timestamps
(00:00) Intro
(02:00) Why John transitioned back to academia
(03:47) Working in academia vs. industry
(07:20) Tactical tornadoes vs. 10x engineers
(11:59) Long-term impact of AI-assisted coding
(14:24) An overview of software design
(15:28) Why TDD and Design Patterns are less popular now
(17:04) Two general approaches to designing software
(18:56) Two ways to deal with complexity
(19:56) A case for not going with your first idea
(23:24) How Uber used design docs
(26:44) Deep modules vs. shallow modules
(28:25) Best practices for error handling
(33:31) The role of empathy in the design process
(36:15) How John uses design reviews
(38:10) The value of in-person planning and using old-school whiteboards
(39:50) Leading a planning argument session and the places it works best
(42:20) The value of doing some design upfront
(46:12) Why John wrote A Philosophy of Software Design
(48:40) An overview of John’s class at Stanford
(52:20) A tough learning from early in Gergely’s career
(55:48) Why John disagrees with Robert Martin on short methods
(1:01:08) John’s criticisms of TDD and what he favors instead
(1:05:30) Why John supports the use of comments and how to use them correctly
(1:09:20) How John uses ChatGPT to help explain code in the Linux Kernel
(1:10:40) John’s current coding project in the Linux Kernel
(1:14:13) Updates to A Philosophy of Software Design in the second edition
(1:19:12) Rapid fire round
— The Pragmatic Engineer deepdives relevant for this episode:
• Engineering Planning with RFCs, Design Documents and ADRs
• Paying down tech debt
• Software architect archetypes
• Building Bluesky: a distributed social network
— See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast
— Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].


Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Vanta — Automate compliance and simplify security with Vanta.
— Linux is the most widespread operating system globally – but how is it built? Few people are better placed to answer this than Greg Kroah-Hartman: a Linux kernel maintainer for 25 years, and one of the 3 Linux Foundation Fellows (the other two are Linus Torvalds and Shuah Khan). Greg manages the Linux kernel’s stable releases, and is a maintainer of multiple kernel subsystems. We cover the inner workings of Linux kernel development, exploring everything from how changes get implemented to why its community-driven approach produces such reliable software. Greg shares insights about the kernel's unique trust model and makes a case for why engineers should contribute to open-source projects.
We go into:
• How widespread is Linux?
• What is the Linux kernel responsible for – and why is it a monolith?
• How does a kernel change get merged? A walkthrough
• The 9-week development cycle for the Linux kernel
• Testing the Linux kernel
• Why is Linux so widespread?
• The career benefits of open-source contribution
• And much more!
— Timestamps
(00:00) Intro
(02:23) How widespread is Linux?
(06:00) The difference in complexity in different devices powered by Linux
(09:20) What is the Linux kernel?
(14:00) Why trust is so important in Linux kernel development
(16:02) A walk-through of a kernel change
(23:20) How Linux kernel development cycles work
(29:55) The kernel’s testing process and KernelCI
(31:55) A case for the open source development process
(35:44) Linux kernel branches: stable vs. development
(38:32) Challenges of maintaining older Linux code
(40:30) How Linux handles bug fixes
(44:40) The range of work Linux kernel engineers do
(48:33) Greg’s review process and its parallels with Uber’s RFC process
(51:48) The Linux kernel within companies like IBM
(53:52) Why Linux is so widespread
(56:50) How the Linux kernel project runs without product managers
(1:02:01) The pros and cons of using Rust in the Linux kernel
(1:09:55) How LLMs are utilized in bug fixes and coding in Linux
(1:12:13) The value of contributing to the Linux kernel or any open-source project
(1:16:40) Rapid fire round
— The Pragmatic Engineer deepdives relevant for this episode:
• What TPMs do and what software engineers can learn from them
• The past and future of modern backend practices
• Backstage: an open-source developer portal
— See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast
— Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].


Supported by Our Partners • Swarmia — The engineering intelligence platform for modern software organizations. • Graphite — The AI developer productivity platform. • Vanta — Automate compliance and simplify security with Vanta.
— On today’s episode of The Pragmatic Engineer, I’m joined by Chip Huyen, a computer scientist, author of the freshly published O’Reilly book AI Engineering, and an expert in applied machine learning. Chip has worked as a researcher at Netflix, was a core developer at NVIDIA (building NeMo, NVIDIA’s GenAI framework), and co-founded Claypot AI. She also taught Machine Learning at Stanford University.
In this conversation, we dive into the evolving field of AI Engineering and explore key insights from Chip’s book, including:
• How AI Engineering differs from Machine Learning Engineering
• Why fine-tuning is usually not a tactic you’ll want (or need) to use
• The spectrum of solutions to customer support problems – some not even involving AI!
• The challenges of LLM evals (evaluations)
• Why project-based learning is valuable—but even better when paired with structured learning
• Exciting potential use cases for AI in education and entertainment
• And more!
— Timestamps
(00:00) Intro
(01:31) A quick overview of AI Engineering
(05:00) How Chip ensured her book stays current amidst the rapid advancements in AI
(09:50) A definition of AI Engineering and how it differs from Machine Learning Engineering
(16:30) Simple first steps in building AI applications
(22:53) An explanation of BM25 (retrieval system)
(23:43) The problems associated with fine-tuning
(27:55) Simple customer support solutions for rolling out AI thoughtfully
(33:44) Chip’s thoughts on staying focused on the problem
(35:19) The challenge in evaluating AI systems
(38:18) Use cases in evaluating AI
(41:24) The importance of prioritizing users’ needs and experience
(46:24) Common mistakes made with GenAI
(52:12) A case for systematic problem solving
(53:13) Project-based learning vs. structured learning
(58:32) Why AI is not the end of engineering
(1:03:11) How AI is helping education and the future use cases we might see
(1:07:13) Rapid fire round
— The Pragmatic Engineer deepdives relevant for this episode:
• Applied AI Software Engineering: RAG https://newsletter.pragmaticengineer.com/p/rag
• How do AI software engineering agents work? https://newsletter.pragmaticengineer.com/p/ai-coding-agents
• AI Tooling for Software Engineers in 2024: Reality Check https://newsletter.pragmaticengineer.com/p/ai-tooling-2024
• IDEs with GenAI features that Software Engineers love https://newsletter.pragmaticengineer.com/p/ide-that-software-engineers-love
— See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast
— Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].


Supported by Our Partners • Formation — Level up your career and compensation with Formation. • WorkOS — The modern identity platform for B2B SaaS. • Vanta — Automate compliance and simplify security with Vanta.
— In today’s episode of The Pragmatic Engineer, I’m joined by Jonas Tyroller, one of the developers behind Thronefall, a minimalist indie strategy game that blends tower defense and kingdom-building, now available on Steam. Jonas takes us through the journey of creating Thronefall from start to finish, offering insights into the world of indie game development.
We explore:
• Why indie developers often skip traditional testing and how they find bugs
• The developer workflow using Unity, C# and Blender
• The two types of prototypes game developers build
• Why Jonas spent months building game prototypes in 1-2 days
• How Jonas uses ChatGPT to build games
• Jonas’s tips on making games that sell
• And more!
— Timestamps
(00:00) Intro
(02:07) Building in Unity
(04:05) What the shader tool is used for
(08:44) How a Unity build is structured
(11:01) How game developers write and debug code
(16:21) Jonas’s Unity workflow
(18:13) Importing assets from Blender
(21:06) The size of Thronefall and how it can be so small
(24:04) Jonas’s thoughts on code review
(26:42) Why practices like code review and source control might not be relevant for all contexts
(30:40) How Jonas and Paul ensure the game is fun
(32:25) How Jonas and Paul used beta testing feedback to improve their game
(35:14) The mini-games in Thronefall and why they are so difficult
(38:14) The struggle to find the right level of difficulty for the game
(41:43) Porting to Nintendo Switch
(45:11) The prototypes Jonas and Paul made to get to Thronefall
(46:59) The challenge of finding something you want to build that will sell
(47:20) Jonas’s ideation process and how they figure out what to build
(49:35) How Thronefall evolved from a mini-game prototype
(51:50) How long to spend on prototyping
(52:30) A lesson in failing fast
(53:50) The gameplay prototype vs. the art prototype
(55:53) How Jonas and Paul distribute work
(57:35) Next steps after having the play prototype and art prototype
(59:36) How a launch on Steam works
(1:01:18) Why pathfinding was the most challenging part of building Thronefall
(1:08:40) GenAI tools for building indie games
(1:09:50) How Jonas uses ChatGPT for editing code and as a translator
(1:13:25) The pros and cons of being an indie developer
(1:15:32) Jonas’s advice for software engineers looking to get into indie game development
(1:19:32) What to look for in a game design school
(1:22:46) How luck figures into success and Jonas’s tips for building a game that sells
(1:26:32) Rapid fire round
— The Pragmatic Engineer deepdives relevant for this episode:
• Game development basics https://newsletter.pragmaticengineer.com/p/game-development-basics
• Building a simple game using Unity https://newsletter.pragmaticengineer.com/p/building-a-simple-game
— See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast
— Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].


Brought to you by: • WorkOS — The modern identity platform for B2B SaaS. • Sevalla — Deploy anything from preview environments to Docker images. • Chronosphere — The observability platform built for control.
— Welcome to The Pragmatic Engineer! Today, I’m thrilled to be joined by Grady Booch, a true legend in software development. Grady is the Chief Scientist for Software Engineering at IBM, where he leads groundbreaking research in embodied cognition. He’s the mind behind several object-oriented design concepts, a co-author of the Unified Modeling Language, and a founding member of the Agile Alliance and the Hillside Group. Grady has authored six books, hundreds of articles, and holds prestigious titles as an IBM, ACM, and IEEE Fellow, as well as being a recipient of the Lovelace Medal (an award for outstanding contributions to the advancement of computing).
In this episode, we discuss:
• What it means to be an IBM Fellow
• The evolution of the field of software development
• How UML was created, what its goals were, and why Grady disagrees with the direction of later versions of UML
• Pivotal moments in software development history
• How the software architect role changed over the last 50 years
• Why Grady declined to be the Chief Architect of Microsoft – saying no to Bill Gates!
• Grady’s take on large language models (LLMs)
• Advice to less experienced software engineers
• … and much more!
— Timestamps
(00:00) Intro
(01:56) What it means to be a Fellow at IBM
(03:27) Grady’s work with legacy systems
(09:25) Some examples of domains Grady has contributed to
(11:27) The evolution of the field of software development
(16:23) An overview of the Booch method
(20:00) Software development prior to the Booch method
(22:40) Forming Rational Machines with Paul and Mike
(25:35) Grady’s work with Bjarne Stroustrup
(26:41) ROSE and working with the commercial sector
(30:19) How Grady built UML with Ivar Jacobson and James Rumbaugh
(36:08) An explanation of UML and why it was a mistake to turn it into a programming language
(40:25) The IBM acquisition and why Grady declined Bill Gates’s job offer
(43:38) Why UML is no longer used in industry
(52:04) Grady’s thoughts on formal methods
(53:33) How the software architect role changed over time
(1:01:46) Disruptive changes and major leaps in software development
(1:07:26) Grady’s early work in AI
(1:12:47) Grady’s work with Johnson Space Center
(1:16:41) Grady’s thoughts on LLMs
(1:19:47) Why Grady thinks we are a long way off from sentient AI
(1:25:18) Grady’s advice to less experienced software engineers
(1:27:20) What’s next for Grady
(1:29:39) Rapid fire round
— The Pragmatic Engineer deepdives relevant for this episode:
• The Past and Future of Modern Backend Practices https://newsletter.pragmaticengineer.com/p/the-past-and-future-of-backend-practices
• What Changed in 50 Years of Computing https://newsletter.pragmaticengineer.com/p/what-changed-in-50-years-of-computing
• AI Tooling for Software Engineers: Reality Check https://newsletter.pragmaticengineer.com/p/ai-tooling-2024
— Where to find Grady Booch:
• X: https://x.com/grady_booch
• LinkedIn: https://www.linkedin.com/in/gradybooch
• Website: https://computingthehumanexperience.com
— Where to find Gergely:
• Newsletter: https://www.pragmaticengineer.com/
• YouTube: https://www.youtube.com/c/mrgergelyorosz
• LinkedIn: https://www.linkedin.com/in/gergelyorosz/
• X: https://x.com/GergelyOrosz
— References and Transcripts: See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast
— Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
