talk-data.com

Topic: Software as a Service (SaaS)

Tags: cloud_computing, software_delivery, subscription

232 tagged activities

Activity Trend: peak of 23 activities per quarter, 2020-Q1 to 2026-Q1

Activities

232 activities · Newest first

Today, we’re turning the tables and interviewing our host, Arman Eshraghi, Founding CEO at Qrvey, the only embedded analytics solution purpose-built for SaaS. Arman tells us about:

• What inspired him to start the SaaS Scaled podcast
• How the vision of the podcast has changed since its inception in 2021
• How the fundamental objective remains: unscripted discussions in which experts share their knowledge
• Getting comfortable and having sincere, authentic, organic discussions
• What makes SaaS Scaled stand out among other podcasts

In this episode, I sat down with tech humanist Kate O’Neill to explore how organizations can balance human-centered design in a time when everyone is racing to find ways to leverage AI in their businesses. Kate introduced her “Now–Next Continuum,” a framework that distinguishes digital transformation (catching up) from true innovation (looking ahead). We dug into real-world challenges and tensions of moving fast vs. creating impact with AI, how ethics fits into decision making, and the role of data in making informed decisions. 

Kate stressed the importance of organizations having clear purpose statements and values from the outset, shared the proxy metrics she uses to gauge human-friendliness, and explained how she applies a “harms of action vs. harms of inaction” lens to ethical decisions. Her key point: human-centered approaches to AI and technology creation aren’t slow; they create intentional structures that speed up smart choices while avoiding costly missteps.

Highlights/ Skip to:

• How Kate approaches discussions with executives about moving fast, but also moving in a human-centered way when building out AI solutions (1:03)
• Exploring the lack of technical backgrounds among many CEOs and how this shapes the way organizations make big decisions around technical solutions (3:58)
• FOMO and the “Solution in Search of a Problem” problem in Data (5:18)
• Why ongoing ethnographic research and direct exposure to users are essential for true innovation (11:21)
• Balancing organizational purpose and human-centered tech decisions, and why a defined purpose must precede these decisions (18:09)
• How organizations can define, measure, operationalize, and act on ethical considerations in AI and data products (35:57)
• Risk management vs. strategic optimism: balancing risk reduction with embracing the art of the possible when building AI solutions (43:54)

Quotes from Today’s Episode

"I think the ethics and the governance and all those kinds of discussions [about the implications of digital transformation] are all very big word - kind of jargon-y kinds of discussions - that are easy to think aren't important, but what they all tend to come down to is that alignment between what the business is trying to do and what the person on the other side of the business is trying to do." –Kate O’Neill

" I've often heard the term digital transformation used almost interchangeably with the term innovation. And I think that that's a grave disservice that we do to those two concepts because they're very different. Digital transformation, to me, seems as if it sits much more comfortably on the earlier side of the Now-Next Continuum. So, it's about moving the past to the present… Innovation is about standing in the present and looking to the future and thinking about the art of the possible, like you said. What could we do? What could we extract from this unstructured data (this mess of stuff that’s something new and different) that could actually move us into green space, into territory that no one’s doing yet? And those are two very different sets of questions. And in most organizations, they need to be happening simultaneously." –Kate O’Neill

"The reason I chose human-friendly [as a term] over human-centered partly because I wanted to be very honest about the goal and not fall back into, you know, jargony kinds of language that, you know, you and I and the folks listening probably all understand in a certain way, but the CEOs and the folks that I'm necessarily trying to get reading this book and make their decisions in a different way based on it." –Kate O’Neill

“We love coming up with new names for different things. Like whether something is “cloud,” or whether it’s like, you know, “SaaS,” or all these different terms that we’ve come up with over the years… After spending so long working in tech, it is kind of fun to laugh at it. But it’s nice that there’s a real earnestness [to it]. That’s sort of evergreen [laugh]. People are always trying to genuinely solve human problems, which is what I try to tap into these days, with the work that I do, is really trying to help businesses—business leaders, mostly, but a lot of those are non-tech leaders, and I think that’s where this really sticks is that you get a lot of people who have ascended into CEO or other C-suite roles who don’t come from a technology background.” 

–Kate O’Neill

"My feeling is that if you're not regularly doing ethnographic research and having a lot of exposure time directly to customers, you’re doomed. The people—the makers—have to be exposed to the users and stakeholders.  There has to be ongoing work in this space; it can't just be about defining project requirements and then disappearing. However, I don't see a lot of data teams and AI teams that have non-technical research going on where they're regularly spending time with end users or customers such that they could even imagine what the art of the possible could be.”

–Brian T. O’Neill

Links

• KO Insights: https://www.koinsights.com/
• LinkedIn for Kate O’Neill: https://www.linkedin.com/in/kateoneill/
• Kate O’Neill Book: What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast

Today, we’re joined by Chris Silvestri, Founder at Conversion Alchemy, an agency that combines copywriting, UX design, and psychology to help SaaS and eCommerce companies convert more visitors into customers. We talk about:

• How failure to crystallize strategy results in messaging shortcomings & low conversions
• Tactics to get started with & accelerate messaging content, including use of AI
• Impacts of improving messaging to differentiate your SaaS offering
• Growth stages at which it’s most impactful to fine-tune messaging
• Use of AI models to act as prospects in order to gain insights, including use of real research to construct partially synthetic personas

Brought to You By: •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. • Sonar —  Code quality and code security for ALL code. — In this episode of The Pragmatic Engineer, I sit down with Peter Walker, Head of Insights at Carta, to break down how venture capital and startups themselves are changing. We go deep on the numbers: why fewer companies are getting funded despite record VC investment levels, how hiring has shifted dramatically since 2021, and why solo founders are on the rise even though most VCs still prefer teams. We also unpack the growing emphasis on ARR per FTE, what actually happens in bridge and down rounds, and why the time between fundraising rounds has stretched far beyond the old 18-month cycle. We cover what all this means for engineers: what to ask before joining a startup, how to interpret valuation trends, and what kind of advisor roles startups are actually looking for. If you work at a startup, are considering joining one, or just want a clearer picture of how venture-backed companies operate today, this episode is for you. — Timestamps (00:00) Intro (01:21) How venture capital works and the goal of VC-backed startups (03:10) Venture vs. non-venture backed businesses  (05:59) Why venture-backed companies prioritize growth over profitability (09:46) A look at the current health of venture capital  (13:19) The hiring slowdown at startups (16:00) ARR per FTE: The new metric VCs care about (21:50) Priced seed rounds vs. SAFEs  (24:48) Why some founders are incentivized to raise at high valuations (29:31) What a bridge round is and why they can signal trouble (33:15) Down rounds and how optics can make or break startups  (36:47) Why working at startups offers more ownership and learning (37:47) What the data shows about raising money in the summer (41:45) The length of time it takes to close a VC deal (44:29) How AI is reshaping startup formation, team size, and funding trends (48:11) Why VCs don’t like solo founders (50:06) How employee equity (ESOPs) work (53:50) Why acquisition payouts are often smaller than employees expect (55:06) Deep tech vs. software startups: (57:25) Startup advisors: What they do, how much equity they get (1:02:08) Why time between rounds is increasing and what that means (1:03:57) Why it’s getting harder to get from Seed to Series A  (1:06:47) A case for quitting (sometimes)  (1:11:40) How to evaluate a startup before joining as an engineer (1:13:22) The skills engineers need to thrive in a startup environment (1:16:04) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode:

— See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Today, we’re joined by Erik Huddleston, Chief Executive Officer of Aprimo, the #1 digital asset management and content operations platform. We talk about:

• Automating content creation, plus scaling upstream & downstream processes with brand safety agents
• Framework for CEOs to think through how to best apply AI more generally
• The importance of role clarity: understanding the core activities that impact the financial plan
• How SaaS vendors can survive tech consolidation by being strategically relevant to the budget owner
• The importance of a good personal knowledge management system

Supported by Our Partners •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. •⁠ Sonar — Code quality and code security for ALL code. — Steve Yegge⁠ is known for his writing and “rants”, including the famous “Google Platforms Rant” and the evergreen “Get that job at Google” post. He spent 7 years at Amazon and 13 at Google, as well as some time at Grab before briefly retiring from tech. Now out of retirement, he’s building AI developer tools at Sourcegraph—drawn back by the excitement of working with LLMs. He’s currently writing the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond. In this episode of The Pragmatic Engineer, I sat down with Steve in Seattle to talk about why Google consistently failed at building platforms, why AI coding feels easy but is hard to master, and why a new role, the AI Fixer, is emerging. We also dig into why he’s so energized by today’s AI tools, and how they’re changing the way software gets built. We also discuss:  • The “interview anti-loop” at Google and the problems with interviews • An inside look at how Amazon operated in the early days before microservices   • What Steve liked about working at Grab • Reflecting on the Google platforms rant and why Steve thinks Google is still terrible at building platforms • Why Steve came out of retirement • The emerging role of the “AI Fixer” in engineering teams • How AI-assisted coding is deceptively simple, but extremely difficult to steer • Steve’s advice for using AI coding tools and overcoming common challenges • Predictions about the future of developer productivity • A case for AI creating a real meritocracy  • And much more! — Timestamps (00:00) Intro (04:55) An explanation of the interview anti-loop at Google and the shortcomings of interviews (07:44) Work trials and why entry-level jobs aren’t posted for big tech companies (09:50) An overview of the difficult process of landing a job as a software engineer (15:48) Steve’s thoughts on Grab and why he loved it (20:22) Insights from the Google platforms rant that was picked up by TechCrunch (27:44) The impact of the Google platforms rant (29:40) What Steve discovered about print ads not working for Google  (31:48) What went wrong with Google+ and Wave (35:04) How Amazon has changed and what Google is doing wrong (42:50) Why Steve came out of retirement  (45:16) Insights from “the death of the junior developer” and the impact of AI (53:20) The new role Steve predicts will emerge  (54:52) Changing business cycles (56:08) Steve’s new book about vibe coding and Gergely’s experience  (59:24) Reasons people struggle with AI tools (1:02:36) What will developer productivity look like in the future (1:05:10) The cost of using coding agents  (1:07:08) Steve’s advice for vibe coding (1:09:42) How Steve used AI tools to work on his game Wyvern  (1:15:00) Why Steve thinks there will actually be more jobs for developers  (1:18:29) A comparison between game engines and AI tools (1:21:13) Why you need to learn AI now (1:30:08) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: •⁠ The full circle of developer productivity with Steve Yegge •⁠ Inside Amazon’s engineering culture •⁠ Vibe coding as a software engineer •⁠ AI engineering in the real world •⁠ The AI Engineering stack •⁠ Inside Sourcegraph’s engineering culture— See the transcript and other references from the episode at 
⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Todd Olson joins me to talk about making analytics worth paying for and relevant in the age of AI. As CEO of Pendo, an analytics SaaS company, Todd shares how the company evolved to support a wider audience by simplifying dashboards, removing user roadblocks, and leveraging AI to both generate and explain insights. We also talked about the roles of product management at Pendo. Todd views AI product management as a natural evolution for adaptable teams and explains how he thinks about hiring product roles in 2025. He also shares how he measures successful user adoption of his product, focusing on “time to value” and “stickiness” over vanity metrics like time spent.

Highlights/ Skip to:

• How Todd has addressed analytics apathy over the past decade at Pendo (1:17)
• Getting back to basics and not barraging people with more data and power (4:02)
• Pendo’s strategy for keeping the product experience simple without abandoning power users (6:44)
• Whether Todd is considering using an LLM (prompt-based) answer-driven experience with Pendo's UI (8:51)
• What Pendo looks for when hiring product managers right now, and why (14:58)
• How Pendo evaluates AI product managers, specifically (19:14)
• How Todd Olson views AI product management compared to traditional software product management (21:56)
• Todd’s concerns about the probabilistic nature of AI-generated answers in the product UX (27:51)
• What KPIs Todd uses to know whether Pendo is doing enough to reach its goals (32:49)
• Why being able to tell what answers are best will become more important as choice increases (40:05)

Quotes from Today’s Episode

“Let’s go back to classic Geoffrey Moore Crossing the Chasm, you’re selling to early adopters. And what you’re doing is you’re relying on the early adopters’ skill set and figuring out how to take this data and connect it to business problems. So, in the early days, we didn’t do anything because the market we were selling to was very, very savvy; they’re hungry people, they just like new things. They’re getting data, they’re feeling really, really smart, everything’s working great. As you get bigger and bigger and bigger, you start to try to sell to a bigger TAM, a bigger audience, you start trying to talk to these early majorities, which are, they’re not early adopters, they’re more technology laggards in some degree, and they don’t understand how to use data to inform their job. They’ve never used data to inform their job. There, we’ve had to do a lot more work.” –Todd (2:04 - 2:58)

“I think AI is amazing, and I don’t want to say AI is overhyped because AI in general is—yeah, it’s the revolution that we all have to pay attention to. Do I think that the skills necessary to be an AI product manager are so distinct that you need to hire differently? No, I don’t. That’s not what I’m seeing. If you have a really curious product manager who’s going all in, I think you’re going to be okay. Some of the most AI-forward work happening at Pendo is not just product management. Our design team is going crazy. And I think one of the things that we’re seeing is a blend between design and product, that they’re always adjacent and connected; there’s more sort of overlappiness now.” –Todd (22:41 - 23:28)

“I think about things like stickiness, which may not be an aggregate time, but how often are people coming back and checking in? And if you had this companion or this agent that you just could not live without, and it caused you to come into the product almost every day just to check in, but it’s a fast check-in, like, a five-minute check-in, a ten-minute check-in, that’s pretty darn sticky. That’s a good metric. So, I like stickiness as a metric because it’s measuring [things like], “Are you thinking about this product a lot?” And if you’re thinking about it a lot, and like, you can’t kind of live without it, you’re going to go to it a lot, even if it’s only a few minutes a day. Social media is like that. Thankfully I’m not addicted to TikTok or Instagram or anything like that, but I probably check it nearly every day. That’s a pretty good metric. It gets part of my process of any products that you’re checking every day is pretty darn good. So yeah, but I think we need to reframe the conversation not just total time. Like, how are we measuring outcomes and value, and I think that’s what’s ultimately going to win here.” –Todd (39:57)
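Todd’s stickiness framing, return frequency rather than total time in product, can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative and is not Pendo’s methodology: it assumes a hypothetical (user, date) event log and computes the average fraction of days each user came back within a 30-day window.

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: (user_id, day the user was active).
events = [
    ("ana", date(2025, 6, 2)), ("ana", date(2025, 6, 3)), ("ana", date(2025, 6, 5)),
    ("bo",  date(2025, 6, 2)), ("bo",  date(2025, 6, 20)),
]

def stickiness(events, window_days=30):
    """Average fraction of days in the window that each user came back.

    A daily five-minute check-in scores high here even though total
    time spent stays low, which is the distinction Todd draws.
    """
    active_days = defaultdict(set)
    for user, day in events:
        active_days[user].add(day)
    per_user = [len(days) / window_days for days in active_days.values()]
    return sum(per_user) / len(per_user) if per_user else 0.0

print(f"Average stickiness: {stickiness(events):.1%}")  # 8.3% for this toy log
```

The exact window and activity definition are tunable; the point is that the metric rewards frequency of return visits rather than aggregate minutes spent.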

Links

• LinkedIn: https://www.linkedin.com/in/toddaolson/
• X: https://x.com/tolson
• Email: [email protected]

Today, we’re joined by Marne Martin, the CEO of Emburse, whose innovative travel and expense solutions power forward-thinking organizations. We talk about:

• Building fast-moving & scalable businesses that can last
• How to finance and grow profitable companies to reach an exit
• The challenges of finding a competitive edge as GenAI accelerates innovation
• Testing AI monetization alongside conventional SaaS monetization

Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Statsig — The unified platform for flags, analytics, experiments, and more. • Sonar — Code quality and code security for ALL code. — What happens when a company goes all in on AI? At Shopify, engineers are expected to utilize AI tools, and they’ve been doing so for longer than most. Thanks to early access to models from GitHub Copilot, OpenAI, and Anthropic, the company has had a head start in figuring out what works. In this live episode from LDX3 in London, I spoke with Farhan Thawar, VP of Engineering, about how Shopify is building with AI across the entire stack. We cover the company’s internal LLM proxy, its policy of unlimited token usage, and how interns help push the boundaries of what’s possible. In this episode, we cover: • How Shopify works closely with AI labs • The story behind Shopify’s recent Code Red • How non-engineering teams are using Cursor for vibecoding • Tobi Lütke’s viral memo and Shopify’s expectations around AI • A look inside Shopify’s LLM proxy—used for privacy, token tracking, and more • Why Shopify places no limit on AI token spending • Why AI-first isn’t about reducing headcount—and why Shopify is hiring 1,000 interns • How Shopify’s engineering department operates and what’s changed since adopting AI tooling • Farhan’s advice for integrating AI into your workflow • And much more! — Timestamps (00:00) Intro (02:07) Shopify’s philosophy: “hire smart people and pair with them on problems” (06:22) How Shopify works with top AI labs (08:50) The recent Code Red at Shopify (10:47) How Shopify became early users of GitHub Copilot and their pivot to trying multiple tools (12:49) The surprising ways non-engineering teams at Shopify are using Cursor (14:53) Why you have to understand code to submit a PR at Shopify (16:42) AI tools' impact on SaaS (19:50) Tobi Lütke’s AI memo (21:46) Shopify’s LLM proxy and how they protect their privacy (23:00) How Shopify utilizes MCPs (26:59) Why AI tools aren’t the place to pinch pennies (30:02) Farhan’s projects and favorite AI tools (32:50) Why AI-first isn’t about freezing headcount and the value of hiring interns (36:20) How Shopify’s engineering department operates, including internal tools (40:31) Why Shopify added coding interviews for director-level and above hires (43:40) What has changed since Shopify added AI tooling (44:40) Farhan’s advice for implementing AI tools — The Pragmatic Engineer deepdives relevant for this episode: • How Shopify built its Live Globe for Black Friday • Inside Shopify's leveling split • Real-world engineering challenges: building Cursor • How Anthropic built Artifacts — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Today, we’re joined by Todd Olson, co-founder and CEO of Pendo, the world’s first software experience management platform. We talk about:

• Offloading work from employees to digital workers
• When most people will opt to chat with an AI agent over a human
• The need for SaaS apps to transform themselves into agentic apps
• Advice for serial SaaS entrepreneurs, including a big cautionary tale for startups
• AI-generated and AI-maintained code and the ease of prototyping

Today, we’re joined by Ted Elliott, Chief Executive Officer of Copado, the leader in AI-powered DevOps for business applications. We talk about:

• Impacts of AI agents over the next 5 years
• Ted’s AI-generated Dr. Seuss book based on walks with his dog
• The power of small data with AI, despite many believing more data is the answer
• The challenge of being disciplined to enter only good data
• Gaming out SaaS company ideas with AI, such as a virtual venture capitalist

Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Modal — The cloud platform for building AI applications. • Cortex — Your Portal to Engineering Excellence. — Kubernetes is the second-largest open-source project in the world. What does it actually do—and why is it so widely adopted? In this episode of The Pragmatic Engineer, I’m joined by Kat Cosgrove, who has led several Kubernetes releases. Kat has been contributing to Kubernetes for several years, and originally got involved with the project through K3s (the lightweight Kubernetes distribution). In our conversation, we discuss how Kubernetes is structured, how it scales, and how the project is managed to avoid contributor burnout. We also go deep into: • An overview of what Kubernetes is used for • A breakdown of Kubernetes architecture: components, pods, and kubelets • Why Google built Borg, and how it evolved into Kubernetes • The benefits of large-scale open source projects—for companies, contributors, and the broader ecosystem • The size and complexity of Kubernetes—and how it’s managed • How the project protects contributors with anti-burnout policies • The size and structure of the release team • What KEPs are and how they shape Kubernetes features • Kat’s views on GenAI, and why Kubernetes blocks using AI, at least for documentation • Where Kat would like to see AI tools improve developer workflows • Getting started as a contributor to Kubernetes—and the career and networking benefits that come with it • And much more! — Timestamps (00:00) Intro (02:02) An overview of Kubernetes and who it’s for (04:27) A quick glimpse at the architecture: Kubernetes components, pods, and kubelets (07:00) Containers vs. virtual machines (10:02) The origins of Kubernetes (12:30) Why Google built Borg, and why they made it an open source project (15:51) The benefits of open source projects (17:25) The size of Kubernetes (20:55) Cluster management solutions, including different Kubernetes services (21:48) Why people contribute to Kubernetes (25:47) The anti-burnout policies Kubernetes has in place (29:07) Why Kubernetes is so popular (33:34) Why documentation is a good place to get started contributing to an open-source project (35:15) The structure of the Kubernetes release team (40:55) How responsibilities shift as engineers grow into senior positions (44:37) Using a KEP to propose a new feature—and what’s next (48:20) Feature flags in Kubernetes (52:04) Why Kat thinks most GenAI tools are scams—and why Kubernetes blocks their use (55:04) The use cases Kat would like to have AI tools for (58:20) When to use Kubernetes (1:01:25) Getting started with Kubernetes (1:04:24) How contributing to an open source project is a good way to build your network (1:05:51) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Backstage: an open source developer portal • How Linux is built with Greg Kroah-Hartman • Software engineers leading projects • What TPMs do and what software engineers can learn from them • Engineering career paths at Big Tech and scaleups — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Today, we’re joined by Tom Lavery, CEO and Founder of Jiminny, a conversation intelligence platform that captures and analyzes your critical go-to-market insights with AI. We talk about:

• Getting value from unstructured data
• How quickly SaaS subscription businesses should push to be profitable
• Trade-offs between product-led and sales-led growth
• Racing to be the market leader
• Dangers of focusing strictly on the short-term

Supported by Our Partners •⁠ Modal⁠ — The cloud platform for building AI applications •⁠ CodeRabbit⁠⁠ — Cut code review time and bugs in half. Use the code PRAGMATIC to get one month free. — What happens when LLMs meet real-world codebases? In this episode of The Pragmatic Engineer,  I am joined by Varun Mohan, CEO and Co-Founder of Windsurf. Varun talks me through the technical challenges of building an AI-native IDE (Windsurf) —and how these tools are changing the way software gets built.  We discuss:  • What building self-driving cars taught the Windsurf team about evaluating LLMs • How LLMs for text are missing capabilities for coding like “fill in the middle” • How Windsurf optimizes for latency • Windsurf’s culture of taking bets and learning from failure • Breakthroughs that led to Cascade (agentic capabilities) • Why the Windsurf teams build their LLMs • How non-dev employees at Windsurf build custom SaaS apps – with Windsurf! • How Windsurf empowers engineers to focus on more interesting problems • The skills that will remain valuable as AI takes over more of the codebase • And much more! — Timestamps (00:00) Intro (01:37) How Windsurf tests new models (08:25) Windsurf’s origin story  (13:03) The current size and scope of Windsurf (16:04) The missing capabilities Windsurf uncovered in LLMs when used for coding (20:40) Windsurf’s work with fine-tuning inside companies  (24:00) Challenges developers face with Windsurf and similar tools as codebases scale (27:06) Windsurf’s stack and an explanation of FedRAMP compliance (29:22) How Windsurf protects latency and the problems with local data that remain unsolved (33:40) Windsurf’s processes for indexing code  (37:50) How Windsurf manages data  (40:00) The pros and cons of embedding databases  (42:15) “The split brain situation”—how Windsurf balances present and long-term  (44:10) Why Windsurf embraces failure and the learnings that come from it (46:30) Breakthroughs that fueled Cascade (48:43) The insider’s developer mode that allows Windsurf to dogfood easily  (50:00) Windsurf’s non-developer power user who routinely builds apps in Windsurf (52:40) Which SaaS products won’t likely be replaced (56:20) How engineering processes have changed at Windsurf  (1:00:01) The fatigue that goes along with being a software engineer, and how AI tools can help (1:02:58) Why Windsurf chose to fork VS Code and built a plugin for JetBrains  (1:07:15) Windsurf’s language server  (1:08:30) The current use of MCP and its shortcomings  (1:12:50) How coding used to work in C#, and how MCP may evolve  (1:14:05) Varun’s thoughts on vibe coding and the problems non-developers encounter (1:19:10) The types of engineers who will remain in demand  (1:21:10) How AI will impact the future of software development jobs and the software industry (1:24:52) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • IDEs with GenAI features that Software Engineers love • AI tooling for Software Engineers in 2024: reality check • How AI-assisted coding will change software engineering: hard truths • AI tools for software engineers, but without the hype — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Supported by Our Partners •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ The Software Engineer’s Guidebook: Written by me (Gergely) – now out in audio form as well. — How do you get product and engineering to truly operate as one team? Today, I’m joined by Ebi Atawodi, Director of Product Management at YouTube Studio, and a former product leader at Netflix and Uber. Ebi was the first PM I partnered with after stepping into engineering management at Uber, and we both learned a lot together. We share lessons from our time at Uber and discuss how strong product-engineering partnerships drive better outcomes, grow teams, foster cultures of ownership, and unlock agency, innovation, and trust. In this episode, we cover: • Why you need to earn a new team's trust before trying to drive change • How practices like the "business scorecard" and “State of the Union” updates helped communicate business goals and impact to teams at Uber • How understanding business impact leads to more ideas and collaboration • A case for getting to know your team as people, not just employees • Why junior employees should have a conversation with a recruiter every six months • Ebi’s approach to solving small problems with the bet that they’ll unlock larger, more impactful solutions • Why investing time in trust and connection isn't at odds with efficiency • The qualities of the best engineers—and why they’re the same traits that make people successful in any role • The three-pronged definition of product: business impact, feasibility, and customer experience • Why you should treat your career as a project • And more! — Timestamps (00:00) Intro (02:19) The product review where Gergely first met Ebi  (05:45) Ebi’s learning about earning trust before being direct (08:01) The value of tying everything to business impact (11:53) What meetings looked like at Uber before Ebi joined (12:35) How Ebi’s influence created more of a start-up environment  (15:12) An overview of “State of the Union”  (18:06) How Ebi helped the cash team secure headcount (24:10) How a dinner out helped Ebi and Gergely work better together (28:11) Why good leaders help their employees reach their full potential (30:24) Product-minded engineers and the value of trust  (33:04) Ebi’s approach to passion in work: loving the problem, the work, and the people (36:00) How Gergely and Ebi secretly bootstrapped a project then asked for headcount (36:55) How a real problem led to a novel solution that also led to a policy change (40:30) Ebi’s approach to solving problems and tying them to a bigger value unlock  (43:58) How Ebi developed her playbooks for vision setting, fundraising, and more (45:59) Why Gergely prioritized meeting people on his trips to San Francisco  (46:50) A case for making in-person interactions more about connection (50:44) The genius-jerk archetype vs. brilliant people who struggle with social skills  (52:48) The traits of the best engineers—and why they apply to other roles, too (1:03:27) Why product leaders need to love the product and the business  (1:06:54) The value of a good PM (1:08:05) Sponsorship vs. mentorship and treating your career like a project (1:11:50) A case for playing the long game — The Pragmatic Engineer deepdives relevant for this episode: • The product-minded software engineer • Working with Product Managers as an Engineering Manager or Engineer • Working with Product Managers: advice from PMs • What is Growth Engineering? 
— See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. •⁠ Modal⁠ — The cloud platform for building AI applications • Vanta — Automate compliance and simplify security with Vanta. — What is it like to work at Amazon as a software engineer? Dave Anderson spent over 12 years at Amazon working closely with engineers on his teams: starting as an Engineering Manager (or, SDM in Amazon lingo) and eventually becoming a Director of Engineering. In this episode, he shares a candid look into Amazon’s engineering culture—from how promotions work to why teams often run like startups. We get into the hiring process, the role of bar raisers, the pros and cons of extreme frugality, and what it takes to succeed inside one of the world’s most operationally intense companies.  We also look at how engineering actually works day to day at Amazon—from the tools teams choose to the way they organize and deliver work.  We also discuss: • The levels at Amazon, from SDE L4 to Distinguished Engineer and VP • Why engineering managers at Amazon need to write well • The “Bar Raiser” role in Amazon interview loops  • Why Amazon doesn’t care about what programming language you use in interviews • Amazon’s oncall process • The pros and cons of Amazon’s extreme frugality  • What to do if you're getting negative performance feedback • The importance of having a strong relationship with your manager • The surprising freedom Amazon teams have to choose their own stack, tools, and ways of working – and how a team chose to use Lisp (!) • Why startups love hiring former Amazon engineers • Dave’s approach to financial independence and early retirement • And more! — Timestamps (00:00) Intro (02:08) An overview of Amazon’s levels for devs and engineering managers (07:04) How promotions work for developers at Amazon, and the scope of work at each level (12:29) Why managers feel pressure to grow their teams (13:36) A step-by-step, behind-the-scenes glimpse of the hiring process  (23:40) The wide variety of tools used at Amazon (26:27) How oncall works at Amazon (32:06) The general approach to handling outages (severity 1-5) (34:40) A story from Uber illustrating the Amazon outage mindset (37:30) How VPs assist with outages (41:38) The culture of frugality at Amazon   (47:27) Amazon’s URA target—and why it’s mostly not a big deal  (53:37) How managers handle the ‘least effective’ employees (58:58) Why other companies are also cutting lower performers (59:55) Dave’s advice for engineers struggling with performance feedback  (1:04:20) Why good managers are expected to bring talent with them to a new org (1:06:21) Why startups love former Amazon engineers (1:16:09) How Dave planned for an early retirement  (1:18:10) How a LinkedIn post turned into Scarlet Ink  — The Pragmatic Engineer deepdives relevant for this episode: • Inside Amazon’s engineering culture • A day in the life of a senior manager at Amazon • Amazon’s Operational Plan process with OP1 and OP2 — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

The explosion of content in market research has created a paradox: more information but less time to consume it. Companies are now turning to AI chatbots to solve this problem, transforming how professionals interact with research data. Instead of expecting teams to read everything, these tools allow users to extract precisely what they need when they need it. This approach is proving not just more efficient but actually increases engagement with underlying content. How might your organization benefit from more targeted access to insights? What valuable information might be buried in your existing research that AI could help surface? With over 30 years of experience in marketing, media, and technology, Dan Coates is the President and co-founder of YPulse, the leading authority on Gen Z and Millennials. YPulse helps brands like Apple, Netflix, and Xbox understand and communicate with consumers aged 13–39, using data and insights from over 400,000 interviews conducted annually across seven countries. Prior to founding YPulse, Dan co-founded SurveyU, an online community and insights platform targeting youth, which merged with YPulse in 2009. He also led the introduction of Globalpark’s SaaS platform into the North American market, until its acquisition by QuestBack in 2011. In addition, Dan has held senior roles at Polimetrix, SPSS, PlanetFeedback, and Burke, where he developed cutting-edge practices and products for online marketing insights and transitioned several ventures from early stages to high-value acquisitions. In the episode, Richie and Dan explore the creation of an AI chatbot for market research, addressing customer engagement challenges, the integration of AI in content consumption, the impact of AI on business strategies, the future of AI in market research, and much more.

Links Mentioned in the Show:

• YPulse
• Connect with Dan
• Haystack by Deepset
• Unmanaged: Master the Magic of Creating Empowered and Happy Organizations by Jack Skeels
• Skill Track: AI Fundamentals
• Related Episode: Can You Use AI-Driven Pricing Ethically? with Jose Mendoza, Academic Director & Clinical Associate Professor at NYU
• Rewatch sessions from RADAR: Skills Edition
• New to DataCamp? Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for business
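The retrieval idea Dan describes, pulling only the passages a user needs instead of asking them to read every report, can be sketched in a few lines. This is a generic illustration, not YPulse’s actual system (the show's links point to Haystack by Deepset as one real-world toolkit); the sample documents and the TF-IDF scoring are assumptions made for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in snippets for a library of market-research reports.
documents = [
    "Gen Z consumers discover new brands primarily through short-form video.",
    "Millennial parents report subscription fatigue across streaming services.",
    "Teens prefer private messaging apps over public social feeds.",
]

def retrieve(query, docs, top_k=2):
    """Rank documents by TF-IDF cosine similarity to the user's question."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(docs)           # index the corpus once
    query_vec = vectorizer.transform([query])             # vectorize the question the same way
    scores = cosine_similarity(query_vec, doc_matrix)[0]  # one relevance score per document
    return sorted(zip(scores, docs), reverse=True)[:top_k]

for score, doc in retrieve("How does Gen Z find new brands?", documents):
    print(f"{score:.2f}  {doc}")
```

In a research chatbot, a ranking step like this typically sits in front of an LLM that summarizes the retrieved passages and points the researcher back to the source reports.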

Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Vanta — Automate compliance and simplify security with Vanta. — Linux is the most widespread operating system, globally – but how is it built? Few people are better to answer this than Greg Kroah-Hartman: a Linux kernel maintainer for 25 years, and one of the 3 Linux Kernel Foundation Fellows (the other two are Linus Torvalds and Shuah Khan). Greg manages the Linux kernel’s stable releases, and is a maintainer of multiple kernel subsystems. We cover the inner workings of Linux kernel development, exploring everything from how changes get implemented to why its community-driven approach produces such reliable software. Greg shares insights about the kernel's unique trust model and makes a case for why engineers should contribute to open-source projects. We go into: • How widespread is Linux? • What is the Linux kernel responsible for – and why is it a monolith? • How does a kernel change get merged? A walkthrough • The 9-week development cycle for the Linux kernel • Testing the Linux kernel • Why is Linux so widespread? • The career benefits of open-source contribution • And much more! — Timestamps (00:00) Intro (02:23) How widespread is Linux? (06:00) The difference in complexity in different devices powered by Linux  (09:20) What is the Linux kernel? (14:00) Why trust is so important with the Linux kernel development (16:02) A walk-through of a kernel change (23:20) How Linux kernel development cycles work (29:55) The testing process at Kernel and Kernel CI  (31:55) A case for the open source development process (35:44) Linux kernel branches: Stable vs. development (38:32) Challenges of maintaining older Linux code  (40:30) How Linux handles bug fixes (44:40) The range of work Linux kernel engineers do  (48:33) Greg’s review process and its parallels with Uber’s RFC process (51:48) Linux kernel within companies like IBM (53:52) Why Linux is so widespread  (56:50) How Linux Kernel Institute runs without product managers  (1:02:01) The pros and cons of using Rust in Linux kernel  (1:09:55) How LLMs are utilized in bug fixes and coding in Linux  (1:12:13) The value of contributing to the Linux kernel or any open-source project  (1:16:40) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: What TPMs do and what software engineers can learn from them The past and future of modern backend practices Backstage: an open-source developer portal — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Today, we’re joined by Rahul Pangam, Co-Founder & CEO of RapidCanvas, a leader in delivering transformative AI-powered solutions that empower businesses to achieve faster and more impactful outcomes. We talk about:

• How to make GenAI more reliable: understanding your business context & knowing why something is happening
• Moving from planning based on the human gut to an AI-based setup
• The coming paradigm shift from SaaS to service as a software
• Interacting with apps in plain language vs. remembering which of 56 dashboards to view

The integration of speech AI into everyday business operations is reshaping how we communicate and process information. With applications ranging from customer service to quality control, understanding the nuances of speech AI is crucial for professionals. How do you tackle the complexities of different languages and accents? What are the best practices for implementing speech AI in your organization? Explore the transformative power of speech AI and learn how to overcome the challenges it presents in your professional landscape. Alon Peleg serves as the Chief Operating Officer (COO) at aiOla, a position he assumed in May 2024. With over two decades of leadership experience at renowned companies like Wix, Cisco, and Intel, he is widely recognized in the tech industry for his expertise, dynamic leadership, and unwavering dedication. At aiOla, Alon plays a key role in driving innovation and strategic growth, contributing to the company’s mission of developing cutting-edge solutions in the tech space. His appointment is regarded as a pivotal step in aiOla’s expansion and continued success. Gill Hetz is the VP of AI at aiOla, where he leverages his expertise in data integration and modeling. Gill had been active in the oil and gas industry since 2009, holding roles in engineering, research, and data science. From 2018 to 2021, Gill held key positions at QRI, including Project Manager and SaaS Product Manager. In the episode, Richie, Alon, and Gill explore the intricacies of speech AI, its components like ASR, NLU, and TTS, real-world applications in industries such as retail and pharmaceuticals, challenges like accents and background noise, the future of voice interfaces in technology, and much more.

Links Mentioned in the Show:

• aiOla
• Connect with Alon and Gill
• Course: Spoken Language Processing in Python
• Related Episode: Building Multi-Modal AI Applications with Russ d'Sa, CEO & Co-founder of LiveKit
• Sign up to attend RADAR: Skills Edition
• New to DataCamp? Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for business