talk-data.com

Topic: SaaS (Software as a Service)

Tags: cloud_computing, software_delivery, subscription


Activity Trend: peak 23 activities/quarter (2020-Q1 to 2026-Q1)

Activities

310 activities · Newest first

Supported by Our Partners •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. •⁠ Sonar — Code quality and code security for ALL code. — Steve Yegge⁠ is known for his writing and “rants”, including the famous “Google Platforms Rant” and the evergreen “Get that job at Google” post. He spent 7 years at Amazon and 13 at Google, as well as some time at Grab before briefly retiring from tech. Now out of retirement, he’s building AI developer tools at Sourcegraph—drawn back by the excitement of working with LLMs. He’s currently writing the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond. In this episode of The Pragmatic Engineer, I sat down with Steve in Seattle to talk about why Google consistently failed at building platforms, why AI coding feels easy but is hard to master, and why a new role, the AI Fixer, is emerging. We also dig into why he’s so energized by today’s AI tools, and how they’re changing the way software gets built. We also discuss:  • The “interview anti-loop” at Google and the problems with interviews • An inside look at how Amazon operated in the early days before microservices   • What Steve liked about working at Grab • Reflecting on the Google platforms rant and why Steve thinks Google is still terrible at building platforms • Why Steve came out of retirement • The emerging role of the “AI Fixer” in engineering teams • How AI-assisted coding is deceptively simple, but extremely difficult to steer • Steve’s advice for using AI coding tools and overcoming common challenges • Predictions about the future of developer productivity • A case for AI creating a real meritocracy  • And much more! 
— Timestamps (00:00) Intro (04:55) An explanation of the interview anti-loop at Google and the shortcomings of interviews (07:44) Work trials and why entry-level jobs aren’t posted for big tech companies (09:50) An overview of the difficult process of landing a job as a software engineer (15:48) Steve’s thoughts on Grab and why he loved it (20:22) Insights from the Google platforms rant that was picked up by TechCrunch (27:44) The impact of the Google platforms rant (29:40) What Steve discovered about print ads not working for Google  (31:48) What went wrong with Google+ and Wave (35:04) How Amazon has changed and what Google is doing wrong (42:50) Why Steve came out of retirement  (45:16) Insights from “the death of the junior developer” and the impact of AI (53:20) The new role Steve predicts will emerge  (54:52) Changing business cycles (56:08) Steve’s new book about vibe coding and Gergely’s experience  (59:24) Reasons people struggle with AI tools (1:02:36) What will developer productivity look like in the future (1:05:10) The cost of using coding agents  (1:07:08) Steve’s advice for vibe coding (1:09:42) How Steve used AI tools to work on his game Wyvern  (1:15:00) Why Steve thinks there will actually be more jobs for developers  (1:18:29) A comparison between game engines and AI tools (1:21:13) Why you need to learn AI now (1:30:08) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: •⁠ The full circle of developer productivity with Steve Yegge •⁠ Inside Amazon’s engineering culture •⁠ Vibe coding as a software engineer •⁠ AI engineering in the real world •⁠ The AI Engineering stack •⁠ Inside Sourcegraph’s engineering culture— See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Todd Olson joins me to talk about making analytics worth paying for and relevant in the age of AI. The CEO of Pendo, an analytics SaaS company, Todd shares how the company evolved to support a wider audience by simplifying dashboards, removing user roadblocks, and leveraging AI to both generate and explain insights. We also talk about the roles of product management at Pendo. Todd views AI product management as a natural evolution for adaptable teams and explains how he thinks about hiring product roles in 2025. Todd also shares how he thinks about successful user adoption of his product around “time to value” and “stickiness” over vanity metrics like time spent.

Highlights / Skip to:

How Todd has addressed analytics apathy over the past decade at Pendo (1:17) Getting back to basics and not barraging people with more data and power (4:02) Pendo’s strategy for keeping the product experience simple without abandoning power users (6:44) Whether Todd is considering using an LLM (prompt-based) answer-driven experience with Pendo's UI (8:51) What Pendo looks for when hiring product managers right now, and why (14:58) How Pendo evaluates AI product managers, specifically (19:14) How Todd Olson views AI product management compared to traditional software product management (21:56) Todd’s concerns about the probabilistic nature of AI-generated answers in the product UX (27:51) What KPIs Todd uses to know whether Pendo is doing enough to reach its goals (32:49)   Why being able to tell what answers are best will become more important as choice increases (40:05)

Quotes from Today’s Episode

“Let’s go back to classic Geoffrey Moore Crossing the Chasm, you’re selling to early adopters. And what you’re doing is you’re relying on the early adopters’ skill set and figuring out how to take this data and connect it to business problems. So, in the early days, we didn’t do anything because the market we were selling to was very, very savvy; they’re hungry people, they just like new things. They’re getting data, they’re feeling really, really smart, everything’s working great. As you get bigger and bigger and bigger, you start to try to sell to a bigger TAM, a bigger audience, you start trying to talk to the these early majorities, which are, they’re not early adopters, they’re more technology laggards in some degree, and they don’t understand how to use data to inform their job. They’ve never used data to inform their job. There, we’ve had to do a lot more work.” Todd (2:04 - 2:58) “I think AI is amazing, and I don’t want to say AI is overhyped because AI in general is—yeah, it’s the revolution that we all have to pay attention to. Do I think that the skills necessary to be an AI product manager are so distinct that you need to hire differently? No, I don’t. That’s not what I’m seeing. If you have a really curious product manager who’s going all in, I think you’re going to be okay. Some of the most AI-forward work happening at Pendo is not just product management. Our design team is going crazy. And I think one of the things that we’re seeing is a blend between design and product, that they’re always adjacent and connected; there’s more sort of overlappiness now.” Todd (22:41 - 23:28) “I think about things like stickiness, which may not be an aggregate time, but how often are people coming back and checking in? 
And if you had this companion or this agent that you just could not live without, and it caused you to come into the product almost every day just to check in, but it’s a fast check-in, like, a five-minute check-in, a ten-minute check-in, that’s pretty darn sticky. That’s a good metric. So, I like stickiness as a metric because it’s measuring [things like], “Are you thinking about this product a lot?” And if you’re thinking about it a lot, and like, you can’t kind of live without it, you’re going to go to it a lot, even if it’s only a few minutes a day. Social media is like that. Thankfully I’m not addicted to TikTok or Instagram or anything like that, but I probably check it nearly every day. That’s a pretty good metric. It gets part of my process of any products that you’re checking every day is pretty darn good. So yeah, but I think we need to reframe the conversation not just total time. Like, how are we measuring outcomes and value, and I think that’s what’s ultimately going to win here.” Todd (39:57)
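Todd’s “stickiness” framing, frequent short check-ins rather than total time spent, is commonly operationalized as a DAU/MAU ratio: average daily active users divided by the users active at any point in the window. A minimal sketch of that calculation (the function and sample data are illustrative, not Pendo’s actual metric):

```python
from datetime import date

def stickiness(daily_active: dict[date, set[str]]) -> float:
    """DAU/MAU-style stickiness: average daily active users divided by
    the number of distinct users active at any point in the window."""
    if not daily_active:
        return 0.0
    avg_dau = sum(len(users) for users in daily_active.values()) / len(daily_active)
    monthly_active = set().union(*daily_active.values())
    return avg_dau / len(monthly_active) if monthly_active else 0.0

# Example: three days of brief but repeated check-ins
activity = {
    date(2025, 1, 1): {"a", "b"},
    date(2025, 1, 2): {"a", "c"},
    date(2025, 1, 3): {"a", "b", "c"},
}
print(round(stickiness(activity), 2))  # 0.78
```

A user who checks in five minutes every day scores high here even though their total time is low, which is exactly the reframing Todd argues for.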

Links

LinkedIn: https://www.linkedin.com/in/toddaolson/
X: https://x.com/tolson
Email: [email protected]

Today, we’re joined by Marne Martin, the CEO of Emburse, whose innovative travel and expense solutions power forward-thinking organizations. We talk about:

• Building fast-moving & scalable businesses that can last
• How to finance and grow profitable companies to reach an exit
• The challenges of finding a competitive edge as GenAI accelerates innovation
• Testing monetizing AI alongside conventional SaaS monetization

Supported by Our Partners •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. • Sonar —  Code quality and code security for ALL code.  — What happens when a company goes all in on AI? At Shopify, engineers are expected to utilize AI tools, and they’ve been doing so for longer than most. Thanks to early access to models from GitHub Copilot, OpenAI, and Anthropic, the company has had a head start in figuring out what works. In this live episode from LDX3 in London, I spoke with Farhan Thawar, VP of Engineering, about how Shopify is building with AI across the entire stack. We cover the company’s internal LLM proxy, its policy of unlimited token usage, and how interns help push the boundaries of what’s possible. In this episode, we cover: • How Shopify works closely with AI labs • The story behind Shopify’s recent Code Red • How non-engineering teams are using Cursor for vibecoding • Tobi Lütke’s viral memo and Shopify’s expectations around AI • A look inside Shopify’s LLM proxy—used for privacy, token tracking, and more • Why Shopify places no limit on AI token spending  • Why AI-first isn’t about reducing headcount—and why Shopify is hiring 1,000 interns • How Shopify’s engineering department operates and what’s changed since adopting AI tooling • Farhan’s advice for integrating AI into your workflow • And much more! 
— Timestamps (00:00) Intro (02:07) Shopify’s philosophy: “hire smart people and pair with them on problems” (06:22) How Shopify works with top AI labs (08:50) The recent Code Red at Shopify (10:47) How Shopify became early users of GitHub Copilot and their pivot to trying multiple tools (12:49) The surprising ways non-engineering teams at Shopify are using Cursor (14:53) Why you have to understand code to submit a PR at Shopify (16:42) AI tools' impact on SaaS (19:50) Tobi Lütke’s AI memo (21:46) Shopify’s LLM proxy and how they protect their privacy (23:00) How Shopify utilizes MCPs (26:59) Why AI tools aren’t the place to pinch pennies (30:02) Farhan’s projects and favorite AI tools (32:50) Why AI-first isn’t about freezing headcount and the value of hiring interns (36:20) How Shopify’s engineering department operates, including internal tools (40:31) Why Shopify added coding interviews for director-level and above hires (43:40) What has changed since Shopify added AI tooling (44:40) Farhan’s advice for implementing AI tools — The Pragmatic Engineer deepdives relevant for this episode: • How Shopify built its Live Globe for Black Friday • Inside Shopify's leveling split • Real-world engineering challenges: building Cursor • How Anthropic built Artifacts — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].


In this talk, Akshay Nair will share the journey of helping a small nonprofit team build and launch a SaaS platform designed to support career-readiness programs in public schools. From navigating performance constraints in a legacy codebase to adapting to constant developer churn, regulatory hurdles, and minimal infrastructure spend, this is a case study in getting things done without reinventing the wheel.

Rather than just focusing on metrics and optimization tricks, Akshay will talk about what “performance” looks like when the real challenges are scattered across team structure, resource limitations, and long-term sustainability. If you've ever had to do more with less, or balance people and process while still shipping, this one’s for you.

Lakeflow Connect enables you to easily and efficiently ingest data from enterprise applications like Salesforce, ServiceNow, Google Analytics, SharePoint, NetSuite, Dynamics 365 and more. In this session, we’ll dive deep on using connectors for the most popular SaaS applications, reviewing common use cases such as analyzing consumer behavior, predicting churn and centralizing HR analytics. You'll also hear from an early customer about how Lakeflow Connect helped unify their customer data to drive an improved automotive experience. We’ll wrap up with a Q&A so you have the opportunity to learn from our experts.

Sponsored by: SAP | SAP Business Data Cloud: Fuel AI with SAP data products across ERP and lines-of-business

Unlock the power of your SAP data with SAP Business Data Cloud—a fully managed SaaS solution that unifies and governs all SAP data while seamlessly connecting it with third-party data. As part of SAP Business Data Cloud, SAP Databricks brings together trusted, semantically rich business data with industry-leading capabilities in AI, machine learning, and data engineering. Discover how to access curated SAP data products across critical business processes, enrich and harmonize your data without data copies using Delta Sharing, and leverage the results across your business data fabric. See it all in action with a demonstration.

Chaos to Clarity: Secure, Scalable, and Governed SaaS Ingestion through Lakeflow Connect and more

Ingesting data from SaaS systems sounds straightforward—until you hit API limits, miss SLAs, or accidentally ingest PII. Sound familiar? In this talk, we’ll share how Databricks evolved from scrappy ingestion scripts to a unified, secure, and scalable ingestion platform. Along the way, we’ll highlight the hard lessons, the surprising pitfalls, and the tools that helped us level up. Whether you’re just starting to wrangle third-party data or looking to scale while handling governance and compliance, this session will help you think beyond pipelines and toward platform thinking.

Getting Started With Lakeflow Connect

Hundreds of customers are already ingesting data with Lakeflow Connect from SQL Server, Salesforce, ServiceNow, Google Analytics, SharePoint, PostgreSQL and more to unlock the full power of their data. Lakeflow Connect introduces built-in, no-code ingestion connectors from SaaS applications, databases and file sources to help unlock data intelligence. In this demo-packed session, you’ll learn how to ingest ready-to-use data for analytics and AI with a few clicks in the UI or a few lines of code. We’ll also demonstrate how Lakeflow Connect is fully integrated with the Databricks Data Intelligence Platform for built-in governance, observability, CI/CD, automated pipeline maintenance and more. Finally, we’ll explain how to use Lakeflow Connect in combination with downstream analytics and AI tools to tackle common business challenges and drive business impact.

Today, we’re joined by Todd Olson, co-founder and CEO of Pendo, the world’s first software experience management platform. We talk about:

• Offloading work from employees to digital workers
• When most people will opt to chat with an AI agent over a human
• The need for SaaS apps to transform themselves into agentic apps
• Advice for serial SaaS entrepreneurs, including a big cautionary tale for startups
• AI-generated and AI-maintained code and the ease of prototyping

Data Ingestion with Lakeflow Connect

In this course, you’ll learn how to ingest data efficiently with Lakeflow Connect and manage that data. Topics include ingestion with built-in connectors for SaaS applications, databases and file sources, as well as ingestion from cloud object storage, and batch and streaming ingestion. We'll cover the new connector components, setting up the pipeline, validating the source and mapping to the destination for each type of connector. We'll also cover how to ingest data with batch and streaming ingestion into Delta tables, using the UI with Auto Loader, automating ETL with Lakeflow Declarative Pipelines, or using the API. This will prepare you to deliver the high-quality, timely data required for AI-driven applications by enabling scalable, reliable, and real-time data ingestion pipelines. Whether you're supporting ML model training or powering real-time AI insights, these ingestion workflows form a critical foundation for successful AI implementation.

Pre-requisites: Beginner familiarity with the Databricks Data Intelligence Platform (selecting clusters, navigating the Workspace, executing notebooks); cloud computing concepts (virtual machines, object storage, etc.); production experience working with data warehouses and data lakes; intermediate experience with basic SQL concepts (select, filter, group by, join, etc.); beginner programming experience with Python (syntax, conditions, loops, functions); beginner programming experience with the Spark DataFrame API (configure DataFrameReader and DataFrameWriter to read and write data, express query transformations using DataFrame methods and Column expressions, etc.).

Labs: No
Certification Path: Databricks Certified Data Engineer Associate

Next-Gen Sales Forecasting: AI-Powered Pipeline Management | The Data Apps Conference

Sales pipeline forecasting is essential for revenue planning, but traditional approaches rely on either unstructured spreadsheets or rigid SaaS applications like Clari—creating data silos, limiting customization, and forcing teams to switch between multiple tools for complete pipeline visibility.

In this session, Oscar Bashaw (Solution Architect) will demonstrate how to:

• Create a unified sales forecasting app with role-specific views for both reps and managers
• Implement structured data capture with input tables for consistent deal-level forecasting
• Consolidate multiple data sources (CRM, call recordings, product usage) into a single tool
• Leverage AI models from your data warehouse to provide intelligent deal insights without leaving the workflow
• Build dynamic visualizations with real-time pipeline coverage and attainment tracking
• Use AI to surface risk signals by analyzing call sentiment, deal history, and activity trends from connected data sources

With Sigma, sales teams can move beyond disconnected spreadsheets and inflexible SaaS tools to create a dynamic, AI-powered forecasting solution that scales with your business. Join this session for a complete walkthrough of the app's architecture and learn how to build similar capabilities for your organization—reducing costs while improving forecast accuracy and sales team productivity.
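The "pipeline coverage and attainment tracking" mentioned above usually reduces to two ratios: open pipeline value relative to quota, and closed-won revenue relative to quota. A rough sketch of those calculations (the function names and figures are illustrative, not Sigma's implementation):

```python
def pipeline_coverage(open_pipeline: float, quota: float) -> float:
    """Open pipeline value divided by quota; teams often target 3x or more."""
    return open_pipeline / quota if quota else 0.0

def attainment(closed_won: float, quota: float) -> float:
    """Fraction of quota already closed in the period."""
    return closed_won / quota if quota else 0.0

# Example: a rep with a $500k quarterly quota
print(pipeline_coverage(open_pipeline=1_200_000, quota=500_000))  # 2.4
print(attainment(closed_won=350_000, quota=500_000))              # 0.7
```

The value of an app like the one in this session is less in the arithmetic than in computing these ratios live from consolidated CRM and activity data instead of a stale spreadsheet.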

➡️ Learn more about Data Apps: https://www.sigmacomputing.com/product/data-applications?utm_source=youtube&utm_medium=organic&utm_campaign=data_apps_conference&utm_content=pp_data_apps


➡️ Sign up for your free trial: https://www.sigmacomputing.com/go/free-trial?utm_source=youtube&utm_medium=video&utm_campaign=free_trial&utm_content=free_trial


CEO Keynote Feat. the CIO of Workato | The Data Apps Conference

The enterprise software landscape is at a pivotal turning point. For decades, organizations have been trapped in a cycle of siloed applications—first in on-premise data centers, and then repackaged as cloud SaaS solutions. While infrastructure has become more flexible, scalable, and cost-effective, the applications running on top remain frustratingly rigid, expensive, and disconnected.

In this session, Mike Palmer (CEO of Sigma) and Carter Busse (CIO of Workato) discuss the shift from "best-of-breed" point solutions to an "end-to-end" approach powered by data apps. They'll explore:

• Why traditional SaaS applications force organizations to adapt their workflows to software limitations rather than the other way around
• How the centralization of data in cloud warehouses creates the foundation for building custom, integrated workflows
• Real-world examples of organizations replacing expensive, disconnected tools with purpose-built data apps
• The future of enterprise software, including predictions on how AI will reshape application development and data accessibility
• Practical strategies for starting your data apps journey without creating new technology sprawl

Learn how forward-thinking organizations are using data apps to create workflows that better match their business needs, increase decision-making velocity, boost accuracy, and dramatically reduce software costs—all while maintaining enterprise-grade governance and security.



How WHOOP Scales AI-Powered Customer Support with Snowflake and Sigma Technology | Data Apps

Managing customer interactions across multiple disconnected platforms creates inefficiencies and delays in resolving support tickets. At WHOOP, support agents had to manually navigate through siloed data across payments, ERP, and ticketing systems, slowing down response times and impacting customer satisfaction. In this session, Matt Luizzi (Director of Business Analytics, WHOOP) and Brendan Farley (Sales Engineer, Snowflake) will showcase how WHOOP:

• Consolidated fragmented data from multiple systems into a unified customer support app
• Enabled real-time access to customer history, allowing agents to quickly surface relevant insights
• Eliminated the need for custom engineering by leveraging Sigma’s no-code interface to build interactive workflows
• Accelerated ticket resolution by allowing support teams to take action directly within Sigma, reducing dependency on multiple SaaS tools
• Improved forecasting and decision-making by implementing AI-powered analytics on top of Snowflake

Before Sigma, getting a full view of customer issues required navigating across multiple tools—now, WHOOP’s customer support team can access, analyze, and act on real-time data in a single interface. Join us for an inside look at how WHOOP and Snowflake partnered to build a modern customer support data app that enhances efficiency and customer experience.



Today, we’re joined by Ted Elliott, Chief Executive Officer of Copado, the leader in AI-powered DevOps for business applications. We talk about:

• Impacts of AI agents over the next 5 years
• Ted’s AI-generated Dr. Seuss book based on walks with his dog
• The power of small data with AI, despite many believing more data is the answer
• The challenge of being disciplined to enter only good data
• Gaming out SaaS company ideas with AI, such as a virtual venture capitalist

Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Modal — The cloud platform for building AI applications. • Cortex — Your Portal to Engineering Excellence. — Kubernetes is the second-largest open-source project in the world. What does it actually do—and why is it so widely adopted? In this episode of The Pragmatic Engineer, I’m joined by Kat Cosgrove, who has led several Kubernetes releases. Kat has been contributing to Kubernetes for several years, and originally got involved with the project through K3s (the lightweight Kubernetes distribution). In our conversation, we discuss how Kubernetes is structured, how it scales, and how the project is managed to avoid contributor burnout. We also go deep into: • An overview of what Kubernetes is used for • A breakdown of Kubernetes architecture: components, pods, and kubelets • Why Google built Borg, and how it evolved into Kubernetes • The benefits of large-scale open source projects—for companies, contributors, and the broader ecosystem • The size and complexity of Kubernetes—and how it’s managed • How the project protects contributors with anti-burnout policies • The size and structure of the release team • What KEPs are and how they shape Kubernetes features • Kat’s views on GenAI, and why Kubernetes blocks using AI, at least for documentation • Where Kat would like to see AI tools improve developer workflows • Getting started as a contributor to Kubernetes—and the career and networking benefits that come with it • And much more! — Timestamps (00:00) Intro (02:02) An overview of Kubernetes and who it’s for (04:27) A quick glimpse at the architecture: Kubernetes components, pods, and kubelets (07:00) Containers vs.
virtual machines  (10:02) The origins of Kubernetes  (12:30) Why Google built Borg, and why they made it an open source project (15:51) The benefits of open source projects  (17:25) The size of Kubernetes (20:55) Cluster management solutions, including different Kubernetes services (21:48) Why people contribute to Kubernetes  (25:47) The anti-burnout policies Kubernetes has in place  (29:07) Why Kubernetes is so popular (33:34) Why documentation is a good place to get started contributing to an open-source project (35:15) The structure of the Kubernetes release team  (40:55) How responsibilities shift as engineers grow into senior positions (44:37) Using a KEP to propose a new feature—and what’s next (48:20) Feature flags in Kubernetes  (52:04) Why Kat thinks most GenAI tools are scams—and why Kubernetes blocks their use (55:04) The use cases Kat would like to have AI tools for (58:20) When to use Kubernetes  (1:01:25) Getting started with Kubernetes  (1:04:24) How contributing to an open source project is a good way to build your network (1:05:51) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: •⁠ Backstage: an open source developer portal •⁠ How Linux is built with Greg Kroah-Hartman •⁠ Software engineers leading projects •⁠ What TPMs do and what software engineers can learn from them •⁠ Engineering career paths at Big Tech and scaleups — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].


Today, we’re joined by Tom Lavery, CEO and Founder of Jiminny, a conversation intelligence platform that captures and analyzes your critical go-to-market insights with AI. We talk about:

• Getting value from unstructured data
• How quickly SaaS subscription businesses should push to be profitable
• Trade-offs between product-led and sales-led growth
• Racing to be the market leader
• Dangers of focusing strictly on the short-term

Supported by Our Partners •⁠ Modal⁠ — The cloud platform for building AI applications •⁠ CodeRabbit⁠⁠ — Cut code review time and bugs in half. Use the code PRAGMATIC to get one month free. — What happens when LLMs meet real-world codebases? In this episode of The Pragmatic Engineer,  I am joined by Varun Mohan, CEO and Co-Founder of Windsurf. Varun talks me through the technical challenges of building an AI-native IDE (Windsurf) —and how these tools are changing the way software gets built.  We discuss:  • What building self-driving cars taught the Windsurf team about evaluating LLMs • How LLMs for text are missing capabilities for coding like “fill in the middle” • How Windsurf optimizes for latency • Windsurf’s culture of taking bets and learning from failure • Breakthroughs that led to Cascade (agentic capabilities) • Why the Windsurf teams build their LLMs • How non-dev employees at Windsurf build custom SaaS apps – with Windsurf! • How Windsurf empowers engineers to focus on more interesting problems • The skills that will remain valuable as AI takes over more of the codebase • And much more! 
— Timestamps (00:00) Intro (01:37) How Windsurf tests new models (08:25) Windsurf’s origin story  (13:03) The current size and scope of Windsurf (16:04) The missing capabilities Windsurf uncovered in LLMs when used for coding (20:40) Windsurf’s work with fine-tuning inside companies  (24:00) Challenges developers face with Windsurf and similar tools as codebases scale (27:06) Windsurf’s stack and an explanation of FedRAMP compliance (29:22) How Windsurf protects latency and the problems with local data that remain unsolved (33:40) Windsurf’s processes for indexing code  (37:50) How Windsurf manages data  (40:00) The pros and cons of embedding databases  (42:15) “The split brain situation”—how Windsurf balances present and long-term  (44:10) Why Windsurf embraces failure and the learnings that come from it (46:30) Breakthroughs that fueled Cascade (48:43) The insider’s developer mode that allows Windsurf to dogfood easily  (50:00) Windsurf’s non-developer power user who routinely builds apps in Windsurf (52:40) Which SaaS products won’t likely be replaced (56:20) How engineering processes have changed at Windsurf  (1:00:01) The fatigue that goes along with being a software engineer, and how AI tools can help (1:02:58) Why Windsurf chose to fork VS Code and built a plugin for JetBrains  (1:07:15) Windsurf’s language server  (1:08:30) The current use of MCP and its shortcomings  (1:12:50) How coding used to work in C#, and how MCP may evolve  (1:14:05) Varun’s thoughts on vibe coding and the problems non-developers encounter (1:19:10) The types of engineers who will remain in demand  (1:21:10) How AI will impact the future of software development jobs and the software industry (1:24:52) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • IDEs with GenAI features that Software Engineers love • AI tooling for Software Engineers in 2024: reality check • How AI-assisted coding will change software engineering: hard truths • AI tools for 
software engineers, but without the hype — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Supported by Our Partners •⁠ WorkOS — The modern identity platform for B2B SaaS. •⁠ The Software Engineer’s Guidebook: Written by me (Gergely) – now out in audio form as well. — How do you get product and engineering to truly operate as one team? Today, I’m joined by Ebi Atawodi, Director of Product Management at YouTube Studio, and a former product leader at Netflix and Uber. Ebi was the first PM I partnered with after stepping into engineering management at Uber, and we both learned a lot together. We share lessons from our time at Uber and discuss how strong product-engineering partnerships drive better outcomes, grow teams, foster cultures of ownership, and unlock agency, innovation, and trust. In this episode, we cover: • Why you need to earn a new team's trust before trying to drive change • How practices like the "business scorecard" and “State of the Union” updates helped communicate business goals and impact to teams at Uber • How understanding business impact leads to more ideas and collaboration • A case for getting to know your team as people, not just employees • Why junior employees should have a conversation with a recruiter every six months • Ebi’s approach to solving small problems with the bet that they’ll unlock larger, more impactful solutions • Why investing time in trust and connection isn't at odds with efficiency • The qualities of the best engineers—and why they’re the same traits that make people successful in any role • The three-pronged definition of product: business impact, feasibility, and customer experience • Why you should treat your career as a project • And more! 
— Timestamps (00:00) Intro (02:19) The product review where Gergely first met Ebi (05:45) Ebi’s learning about earning trust before being direct (08:01) The value of tying everything to business impact (11:53) What meetings looked like at Uber before Ebi joined (12:35) How Ebi’s influence created more of a start-up environment (15:12) An overview of “State of the Union” (18:06) How Ebi helped the cash team secure headcount (24:10) How a dinner out helped Ebi and Gergely work better together (28:11) Why good leaders help their employees reach their full potential (30:24) Product-minded engineers and the value of trust (33:04) Ebi’s approach to passion in work: loving the problem, the work, and the people (36:00) How Gergely and Ebi secretly bootstrapped a project, then asked for headcount (36:55) How a real problem led to a novel solution that also led to a policy change (40:30) Ebi’s approach to solving problems and tying them to a bigger value unlock (43:58) How Ebi developed her playbooks for vision setting, fundraising, and more (45:59) Why Gergely prioritized meeting people on his trips to San Francisco (46:50) A case for making in-person interactions more about connection (50:44) The genius-jerk archetype vs. brilliant people who struggle with social skills (52:48) The traits of the best engineers—and why they apply to other roles, too (1:03:27) Why product leaders need to love the product and the business (1:06:54) The value of a good PM (1:08:05) Sponsorship vs. mentorship and treating your career like a project (1:11:50) A case for playing the long game — The Pragmatic Engineer deepdives relevant for this episode: • The product-minded software engineer • Working with Product Managers as an Engineering Manager or Engineer • Working with Product Managers: advice from PMs • What is Growth Engineering?


Supported by Our Partners • WorkOS — The modern identity platform for B2B SaaS. • Modal — The cloud platform for building AI applications. • Vanta — Automate compliance and simplify security with Vanta. — What is it like to work at Amazon as a software engineer? Dave Anderson spent over 12 years at Amazon working closely with engineers on his teams: starting as an Engineering Manager (or SDM, in Amazon lingo) and eventually becoming a Director of Engineering. In this episode, he shares a candid look into Amazon’s engineering culture—from how promotions work to why teams often run like startups. We get into the hiring process, the role of bar raisers, the pros and cons of extreme frugality, and what it takes to succeed inside one of the world’s most operationally intense companies. We also look at how engineering actually works day to day at Amazon—from the tools teams choose to the way they organize and deliver work. We also discuss: • The levels at Amazon, from SDE L4 to Distinguished Engineer and VP • Why engineering managers at Amazon need to write well • The “Bar Raiser” role in Amazon interview loops • Why Amazon doesn’t care about what programming language you use in interviews • Amazon’s oncall process • The pros and cons of Amazon’s extreme frugality • What to do if you’re getting negative performance feedback • The importance of having a strong relationship with your manager • The surprising freedom Amazon teams have to choose their own stack, tools, and ways of working – and how a team chose to use Lisp (!) • Why startups love hiring former Amazon engineers • Dave’s approach to financial independence and early retirement • And more!
— Timestamps (00:00) Intro (02:08) An overview of Amazon’s levels for devs and engineering managers (07:04) How promotions work for developers at Amazon, and the scope of work at each level (12:29) Why managers feel pressure to grow their teams (13:36) A step-by-step, behind-the-scenes glimpse of the hiring process (23:40) The wide variety of tools used at Amazon (26:27) How oncall works at Amazon (32:06) The general approach to handling outages (severity 1-5) (34:40) A story from Uber illustrating the Amazon outage mindset (37:30) How VPs assist with outages (41:38) The culture of frugality at Amazon (47:27) Amazon’s URA target—and why it’s mostly not a big deal (53:37) How managers handle the ‘least effective’ employees (58:58) Why other companies are also cutting lower performers (59:55) Dave’s advice for engineers struggling with performance feedback (1:04:20) Why good managers are expected to bring talent with them to a new org (1:06:21) Why startups love former Amazon engineers (1:16:09) How Dave planned for an early retirement (1:18:10) How a LinkedIn post turned into Scarlet Ink — The Pragmatic Engineer deepdives relevant for this episode: • Inside Amazon’s engineering culture • A day in the life of a senior manager at Amazon • Amazon’s Operational Plan process with OP1 and OP2
