talk-data.com

Topic: Marketing
Tags: advertising, branding, customer_acquisition
743 tagged activities

Activity Trend: peak of 49 activities/quarter, 2020-Q1 to 2026-Q1

Activities
743 activities · Newest first

Powering Personalization with Data Science at Target with Samantha Schumacher

At Target, creating relevant guest experiences at scale takes more than great creative — it takes great data. In this session, we’ll explore how Target’s Data Science team is using first-party data, machine learning, and GenAI to personalize marketing across every touchpoint.

You’ll hear how we’re building intelligence into the content supply chain, turning unified customer signals into actionable insights, and using AI to optimize creative, timing, and messaging — all while navigating a privacy-first landscape. Whether it’s smarter segmentation or real-time decisioning, we’re designing for both scale and speed.

Redefining Marketing Measurement in the Era of Open-Source Innovation with Koel Ghosh

In a rapidly evolving advertising landscape where data, technology, and methodology converge, the pursuit of rigorous yet actionable marketing measurement is more critical—and complex—than ever. This talk will showcase how modern marketers and applied data scientists employ advanced measurement approaches—such as Marketing Mix Modeling (frequentist and Bayesian) and robust experimental designs, including randomized controlled trials and synthetic control-based counterfactuals—to drive causal inference in advertising effectiveness for meaningful business impact.
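
To make the synthetic-control idea concrete, here is a minimal illustrative sketch in Python, not the speaker's implementation: all data is simulated, and a counterfactual for a test market is built as a weighted combination of control markets, with weights fitted on the pre-campaign period only.

```python
# A minimal synthetic-control-style counterfactual; markets and lift are simulated.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
weeks_pre, weeks_post = 52, 12
controls = rng.normal(100, 10, size=(weeks_pre + weeks_post, 5))  # 5 control markets
true_weights = np.array([0.3, 0.2, 0.2, 0.2, 0.1])
test = controls @ true_weights + rng.normal(0, 2, weeks_pre + weeks_post)
test[weeks_pre:] += 8  # simulated incremental effect of the campaign

# Fit weights on the pre-period only, then project the counterfactual forward.
model = Ridge(alpha=1.0, positive=True).fit(controls[:weeks_pre], test[:weeks_pre])
counterfactual = model.predict(controls[weeks_pre:])
lift = test[weeks_pre:] - counterfactual
print(f"Estimated incremental sales per week: {lift.mean():.1f}")
```

The post-period gap between observed and projected sales is the estimated causal effect; real implementations add inference (placebo tests, permutation p-values) on top of this skeleton.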

The talk will also address emergent aspects of applied marketing science: namely, open-source methodologies, digital commerce platforms, and the use of artificial intelligence. Innovations from industry giants like Google and Meta, as well as from open-source communities exemplified by PyMC-Marketing, have democratized access to methodological advances. The emergence of digital commerce platforms such as Amazon and Walmart, and the rich data they bring forward, is transforming how customer journeys and campaign effectiveness are measured across channels. Artificial intelligence is accelerating every facet of the data science workflow, from streamlining coding, modeling, and rapid prototyping (“vibe coding”) to enabling the integration of neural networks and deep learning techniques into traditional MMM toolkits. Collectively, these developments offer new, accessible ways to experiment quickly and to learn the complex nonlinear dynamics and hidden patterns in marketing data.
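
As a taste of the open-source tooling named above, here is a minimal Bayesian MMM sketch using PyMC-Marketing (API shape as of recent releases; the CSV file and column names are hypothetical):

```python
# A minimal sketch, assuming weekly data with a date column, per-channel spend
# columns, and a sales target; not a production model specification.
import pandas as pd
from pymc_marketing.mmm import MMM, GeometricAdstock, LogisticSaturation

data = pd.read_csv("weekly_media.csv")  # hypothetical: date, spend columns, sales
mmm = MMM(
    date_column="date",
    channel_columns=["tv_spend", "search_spend", "social_spend"],
    adstock=GeometricAdstock(l_max=8),  # carryover of media effects up to 8 weeks
    saturation=LogisticSaturation(),    # diminishing returns per channel
)
mmm.fit(X=data.drop(columns=["sales"]), y=data["sales"])
mmm.plot_components_contributions()     # decompose sales into channel contributions
```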

Bringing these threads together, the talk will show how Ovative Group—a media and marketing technology firm—integrates domain expertise, open-source solutions, strategic partnerships, and AI automation into comprehensive measurement solutions. Attendees will gain practical insights on bridging academic rigor with business relevance, empowering careers in applied data science, and helping organizations turn marketing analytics into clear, actionable strategies.

Feedback on writing a first book: how did it happen? Why did I accept? What moments of discouragement did I go through? I'll talk about the relationship with the editorial team, the delays, and the steps to finalize a book from day one, from the first lines to delivery. I won't stop there: I'll also cover marketing, advertising, printing, the official release, and why I will never do this again.

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig enables two cultures at once: continuous shipping and experimentation. Companies like Notion went from single-digit experiments per quarter to over 300 experiments with Statsig. Start using Statsig with a generous free tier and a $50K startup program.
• Linear — The system for modern product development. When most companies hit real scale, they start to slow down, and are faced with “process debt.” This often hits software engineers the most. Companies switch to Linear to hit a hard reset on this process debt – ones like Scale cut their bug resolution in half after the switch. Check out Linear’s migration guide for details.

What’s it like to work as a software engineer inside one of the world’s biggest streaming companies? In this special episode recorded at Netflix’s headquarters in Los Gatos, I sit down with Elizabeth Stone, Netflix’s Chief Technology Officer. Before becoming CTO, Elizabeth led data and insights at Netflix and was VP of Science at Lyft. She brings a rare mix of technical depth, product thinking, and people leadership. We discuss what it means to be “unusually responsible” at Netflix, how engineers make decisions without layers of approval, and how the company balances autonomy with guardrails for high-stakes projects like Netflix Live. Elizabeth shares how teams self-reflect and learn from outages and failures, why Netflix doesn’t do formal performance reviews, and what new grads bring to a company known for hiring experienced engineers. This episode offers a rare inside look at how Netflix engineers build, learn, and lead at a global scale.

Timestamps:
(00:00) Intro
(01:44) The scale of Netflix
(03:31) Production software stack
(05:20) Engineering challenges in production
(06:38) How the Open Connect delivery network works
(08:30) From pitch to play
(11:31) How Netflix enables engineers to make decisions
(13:26) Building Netflix Live for global sports
(16:25) Learnings from Paul vs. Tyson for NFL Live
(17:47) Inside the control room
(20:35) What being unusually responsible looks like
(24:15) Balancing team autonomy with guardrails for Live
(30:55) The high talent bar and introduction of levels at Netflix
(36:01) The Keeper Test
(41:27) Why engineers leave or stay
(44:27) How AI tools are used at Netflix
(47:54) AI’s highest-impact use cases
(50:20) What new grads add and why senior talent still matters
(53:25) Open source at Netflix
(57:07) Elizabeth’s parting advice for new engineers to succeed at Netflix

The Pragmatic Engineer deepdives relevant for this episode:
• The end of the senior-only level at Netflix
• Netflix revamps its compensation philosophy
• Live streaming at world-record scale with Ashutosh Agrawal
• Shipping to production
• What is good software architecture?

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

A vendor-neutral fireside chat with John Donnellan, Senior Director of Digital Strategy, Operations & Marketing at Canon EMEA, and Andrew Hood, CEO of Lynchpin. The session focuses on practical, outcome-driven strategies to tackle the biggest challenges in retail today and to build data-driven growth—delivering actionable takeaways to boost revenue, efficiency, and customer experience.

Summary: In this episode of the Data Engineering Podcast Ariel Pohoryles, head of product marketing for Boomi's data management offerings, talks about a recent survey of 300 data leaders on how organizations are investing in data to scale AI. He shares a paradox uncovered in the research: while 77% of leaders trust the data feeding their AI systems, only 50% trust their organization's data overall. Ariel explains why truly productionizing AI demands broader, continuously refreshed data with stronger automation and governance, and highlights the challenges posed by unstructured data and vector stores. The conversation covers the need to shift from manual reviews to automated pipelines, the resurgence of metadata and master data management, and the importance of guardrails, traceability, and agent governance. Ariel also predicts a growing convergence between data teams and application integration teams and advises leaders to focus on high-value use cases, aggressive pipeline automation, and cataloging and governing the coming sprawl of AI agents, all while using AI to accelerate data engineering itself.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI Engineering, Streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Composable data infrastructure is great, until you spend all of your time gluing it together. Bruin is an open source framework, driven from the command line, that makes integration a breeze. Write Python and SQL to handle the business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. Bruin allows you to build end-to-end data workflows using AI, has connectors for hundreds of platforms, and helps data teams deliver faster. Teams that use Bruin need less engineering effort to process data and benefit from a fully integrated data platform. Go to dataengineeringpodcast.com/bruin today to get started. And for dbt Cloud customers, they'll give you $1,000 credit to migrate to Bruin Cloud.

Your host is Tobias Macey and today I'm interviewing Ariel Pohoryles about data management investments that organizations are making to enable them to scale AI implementations.

Interview
• Introduction
• How did you get involved in the area of data management?
• Can you start by describing the motivation and scope of your recent survey on data management investments for AI across your respondents?
• What are the key takeaways that were most significant to you?
• The survey reveals a fascinating paradox: 77% of leaders trust the data used by their AI systems, yet only half trust their organization's overall data quality. For our data engineering audience, what does this suggest about how companies are currently sourcing data for AI? Does it imply they are using narrow, manually-curated "golden datasets," and what are the technical challenges and risks of that approach as they try to scale?
• The report highlights a heavy reliance on manual data quality processes, with one expert noting companies feel it's "not reliable to fully automate validation" for external or customer data. At the same time, maturity in "Automated tools for data integration and cleansing" is low, at only 42%. What specific technical hurdles or organizational inertia are preventing teams from adopting more automation in their data quality and integration pipelines?
• There was a significant point made that with generative AI, "biases can scale much faster," making automated governance essential. From a data engineering perspective, how does the data management strategy need to evolve to support generative AI versus traditional ML models? What new types of data quality checks, lineage tracking, or monitoring for feedback loops are required when the model itself is generating new content based on its own outputs?
• The report champions a "centralized data management platform" as the "connective tissue" for reliable AI. How do you see scale and data maturity impacting the realities of that effort?
• How do architectural patterns in the shape of cloud warehouses, lakehouses, data mesh, data products, etc. factor into that need for centralized/unified platforms?
• A surprising finding was that a third of respondents have not fully grasped the risk of significant inaccuracies in their AI models if they fail to prioritize data management. In your experience, what are the biggest blind spots for data and analytics leaders?
• Looking at the maturity charts, companies rate themselves highly on "Developing a data management strategy" (65%) but lag significantly in areas like "Automated tools for data integration and cleansing" (42%) and "Conducting bias-detection audits" (24%). If you were advising a data engineering team lead based on these findings, what would you tell them to prioritize in the next 6-12 months to bridge the gap between strategy and a truly scalable, trustworthy data foundation for AI?
• The report states that 83% of companies expect to integrate more data sources for their AI in the next year. For a data engineer on the ground, what is the most important capability they need to build into their platform to handle this influx?
• What are the most interesting, innovative, or unexpected ways that you have seen teams addressing the new and accelerated data needs for AI applications?
• What are some of the noteworthy trends or predictions that you have for the near-term future of the impact that AI is having or will have on data teams and systems?

Contact Info
• LinkedIn

Parting Question
• From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
• Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
• Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
• If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
• Boomi
• Data Management
• Integration & Automation Demo
• Agentstudio
• Data Connector Agent Webinar
• Survey Results
• Data Governance
• Shadow IT
• Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
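
The shift the interview keeps returning to, from manual review to automated validation, can start as small as a blocking check in the load step. A minimal sketch in plain pandas (the column names and thresholds are invented for illustration, not from the survey):

```python
# A minimal automated data-quality gate; real pipelines would use a framework
# (e.g. expectations/contracts), but the pattern is the same.
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    if df["customer_id"].isna().mean() > 0.01:
        failures.append("customer_id null rate above 1%")
    if not df["order_total"].between(0, 1_000_000).all():
        failures.append("order_total outside expected range")
    if df.duplicated(subset=["order_id"]).any():
        failures.append("duplicate order_id values")
    return failures

batch = pd.read_parquet("orders_latest.parquet")  # hypothetical input
problems = validate_batch(batch)
if problems:
    raise ValueError(f"Blocking load, failed checks: {problems}")
```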

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Companies like Graphite, Notion, and Brex rely on Statsig to measure the impact of the pace they ship. Get a 30-day enterprise trial here.
• Linear – The system for modern product development. Linear is a heavy user of Swift: they just redesigned their native iOS app using their own take on Apple’s Liquid Glass design language. The new app is about speed and performance – just like Linear is. Check it out.

Chris Lattner is one of the most influential engineers of the past two decades. He created the LLVM compiler infrastructure and the Swift programming language – and Swift opened iOS development to a broader group of engineers. With Mojo, he’s now aiming to do the same for AI, by lowering the barrier to programming AI applications. I sat down with Chris in San Francisco, to talk language design, lessons on designing Swift and Mojo, and – of course! – compilers. It’s hard to find someone who is as enthusiastic and knowledgeable about compilers as Chris is! We also discussed why experts often resist change even when current tools slow them down, what he learned about AI and hardware from his time across both large and small engineering teams, and why compiler engineering remains one of the best ways to understand how software really works.

Timestamps:
(00:00) Intro
(02:35) Compilers in the early 2000s
(04:48) Why Chris built LLVM
(08:24) GCC vs. LLVM
(09:47) LLVM at Apple
(19:25) How Chris got support to go open source at Apple
(20:28) The story of Swift
(24:32) The process for designing a language
(31:00) Learnings from launching Swift
(35:48) Swift Playgrounds: making coding accessible
(40:23) What Swift solved and the technical debt it created
(47:28) AI learnings from Google and Tesla
(51:23) SiFive: learning about hardware engineering
(52:24) Mojo’s origin story
(57:15) Modular’s bet on a two-level stack
(1:01:49) Compiler shortcomings
(1:09:11) Getting started with Mojo
(1:15:44) How big is Modular, as a company?
(1:19:00) AI coding tools the Modular team uses
(1:22:59) What kind of software engineers Modular hires
(1:25:22) A programming language for LLMs? No thanks
(1:29:06) Why you should study and understand compilers

The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• The AI Engineering stack
• Uber's crazy YOLO app rewrite, from the front seat
• Python, Go, Rust, TypeScript and AI with Armin Ronacher
• Microsoft’s developer tools roots

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Bridging the Gap: Using Generative AI for Audience Insight & Segmentation

Seats are limited to 16 attendees. Register here to save your spot. 

https://www.snowflake.com/event/marketing-data-stack-roundtable-swt-amsterdam-2025/

This roundtable explores how generative AI (GenAI) is revolutionizing audience segmentation and insights. The discussion will focus on practical, in-the-moment applications that empower marketers and media professionals to move beyond static data analysis. We will examine how GenAI tools, like those available natively on Snowflake Cortex, can translate complex data filters into rich, narrative-driven audience descriptions. 

The conversation will also highlight how GenAI capabilities streamline workflows by allowing users to build audience segments using natural language, democratizing access to data and accelerating decision-making. The goal is to provide a clear, concise, and actionable understanding of how GenAI is bridging the gap between raw data and powerful, human-centric insights.
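
As an illustration of the workflow described above, here is a hedged sketch of calling Snowflake Cortex's COMPLETE function from Python to narrate a segment definition. COMPLETE is an existing Cortex SQL function, but the connection parameters, segment filter, and model choice here are placeholders:

```python
# A minimal sketch, assuming a Snowflake account with Cortex enabled.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="ANALYTICS_WH", database="MARKETING", schema="AUDIENCES",
)
segment_filter = "age BETWEEN 25 AND 34 AND visits_last_30d >= 3 AND category = 'outdoor'"
sql = """
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        'Describe, in two sentences for a marketer, the audience defined by: ' || %s
    )
"""
cur = conn.cursor()
cur.execute(sql, (segment_filter,))
print(cur.fetchone()[0])  # narrative description of the filter-defined audience
```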

Join Toyota Motor Europe to discover their journey towards a fully operationalized Data Mesh with dbt and Snowflake.

TME (Toyota Motor Europe), one of the biggest automobile manufacturers, oversees the wholesale sales and marketing of Toyota and Lexus vehicles in Europe. This session will showcase how dbt Cloud and Snowflake are supporting their data strategy.

They will elaborate on the challenges faced along the way, and on how their platform supports their future vision, e.g. enabling advanced real-time analytics, scaling while maintaining governance and best practices, and building a strong data foundation to launch their AI/ML initiatives.

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Linear – The system for modern product development.

Addy Osmani is Head of Chrome Developer Experience at Google, where he leads teams focused on improving performance, tooling, and the overall developer experience for building on the web. If you’ve ever opened Chrome’s Developer Tools bar, you’ve definitely used features Addy has built. He’s also the author of several books, including his latest, Beyond Vibe Coding, which explores how AI is changing software development. In this episode of The Pragmatic Engineer, I sit down with Addy to discuss how AI is reshaping software engineering workflows, the tradeoffs between speed and quality, and why understanding generated code remains critical. We dive into his article The 70% Problem, which explains why AI tools accelerate development but struggle with the final 30% of software quality—and why this last 30% is tackled easily by software engineers who understand how the system actually works.

Timestamps:
(00:00) Intro
(02:17) Vibe coding vs. AI-assisted engineering
(06:07) How Addy uses AI tools
(13:10) Addy’s learnings about applying AI for development
(18:47) Addy’s favorite tools
(22:15) The 70% Problem
(28:15) Tactics for efficient LLM usage
(32:58) How AI tools evolved
(34:29) The case for keeping expectations low and control high
(38:05) Autonomous agents and working with them
(42:49) How the EM and PM role changes with AI
(47:14) The rise of new roles and shifts in developer education
(48:11) The importance of critical thinking when working with AI
(54:08) LLMs as a tool for learning
(1:03:50) Rapid questions

The Pragmatic Engineer deepdives relevant for this episode:
• Vibe Coding as a software engineer
• How AI-assisted coding will change software engineering: hard truths
• AI Engineering in the real world
• The AI Engineering stack
• How Claude Code is built

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

The journey from startup to billion-dollar enterprise requires more than just a great product—it demands strategic alignment between sales and marketing. How do you identify your ideal customer profile when you're just starting out? What data signals help you find the twins of your successful early adopters? With AI now automating everything from competitive analysis to content creation, the traditional boundaries between departments are blurring. But what personality traits should you look for when building teams that can scale with your growth? And how do you ensure your data strategy supports rather than hinders your AI ambitions in this rapidly evolving landscape?

Denise Persson is CMO at Snowflake and has 20 years of technology marketing experience at high-growth companies. Prior to joining Snowflake, she served as CMO for Apigee, an API platform company that went public in 2015 and that Google acquired in 2016. She began her career at collaboration software company Genesys, where she built and led a global marketing organization. Denise also helped lead Genesys through its expansion to become a successful IPO and acquired company. Denise holds a BA in Business Administration and Economics from Stockholm University and an MBA from Georgetown University.

Chris Degnan is the former CRO at Snowflake and has over 15 years of enterprise technology sales experience. Before working at Snowflake, Chris served as the AVP of the West at EMC, and prior to that as VP Western Region at Aveksa, where he helped grow the business 250% year-over-year. Before Aveksa, Chris spent eight years at EMC and managed a team responsible for 175 select accounts. Prior to EMC, Chris worked in enterprise sales at Informatica and Covalent Technologies (acquired by VMware). He holds a BA from the University of Delaware.

In the episode, Richie, Denise, and Chris explore the journey to a billion-dollar ARR, the importance of customer obsession, aligning sales and marketing, leveraging data for decision-making, the role of AI in scaling operations, and much more.

Links mentioned in the show:
• Snowflake
• Snowflake BUILD
• Connect with Denise and Chris
• Snowflake is FREE on DataCamp this week
• Related Episode: Adding AI to the Data Warehouse with Sridhar Ramaswamy, CEO at Snowflake
• Rewatch RADAR AI
• New to DataCamp? Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for business

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Something interesting is happening with the latest generation of tech giants. Rather than building advanced experimentation tools themselves, companies like Anthropic, Figma, Notion and a bunch of others… are just using Statsig. Statsig has rebuilt this entire suite of data tools that was available at maybe 10 or 15 giants until now. Check out Statsig.
• Linear – The system for modern product development. Linear is just so fast to use – and it enables velocity in product workflows. Companies like Perplexity and OpenAI have already switched over, because simplicity scales. Go ahead and check out Linear and see why it feels like a breeze to use.

What is it really like to be an engineer at Google? In this special deep dive episode, we unpack how engineering at Google actually works. We spent months researching the engineering culture of the search giant, and talked with 20+ current and former Googlers to bring you this deepdive with Elin Nilsson, tech industry researcher for The Pragmatic Engineer and a former Google intern. Google has always been an engineering-driven organization. We talk about its custom stack and tools, the design-doc culture, and the performance and promotion systems that define career growth. We also explore the culture that feels built for engineers: generous perks, a surprisingly light on-call setup often considered the best in the industry, and a deep focus on solving technical problems at scale. If you are thinking about applying to Google or are curious about how the company’s engineering culture has evolved, this episode takes a clear look at what it was like to work at Google in the past versus today, and who is a good fit for today’s Google.

Jump to interesting parts:
(13:50) Tech stack
(1:05:08) Performance reviews (GRAD)
(2:07:03) The culture of continuously rewriting things

Timestamps:
(00:00) Intro
(01:44) Stats about Google
(11:41) The shared culture across Google
(13:50) Tech stack
(34:33) Internal developer tools and monorepo
(43:17) The downsides of having so many internal tools at Google
(45:29) Perks
(55:37) Engineering roles
(1:02:32) Levels at Google
(1:05:08) Performance reviews (GRAD)
(1:13:05) Readability
(1:16:18) Promotions
(1:25:46) Design docs
(1:32:30) OKRs
(1:44:43) Googlers, Nooglers, ReGooglers
(1:57:27) Google Cloud
(2:03:49) Internal transfers
(2:07:03) Rewrites
(2:10:19) Open source
(2:14:57) Culture shift
(2:31:10) Making the most of Google, as an engineer
(2:39:25) Landing a job at Google

The Pragmatic Engineer deepdives relevant for this episode:
• Inside Google’s engineering culture
• Oncall at Google
• Performance calibrations at tech companies
• Promotions and tooling at Google
• How Kubernetes is built
• The man behind the Big Tech comics: Google cartoonist Manu Cornet

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Data interviews do not have to feel messy. In this episode, I share a simple AI Interview Copilot that works for data analyst, data scientist, analytics engineer, product analyst, and marketing analyst roles.

What you will learn today:
• How to turn a job post into a skills map: know exactly what to study first.
• How to build role-specific SQL drills (joins, window functions, cohorts, retention, time series).
• How to practice product/case questions that end with a decision and a metric you can defend.
• How to prepare ML/experimentation basics (problem framing, features, success metrics, A/B test sanity checks).
• How to plan take-home assignments (scope, assumptions, readable notebook/report structure).
• How to create a 6-story STAR bank with real numbers and clear outcomes.
• How to follow a 7-day rhythm so you make steady progress without burnout.
• How to keep proof of progress so your confidence comes from evidence, not hope.

Copy-and-use prompts from the show:
• JD → Skills Map: “Parse this job post. Table: Skill/Theme | Where mentioned | My level (guess) | Study action | Likely interview questions. Then give 5 bullets: what they are really hiring for.”
• SQL Drill Factory (Analyst/Product/Marketing): “Create 20 SQL tasks + hint + how to check results using orders, users, events, campaigns. Emphasize joins, windows, conditional agg, cohorts, funnels, retention, time windows.”
• Case Coach (Data/Product): “Run a 15-minute case: key metric is down. Ask one question at a time. Score clarity, structure, metrics, trade-offs. End with gaps + practice list.”
• ML/Experimentation Basics (Data Science): “Create a 7-step outline for framing a modeling problem (goal, data, features, baseline, evaluation, risks, comms). Add an A/B test sanity checklist (power, SRM, population, metric guardrails).”
• Take-Home Planner: “Given this brief, propose scope, data assumptions, 3–5 analysis steps, visuals, and a short results section. Output a clear report outline.”
• Behavioral STAR Bank: “Draft 6 STAR stories (120s) for conflict, ambiguity, failure, leadership without title, stakeholder influence, measurable impact. Put numbers in Results.”
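
As one concrete example of the A/B test sanity checklist mentioned above, here is a minimal sample ratio mismatch (SRM) check in Python; the visitor counts are illustrative:

```python
# SRM check: does the observed split between variants match the intended 50/50?
from scipy.stats import chisquare

visitors_a, visitors_b = 50_440, 49_320          # observed assignment counts
expected = [(visitors_a + visitors_b) / 2] * 2   # intended 50/50 allocation
stat, p_value = chisquare([visitors_a, visitors_b], f_exp=expected)

if p_value < 0.001:
    print(f"Possible SRM (p = {p_value:.2e}); investigate assignment before trusting results")
else:
    print(f"Assignment ratio looks healthy (p = {p_value:.3f})")
```

A very small p-value here signals that the randomization itself is broken, so any downstream metric comparison would be untrustworthy.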

This panel session will showcase how leading organisations are achieving unprecedented marketing success by mastering their data. Our experts will share real-world examples and best practices for architecting a unified, modern marketing data stack to create a single source of truth and get from data to intelligent action. Join us to discover how to leverage this unified data to optimise campaigns, personalise customer experiences at scale, and ultimately drive significant data-driven business growth, as our panelists reveal how to transform your marketing efforts with a data-first approach.

Data clean rooms are rapidly evolving from privacy-preserving data collaboration environments into powerful engines for AI- and ML-driven insights. This session will dive into the technical architecture and product capabilities enabling clean rooms to support advanced use cases across the marketing lifecycle — from identity resolution and lookalike modeling to cross-channel attribution and real-time optimization. Beyond advertising, we’ll examine how these innovations are scaling into verticals like retail and healthcare, where secure data collaboration is unlocking next-gen personalization, predictive analytics and clinical insights.
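
To make the lookalike-modeling use case concrete, here is a minimal, library-level sketch with simulated data; real clean-room implementations run inside the collaboration environment under its privacy controls, so treat this only as an illustration of the matching logic:

```python
# Lookalike modeling as nearest-neighbor search over standardized customer features.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
prospects = rng.normal(size=(10_000, 4))              # e.g. recency, frequency, spend, tenure
seed_customers = rng.normal(loc=0.5, size=(200, 4))   # known high-value converters

scaler = StandardScaler().fit(prospects)
nn = NearestNeighbors(n_neighbors=50).fit(scaler.transform(prospects))

# For each seed customer, pull the closest prospects, then dedupe into an audience.
_, idx = nn.kneighbors(scaler.transform(seed_customers))
lookalike_audience = np.unique(idx.ravel())
print(f"Lookalike audience size: {lookalike_audience.size}")
```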

What happens when marketing teams spend countless hours on manual campaign analysis while missing critical market opportunities? In this session, discover how AI is transforming marketing from a cost centre into a revenue-driving powerhouse. You'll see how Snowflake's Cortex AI enables marketers to automatically classify campaign assets, analyse multimodal performance data, and generate personalised content at scale—all without waiting for post-campaign analysis. This is marketing analytics reimagined—where AI democratizes data science, accelerates decision-making, and turns every campaign into a learning opportunity that drives immediate business impact.
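
As a hedged sketch of the asset-classification idea: CLASSIFY_TEXT is an existing Cortex function, but the connection parameters, table, and label set below are hypothetical:

```python
# A minimal sketch, assuming campaign copy lives in a Snowflake table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="MARKETING_WH", database="MARKETING", schema="CAMPAIGNS",
)
classify_sql = """
    SELECT asset_id,
           SNOWFLAKE.CORTEX.CLASSIFY_TEXT(
               asset_copy,
               ['brand awareness', 'promotion', 'product launch', 'retention']
           ) AS classification
    FROM campaign_assets
"""
for asset_id, classification in conn.cursor().execute(classify_sql):
    print(asset_id, classification)  # classification carries the predicted "label"
```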

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Most teams end up in this situation: ship a feature to 10% of users, wait a week, check three different tools, try to correlate the data, and you’re still unsure if it worked. The problem is that each tool has its own user identification and segmentation logic. Statsig solved this problem by building everything within a unified platform. Check out Statsig.
• Linear – The system for modern product development. In the episode, Armin talks about how he uses an army of “AI interns” at his startup. With Linear, you can easily do the same: Linear’s Cursor integration lets you add Cursor as an agent to your workspace. This agent then works alongside you and your team to make code changes or answer questions. You’ve got to try it out: give Linear a spin and see how it integrates with Cursor.

Armin Ronacher is the creator of the Flask framework for Python, was one of the first engineers hired at Sentry, and is now the co-founder of a new startup. He has spent his career thinking deeply about how tools shape the way we build software. In this episode of The Pragmatic Engineer Podcast, he joins me to talk about how programming languages compare, why Rust may not be ideal for early-stage startups, and how AI tools are transforming the way engineers work. Armin shares his view on what continues to make certain languages worth learning, and how agentic coding is driving people to work more, sometimes to their own detriment.

We also discuss:
• Why the Python 2 to 3 migration was more challenging than expected
• How Python, Go, Rust, and TypeScript stack up for different kinds of work
• How AI tools are changing the need for unified codebases
• What Armin learned about error handling from his time at Sentry
• And much more

Jump to interesting parts:
• (06:53) How Python, Go, and Rust stack up and when to use each one
• (30:08) Why Armin has changed his mind about AI tools
• (50:32) How important are language choices from an error-handling perspective?

Timestamps:
(00:00) Intro
(01:34) Why the Python 2 to 3 migration created so many challenges
(06:53) How Python, Go, and Rust stack up and when to use each one
(08:35) The friction points that make Rust a bad fit for startups
(12:28) How Armin thinks about choosing a language for building a startup
(22:33) How AI is impacting the need for unified code bases
(24:19) The use cases where AI coding tools excel
(30:08) Why Armin has changed his mind about AI tools
(38:04) Why different programming languages still matter but may not in an AI-driven future
(42:13) Why agentic coding is driving people to work more and why that’s not always good
(47:41) Armin’s error-handling takeaways from working at Sentry
(50:32) How important is language choice from an error-handling perspective
(56:02) Why the current SDLC still doesn’t prioritize error handling
(1:04:18) The challenges language designers face
(1:05:40) What Armin learned from working in startups and who thrives in that environment
(1:11:39) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

The era of agentic AI is here, and autonomous agents are already redefining the rules of marketing by acting on your behalf. This revolution, however, runs into a wall: your data, scattered across silos, leaves these agents blind and ineffective. Snowflake is not just a database; it is the operating system of this new era. By unifying all your data into a single source of truth and providing the built-in AI tools to construct intelligent, governed agents, we turn your infrastructure into a strategic advantage.

Data clean rooms are evolving rapidly, moving from privacy-preserving data collaboration environments to powerful engines for AI- and ML-driven insights. This session will dive into the technical architecture and product capabilities that enable clean rooms to support advanced use cases across the marketing lifecycle, from identity resolution and lookalike modeling to cross-channel attribution and real-time optimization. Beyond advertising, we will examine how these innovations are scaling into verticals like retail and healthcare, where secure data collaboration is unlocking next-generation personalization, predictive analytics, and clinical insights.

Michelin Lifestyle has rethought its customer engagement by combining data and AI, marketing and tech, to build an innovative digital ecosystem. Discover how this ambitious data marketing project enabled Michelin to deliver value quickly and sustainably, thanks to the power of the imagino and Snowflake partnership.