talk-data.com

Topic: KPI (Key Performance Indicator)

Tags: metrics, performance_measurement, business_analytics

109 tagged activities

Activity Trend: peak of 8 activities per quarter (2020-Q1 to 2026-Q1)

Activities

109 activities · Newest first

Microsoft Power BI Quick Start Guide - Fourth Edition

Bring your data to life with the ultimate beginner's guide to Power BI, now featuring Microsoft Fabric, Copilot, and full-color visuals to make learning data modeling, storytelling, and dashboards easier and faster than ever.

Key Features
- Build data literacy and gain confidence using Power BI through real-world, beginner-friendly examples
- Learn to shape, clean, and model data using Power BI Desktop and Power Query, with zero experience required
- Build vibrant, accurate reports and dashboards with real-world modeling examples

Book Description
Updated with the latest innovations in Power BI, including integration with Microsoft Fabric for seamless data unification and Copilot for AI-powered guidance, this comprehensive guide empowers you to build compelling reports and dashboards from the ground up. Whether you're new to Power BI or stepping into a data role, this book provides a friendly, approachable introduction to business intelligence and data storytelling.

You'll start with the Power BI Desktop interface and its core functionality, then move into shaping and cleaning your data using the Power Query Editor. From designing intuitive data models to writing your first DAX formulas, you'll develop practical skills that apply directly to real-world scenarios. The book emphasizes how to use visualizations and narrative techniques to turn numbers into meaningful insights.

The chapters focus on hands-on, real-world examples—like analyzing sales trends, tracking KPIs, and cleaning messy data. You'll learn to build and refresh reports, scale your Power BI setup, and enhance your solutions using Microsoft Fabric and Copilot. Fabric unifies analytics across your organization, while Copilot speeds up your workflow with AI-driven insights and report suggestions.

By the end of the book, you'll have the confidence and experience to turn raw data into insightful, impactful dashboards.

What you will learn
- Understand why data literacy matters in decision-making and careers
- Connect to data using import, DirectQuery, and live connection modes
- Clean and transform data using Power Query Editor and dataflows
- Design reports with visuals that support clear data storytelling
- Apply row-level security to enforce access and data protection
- Manage and monitor Power BI cloud for scalability and teamwork
- Use AI tools like Copilot to speed up prep and generate insights
- Learn Microsoft Fabric basics to enable unified data experiences

Who this book is for
This book is ideal for anyone looking to build a solid foundation in Power BI, regardless of prior experience. Whether you're just starting out or stepping into a new role that involves data, you'll find clear, approachable guidance throughout. The step-by-step tutorials and real-world examples make it easy to follow along—even if it's your first time working with business intelligence tools.
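The sales-trend and KPI examples the book builds in Power BI use DAX, but the underlying idea is tool-agnostic. As a rough sketch in Python with pandas (illustrative data and column names, not the book's code), a month-over-month revenue KPI looks like this:

```python
import pandas as pd

# Hypothetical sales rows; in Power BI this would be a modeled fact table.
sales = pd.DataFrame({
    "order_date": pd.to_datetime(
        ["2024-01-15", "2024-01-30", "2024-02-10", "2024-03-05"]
    ),
    "amount": [1200.0, 800.0, 1500.0, 900.0],
})

# Aggregate to monthly revenue, then compute month-over-month growth,
# analogous to a DAX measure comparing a period to the prior one.
monthly = sales.groupby(sales["order_date"].dt.to_period("M"))["amount"].sum()
mom_growth = monthly.pct_change()

print(monthly)      # 2024-01: 2000.0, 2024-02: 1500.0, 2024-03: 900.0
print(mom_growth)   # NaN, -0.25, -0.40
```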

Financial Modeling and Reporting with Microsoft Power BI

Design powerful financial reports in Power BI by building models, measures, and dashboards tailored for real-world accounting and analytics.

Key Features
- Build a complete financial data model from ledgers, journals, and budgets
- Master DAX for income statements, KPIs, and performance analysis
- Learn Power BI Paginated and AI tools for printable and predictive reporting
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Power BI for Financial Reporting is the definitive guide to designing high-performance, flexible, and insightful financial reports using Power BI. This book empowers finance and BI professionals to create everything from trial balances to enterprise-wide performance dashboards with ease and precision.

The book starts by helping you define your reporting goals and data sources, mapping these needs to Power BI's capabilities. You'll then build a core financial data model—covering ledger transactions, charts of accounts, and multi-company support. As you proceed, you'll integrate complex DAX measures, handle foreign exchange and journal entries, and extend your model with budgeting and inventory data. Each chapter builds toward a comprehensive suite of reports, complete with visual best practices and tested metrics. You'll learn to streamline datasets using Power Query, test for data integrity, and generate printable reports via Power BI Paginated. The final chapters dive into using AI, predictive analytics, and Microsoft Fabric to future-proof your reporting. Whether you're consolidating data across systems or evolving your reports for changing business needs, this hands-on guide ensures you're prepared to meet the demands of modern finance.

What you will learn
- Build core financial models from ledgers and accounts
- Create Trial Balance and Income Statements using DAX
- Optimize Power BI with Power Query and data transformation
- Add budgets, targets, and KPIs to performance dashboards
- Integrate inventory data for nuanced stock reporting
- Produce printable reports using Power BI Paginated
- Apply AI for report generation and predictive analytics
- Test, tune, and evolve reports for secure, scalable use

Who this book is for
This book is for finance professionals, accountants, financial analysts, and BI developers who want to leverage Power BI to improve, automate, and future-proof their financial reporting. Whether consolidating data from ERPs, building reports across entities, or exploring advanced Power BI features, this book equips readers with practical skills and strategic insight.

Organizations continue to struggle to prove the value of agentic analytics initiatives, often because AI systems lack the context needed to connect and interpret operational metrics. To unlock the full value of AI, you need a robust metrics framework. A metrics framework provides a structured approach to measuring success by aligning high-level strategy with daily operations, enabling both human and agentic data-driven decision-making. By distilling strategy into clear, simple, actionable KPIs, these frameworks enhance transparency and yield insights for strategic recommendations and measurable business value.
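To make the idea concrete, here is a minimal sketch (in Python, with invented metric names and targets, not tied to any particular framework or product) of KPIs expressed as structured objects that both people and agents can read in context:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """One metric tied to a strategic objective, readable by humans and agents."""
    name: str
    objective: str   # the high-level strategy this KPI rolls up to
    target: float
    current: float

    def attainment(self) -> float:
        # Fraction of target achieved; the score an agent would act on.
        return self.current / self.target if self.target else 0.0

# A tiny framework: each KPI carries its strategic context with it.
framework = [
    KPI("monthly_active_users", "Grow product adoption", target=50_000, current=42_300),
    KPI("net_revenue_retention", "Retain existing customers", target=1.10, current=1.04),
]

for kpi in framework:
    print(f"{kpi.name}: {kpi.attainment():.0%} of target ({kpi.objective})")
```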

Becoming data-driven is not just a matter of tools: it is above all a cultural and collaborative transformation, one that means changing day-to-day working habits. At ENGIE, we put data at the service of the group's transformation and of the energy transition. Come discover how ENGIE's Networks GBU is tackling this collective challenge, spanning data governance, alignment on KPIs, and the emergence of a data culture.

Power BI for Finance

Build effective data models and reports in Power BI for financial planning, budgeting, and valuations with practical templates, logic, and step-by-step guidance. Free with your book: DRM-free PDF version + access to Packt's next-gen Reader.

Key Features
- Engineer optimal star schema data models for financial planning and analysis
- Implement common financial logic, calendars, and variance calculations
- Create dynamic, formatted reports for income statements, balance sheets, and cash flow
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Martin Kratky brings his global experience of over 20 years as co-founder of Managility and creator of Acterys to empower CFOs and accountants with Power BI for Finance through this hands-on guide to streamlining and enhancing financial processes. Starting with the foundation of every effective BI solution, a well-designed data model, the book shows you how to structure star schemas and integrate common financial data sources like ERP and accounting systems. You'll then learn to implement key financial logic using DAX and M, covering calendars, KPIs, and variance calculations. The book offers practical advice on creating clear and compliant financial reports, such as income statements, balance sheets, and cash flows, with visual design and formatting best practices.

With dedicated chapters on advanced workflows, you'll learn how to handle multi-currency setups, perform group consolidations, and implement planning models like rolling forecasts, annual budgets, and sales and operations planning (S&OP). As you advance, you'll gain insights from real-world case studies covering company valuations, Excel integration, and the use of write-back methods with Dynamics Business Performance Planning and Acterys. The concluding chapters highlight how AI and Copilot enhance financial analytics. Email sign-up and proof of purchase required.

What you will learn
- Apply multi-currency handling and group consolidation techniques in Power BI
- Model discounted cash flow and company valuation scenarios
- Design and manage write-back workflows with Dynamics BPP and Acterys
- Integrate Excel and Power BI using live connections and cube formulas
- Utilize AI, Copilot, and LLMs to enhance automation and insight generation
- Create complete finance-focused dashboards for sales and operations planning

Who this book is for
This book is for finance professionals including CFOs, FP&A managers, controllers, and certified accountants who want to enhance reporting, planning, and forecasting using Power BI. Basic familiarity with Power BI and financial concepts is recommended to get the most out of this hands-on guide.
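The variance calculations the book implements as DAX measures follow a simple actual-versus-budget pattern. Here is a generic Python illustration (hypothetical account names and figures, not the book's code):

```python
import pandas as pd

# Hypothetical actuals vs. budget by account, as a finance model might expose them.
df = pd.DataFrame({
    "account": ["Revenue", "COGS", "Opex"],
    "actual": [120_000.0, -48_000.0, -35_000.0],
    "budget": [110_000.0, -45_000.0, -36_000.0],
})

# Absolute and percentage variance: the two measures most financial
# reports (income statements, KPI cards) are built around.
df["variance"] = df["actual"] - df["budget"]
df["variance_pct"] = df["variance"] / df["budget"].abs()

print(df)
```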

Radical Self-Service: Synthesia's decision-making transformation with Omni and Snowflake

When Ed Mancey joined Synthesia as the first data hire, the company had no centralized data stack, conflicting KPIs, and zero self-service. Within a few months, he rolled out Omni on top of Snowflake — enabling business teams to answer their own questions and make better decisions. Now, the sales development team is improving outbound efficiency, sales managers are increasing pipeline conversion, and board reporting runs entirely in Omni. In this session, Edward Mancey will share how they scaled analytics across the business by creating clear lines of responsibility and empowering their business users to move fast and build what they need.

Getting Started with Taipy

Share your machine learning models, create chatbots, and quickly build and deploy insightful dashboards using Taipy with this hands-on book featuring real-world application examples from multiple industries. Free with your book: DRM-free PDF version + access to Packt's next-gen Reader.

Key Features
- Create visually compelling, interactive data applications with Taipy
- Bring predictive models to end users and create data pipelines to compare scenarios with what-if analyses
- Go beyond prototypes to build and deploy production-ready applications using the cloud provider of your choice
- Purchase of the print or Kindle book includes a free PDF eBook in full color

Book Description
While data analysts, data scientists, and BI experts have the tools to analyze data, build models, and create compelling visuals, they often struggle to translate these insights into practical, user-friendly applications that help end users answer real-world questions, such as identifying revenue trends, predicting inventory needs, or detecting fraud, without wading through complex code. This book is a comprehensive guide to overcoming this challenge.

This book teaches you how to use Taipy, a powerful open-source Python library, to build intuitive, production-ready data apps quickly and efficiently. Instead of creating prototypes that nobody uses, you'll learn how to build faster applications that process large amounts of data for multiple users and deliver measurable business impact. Taipy does the heavy lifting to enable your users to visualize their KPIs, interact with charts and maps, and compare scenarios for better decision-making. You'll learn to use Taipy to build apps that make your data accessible and actionable in production environments like the cloud or Docker. By the end of this book, you won't just understand Taipy, you'll be able to transform your data skills into impactful solutions that address real-world needs and deliver valuable insights. Email sign-up and proof of purchase required.

What you will learn
- Explore Taipy, its use cases, and how it's different from other projects
- Discover how to create visually appealing interactive apps, display KPIs, charts, and maps
- Understand how to compare scenarios to make better decisions
- Connect Taipy applications to several data sources and services
- Develop apps for diverse use cases, including chatbots, dashboards, ML apps, and maps
- Deploy Taipy applications on different types of servers and services
- Master advanced concepts for simplifying and accelerating your development workflow

Who this book is for
If you're a data analyst, data scientist, or BI analyst looking to build production-ready data apps entirely in Python, this book is for you. If your scripts and models sit idle because non-technical stakeholders can't use them, this book shows you how to turn them into full applications fast with Taipy, so your work delivers real business value. It's also valuable for developers and engineers who want to streamline their data workflows and build UIs in pure Python.
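For a sense of what Taipy code looks like, here is a minimal KPI-page sketch based on Taipy's documented Gui and Markdown-page basics (an illustrative toy under those assumptions, not an example from the book):

```python
from taipy.gui import Gui

# Values bound into the page; in a real app these would come from a data model.
revenue_actual = 87_500
revenue_target = 100_000
attainment = f"{revenue_actual / revenue_target:.0%}"

# Taipy pages mix Markdown with <|...|> visual elements bound to Python state.
page = """
# Revenue KPI

Actual: <|{revenue_actual}|>

Target: <|{revenue_target}|>

Attainment: <|{attainment}|>
"""

if __name__ == "__main__":
    Gui(page).run()
```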

tesa SE is a global adhesive manufacturer. In its highly automated tape production process, abnormal quality and efficiency events must be detected with very low latency to avoid high costs.

Utilizing Snowflake's machine learning capabilities, tesa SE monitors various KPIs that indicate whether the production process is running correctly.

tesa's newest use case aims to reduce waste during the production process using anomaly detection models, which are trained on Snowflake and used for inference at the edge for optimal latency.

The machine learning pipeline components are built and served using Snowflake features such as the Snowflake CLI and Snowpark pandas, streamlining the end-to-end ML process.
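The pattern described, training anomaly detection centrally and running inference at the edge, can be sketched generically. The following Python example uses scikit-learn's IsolationForest with made-up sensor KPIs; it stands in for the Snowflake-side pipeline rather than reproducing it:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for sensor KPIs collected during normal production runs
# (e.g., line speed, tension, temperature per time window).
normal_runs = rng.normal(loc=[50.0, 120.0, 80.0], scale=[2.0, 5.0, 3.0], size=(1000, 3))

# Train centrally (in tesa's case, on Snowflake); the fitted model is then
# shipped to edge devices for low-latency scoring.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_runs)

# At the edge: score a fresh reading; -1 flags an anomaly, 1 is normal.
reading = np.array([[49.5, 160.0, 81.0]])  # abnormal tension value
print(model.predict(reading))  # e.g., [-1]
```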

At Doctolib, data governance is not limited to compliance: it actively supports our company strategy. In this session, Diana Carrondo, Data Governance Lead at Doctolib, and Tristan Mayer, General Manager Catalog at Coalesce, will share how a new data catalog enabled an offensive approach to governance. You will learn how Doctolib moved past the limits of its previous tool by improving adoption, structuring its taxonomy to better protect data, integrating the catalog into its data governance KPIs, and connecting it to its internal AI tools. A concrete experience report on turning your catalog into a strategic lever.

The cost allocation process in Alteryx relies on automated workflows to accurately distribute FTE costs across activity centers according to predefined criteria such as headcount or volumes, while respecting specific cost allocation and KPI calculation rules.

1. Data ingestion and preparation

Alteryx connects to multiple sources (for example, ERP, CRM, cloud storage) to extract data on FTEs, costs, and volumes. The process aggregates, prepares, and aligns these disparate datasets to create a unified cost base.

2. Data quality improvement

Dynamic transformation rules are applied to ensure consistency, remove duplicates, handle missing values, and standardize data types. Data profiling tools provide visibility into anomalies and outliers that could affect the allocation logic.

3. Cost allocation logic

This step defines flexible allocation rules and validation checkpoints, ranging from simple ratios to dynamic rules driven by business needs, based on cost drivers such as FTEs and volumes, to guarantee accurate KPI calculations (a worked sketch of ratio-based allocation follows step 5 below).

4. Generative AI integration

Generative AI functions (for example via OpenAI or Alteryx's GenAI tools) strengthen the workflow by enabling:

Automatic generation of data schemas matched to a target format.

Copilot-style assistance for creating transformations from natural-language instructions.

Creation of dynamic allocation rules.

5. Output and visualization

Final allocations can be exported to reporting tools, dashboards, or data lakes. Users can review allocation summaries, variances, and drill-down views to support decision-making through custom analytic applications.
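As referenced in step 3, here is a minimal worked example of ratio-based cost allocation (plain Python with pandas and invented figures; Alteryx expresses the same logic through workflow tools rather than code):

```python
import pandas as pd

# A shared cost pool to distribute, e.g., central IT costs.
cost_pool = 90_000.0

# Allocation driver per activity center: here, FTE headcount.
centers = pd.DataFrame({
    "activity_center": ["Production", "Sales", "Support"],
    "fte": [60, 25, 15],
})

# Simple ratio rule: each center receives its share of the driver total.
centers["allocation_ratio"] = centers["fte"] / centers["fte"].sum()
centers["allocated_cost"] = cost_pool * centers["allocation_ratio"]

print(centers)
# Production gets 60%, Sales 25%, Support 15% of the 90,000 pool.
```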

Path to Stellar Business Performance Analysis: A Design and Implementation Handbook

Business performance analysis is central to any business, as it helps to make or mend products, services, and processes. This book provides several blueprints for setting up business performance analytics (BPA) shops, from process layout for performance measures to tracking their underlying metrics using website tools such as Google Analytics and Looker Studio. Delivering satisfying user experiences in the context of overarching business objectives is key to elevated business performance. The book takes website user-behavior tracking from generic to specific, KPI scenario-based tracking using Google Analytics and Google Tag Manager, and stands out by helping you create fit-for-purpose, coherent performance analysis blueprints that integrate performance measure creation with website analytics.

What You Will Learn
- Design a Business Performance Analysis function
- Analyze performance metrics with website analytics tools
- Identify business performance metrics for common product scenarios

Who This Book Is For
Senior leaders, product managers, product owners, UX and web analytics professionals

Face To Face
by Siddharth Rajagopal (Data as the Fourth Pillar), Sujay Dutta (Data as the Fourth Pillar)

Reasons why data should be the fourth pillar for every enterprise. Boards, CEOs, and CxOs must understand why they should treat data strategically. Enterprise use cases such as AI drive the need for data that scores high on the quality, compliance, and speed dimensions.

- Present a framework for enterprises to understand their current data challenges. 

- Key principles for the data pillar 

- Role of the Chief Data Officer (CDO): nurture demand for data while taking steps to fulfill that demand through an agile data operating model (DOM), enabled by people, processes, and technologies.

- Measuring the impact of the data pillar: introduces KPIs such as Total Addressable Value through data (TAV) and Expected Addressable Value through data (EAV).

- A Maturity Framework for every enterprise to track and progress its data maturity journey.

Panel discussion on model collapse: the unnoticed degradation of AI/ML models, with risks for reliability, decision-making, and the business. Experts including Maartje Vennema, Sako Arts, and Simon Koolstra discuss causes, monitoring, metrics, KPIs, tooling, impact, and defense strategies. This session is a must-see for data professionals working on governance, risk & compliance in AI.

In this episode, we'll chat with Carly Taylor, Field CTO of Gaming at Databricks, to explore the fascinating world of data analytics in the gaming industry, where every click, quest, and respawn generates insights that shape the games we love. Carly shares her experience working in gaming to help harness data for better gameplay and smarter monetization. She'll break down what analysts, data scientists, and sales engineers actually do in gaming and how teams turn raw data into real-time decisions. Whether you're a player, a data nerd, or someone who wants to turn both into a career, this episode is your walkthrough guide to data in gaming.

What You'll Learn:
- How gaming companies use data to optimize player experience and business outcomes
- What it's like to work in a field engineering or customer-facing analyst role
- The tools, KPIs, and best practices for success
- How to break into a data role in gaming and what skills to focus on

Stay updated with Carly's latest by subscribing to her Substack. Register for free to be part of the next live session: https://bit.ly/3XB3A8b

Todd Olson joins me to talk about making analytics worth paying for and relevant in the age of AI. The CEO of Pendo, an analytics SaaS company, Todd shares how the company evolved to support a wider audience by simplifying dashboards, removing user roadblocks, and leveraging AI to both generate and explain insights. We also talked about product management roles at Pendo. Todd views AI product management as a natural evolution for adaptable teams and explains how he thinks about hiring product roles in 2025. Todd also shares how he measures successful user adoption of his product through "time to value" and "stickiness" rather than vanity metrics like time spent.

Highlights / Skip to:

- How Todd has addressed analytics apathy over the past decade at Pendo (1:17)
- Getting back to basics and not barraging people with more data and power (4:02)
- Pendo's strategy for keeping the product experience simple without abandoning power users (6:44)
- Whether Todd is considering using an LLM (prompt-based) answer-driven experience with Pendo's UI (8:51)
- What Pendo looks for when hiring product managers right now, and why (14:58)
- How Pendo evaluates AI product managers, specifically (19:14)
- How Todd Olson views AI product management compared to traditional software product management (21:56)
- Todd's concerns about the probabilistic nature of AI-generated answers in the product UX (27:51)
- What KPIs Todd uses to know whether Pendo is doing enough to reach its goals (32:49)
- Why being able to tell what answers are best will become more important as choice increases (40:05)

Quotes from Today’s Episode

“Let’s go back to classic Geoffrey Moore Crossing the Chasm, you’re selling to early adopters. And what you’re doing is you’re relying on the early adopters’ skill set and figuring out how to take this data and connect it to business problems. So, in the early days, we didn’t do anything because the market we were selling to was very, very savvy; they’re hungry people, they just like new things. They’re getting data, they’re feeling really, really smart, everything’s working great. As you get bigger and bigger and bigger, you start to try to sell to a bigger TAM, a bigger audience, you start trying to talk to the these early majorities, which are, they’re not early adopters, they’re more technology laggards in some degree, and they don’t understand how to use data to inform their job. They’ve never used data to inform their job. There, we’ve had to do a lot more work.” Todd (2:04 - 2:58) “I think AI is amazing, and I don’t want to say AI is overhyped because AI in general is—yeah, it’s the revolution that we all have to pay attention to. Do I think that the skills necessary to be an AI product manager are so distinct that you need to hire differently? No, I don’t. That’s not what I’m seeing. If you have a really curious product manager who’s going all in, I think you’re going to be okay. Some of the most AI-forward work happening at Pendo is not just product management. Our design team is going crazy. And I think one of the things that we’re seeing is a blend between design and product, that they’re always adjacent and connected; there’s more sort of overlappiness now.” Todd (22:41 - 23:28) “I think about things like stickiness, which may not be an aggregate time, but how often are people coming back and checking in? And if you had this companion or this agent that you just could not live without, and it caused you to come into the product almost every day just to check in, but it’s a fast check-in, like, a five-minute check-in, a ten-minute check-in, that’s pretty darn sticky. That’s a good metric. So, I like stickiness as a metric because it’s measuring [things like], “Are you thinking about this product a lot?” And if you’re thinking about it a lot, and like, you can’t kind of live without it, you’re going to go to it a lot, even if it’s only a few minutes a day. Social media is like that. Thankfully I’m not addicted to TikTok or Instagram or anything like that, but I probably check it nearly every day. That’s a pretty good metric. It gets part of my process of any products that you’re checking every day is pretty darn good. So yeah, but I think we need to reframe the conversation not just total time. Like, how are we measuring outcomes and value, and I think that’s what’s ultimately going to win here.” Todd (39:57)
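Stickiness as Todd describes it is often operationalized as a DAU/MAU ratio. A quick, generic illustration in Python (hypothetical event log, not Pendo's actual definition):

```python
import pandas as pd

# Hypothetical event log: one row per user per active day.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 1, 2, 1],
    "date": pd.to_datetime([
        "2025-06-01", "2025-06-02", "2025-06-03",
        "2025-06-01", "2025-06-15", "2025-06-20",
        "2025-06-10", "2025-06-28", "2025-06-29",
    ]),
})

# Average daily actives divided by monthly actives: how much of the monthly
# audience shows up on a typical day. Frequent short check-ins score high.
# (Averaged over active days here for brevity; a real version would average
# over every calendar day in the month.)
dau = events.groupby("date")["user_id"].nunique().mean()
mau = events["user_id"].nunique()
print(f"stickiness = {dau / mau:.0%}")
```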

Links

LinkedIn: https://www.linkedin.com/in/toddaolson/  X: https://x.com/tolson  [email protected] 

As business leaders seek the adoption of AI-enabled capabilities across all aspects of operations, fewer than a third of D&A leaders express confidence that their organization is ready to meet the challenges of AI-driven demand. This multi-group discussion will cover:
Which components of the D&A value delivery chain are most in need of evolution?
What are the best next-generation D&A organizational & operating models suited for the AI-era?
What are the best KPIs for measuring ‘AI-readiness’ among systems, teams, and leaders?

Summary In this episode of the Data Engineering Podcast we welcome back Nick Schrock, CTO and founder of Dagster Labs, to discuss the evolving landscape of data engineering in the age of AI. As AI begins to impact data platforms and the role of data engineers, Nick shares his insights on how it will ultimately enhance productivity and expand software engineering's scope. He delves into the current state of AI adoption, the importance of maintaining core data engineering principles, and the need for human oversight when leveraging AI tools effectively. Nick also introduces Dagster's new components feature, designed to modularize and standardize data transformation processes, making it easier for teams to collaborate and integrate AI into their workflows. Join in to explore the future of data engineering, the potential for AI to abstract away complexity, and the importance of open standards in preventing walled gardens in the tech industry.
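For readers who have not used Dagster, the asset-based model that components standardize looks roughly like this (a minimal sketch using Dagster's core asset API; the components feature itself layers scaffolding and configuration on top and is not shown here):

```python
from dagster import asset, materialize

@asset
def raw_orders():
    # Stand-in for an extract step; in practice this would pull from an API or warehouse.
    return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 80.0}]

@asset
def order_revenue(raw_orders):
    # Downstream asset: Dagster infers the dependency from the parameter name.
    return sum(row["amount"] for row in raw_orders)

if __name__ == "__main__":
    # Materialize both assets in dependency order.
    result = materialize([raw_orders, order_revenue])
    assert result.success
```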

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

This episode is brought to you by Coresignal, your go-to source for high-quality public web data to power best-in-class AI products. Instead of spending time collecting, cleaning, and enriching data in-house, use ready-made multi-source B2B data that can be smoothly integrated into your systems via APIs or as datasets. With over 3 billion data records from 15+ online sources, Coresignal delivers high-quality data on companies, employees, and jobs. It is powering decision-making for more than 700 companies across AI, investment, HR tech, sales tech, and market intelligence industries. A founding member of the Ethical Web Data Collection Initiative, Coresignal stands out not only for its data quality but also for its commitment to responsible data collection practices. Recognized as the top data provider by Datarade for two consecutive years, Coresignal is the go-to partner for those who need fresh, accurate, and ethically sourced B2B data at scale. Discover how Coresignal's data can enhance your AI platforms. Visit dataengineeringpodcast.com/coresignal to start your free 14-day trial.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

This is a pharmaceutical ad for Soda Data Quality. Do you suffer from chronic dashboard distrust? Are broken pipelines and silent schema changes wreaking havoc on your analytics? You may be experiencing symptoms of Undiagnosed Data Quality Syndrome — also known as UDQS. Ask your data team about Soda. With Soda Metrics Observability, you can track the health of your KPIs and metrics across the business — automatically detecting anomalies before your CEO does. It's 70% more accurate than industry benchmarks, and the fastest in the category, analyzing 1.1 billion rows in just 64 seconds. And with Collaborative Data Contracts, engineers and business can finally agree on what "done" looks like — so you can stop fighting over column names, and start trusting your data again. Whether you're a data engineer, analytics lead, or just someone who cries when a dashboard flatlines, Soda may be right for you. Side effects of implementing Soda may include: increased trust in your metrics, reduced late-night Slack emergencies, spontaneous high-fives across departments, fewer meetings and less back-and-forth with business stakeholders, and in rare cases, a newfound love of data. Sign up today to get a chance to win a $1000+ custom mechanical keyboard. Visit dataengineeringpodcast.com/soda to sign up and follow Soda's launch week. It starts June 9th.

Your host is Tobias Macey and today I'm interviewing Nick Schrock about lowering the barrier to entry for data platform consumers.

Interview

- Introduction
- How did you get involved in the area of data management?
- Can you start by giving your summary of the impact that the tidal wave of AI has had on data platforms and data teams?
- For anyone who hasn't heard of Dagster, can you give a quick summary of the project?
- What are the notable changes in the Dagster project in the past year?
- What are the ecosystem pressures that have shaped the ways that you think about the features and trajectory of Dagster as a project/product/community?
- In your recent release you introduced "components", which is a substantial change in how you enable teams to collaborate on data problems. What was the motivating factor in that work and how does it change the ways that organizations engage with their data?
  - Tension between being flexible and extensible vs. opinionated and constrained
  - Increased dependency on orchestration with LLM use cases
  - Reducing the barrier to contribution for data platforms/pipelines
  - Bringing application engineers into the mix
  - Challenges of meeting users/teams where they are (languages, platform investments, etc.)
- What are the most interesting, innovative, or unexpected ways that you have seen teams applying the Components pattern?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on the latest iterations of Dagster?
- When is Dagster the wrong choice?
- What do you have planned for the future of Dagster?

Contact Info

- LinkedIn

Parting Question

- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

- Dagster+ Episode
- Dagster Components Slide Deck
- The Rise Of Medium Code
- Lakehouse Architecture
- Iceberg
- Dagster Components
- Pydantic Models
- Kubernetes
- Dagster Pipes
- Ruby on Rails
- dbt
- Sling
- Fivetran
- Temporal
- MCP == Model Context Protocol

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA