Jovita Tam, a data and AI advisor with a background in engineering, law, and finance, joined Yuliia and Dumky to challenge how organizations approach governance. Jovita argues that data governance is a way of thinking, not a tool you purchase, explaining why culture eats strategy and why most governance programs fail as checkbox exercises. She shares her approach to helping executives see governance as an enabler rather than an obstacle, and why treating it purely as compliance or a cost center misses the point entirely. Jovita's LinkedIn: https://www.linkedin.com/in/jovitatam/
talk-data.com
Speaker: Dumky de Wilde (11 talks)
Talks & appearances (11 activities, newest first)
Ryan Dolley, VP of Product Strategy at GoodData and co-host of the Super Data Brothers podcast, joined Yuliia and Dumky to discuss the dbt-Fivetran merger and what it signals about the modern data stack's consolidation phase. After 16 years in BI and analytics, Ryan explains why BI adoption has been stuck at 27% for a decade and why simply adding AI chatbots won't solve it. He argues that at large enterprises, purchasing new software is actually the only viable opportunity to change company culture: not because of the features, but because it forces operational pauses and new ways of working. Ryan also shares his take that AI will struggle with BI because LLMs are trained to give emotionally satisfying answers rather than accurate ones. Ryan Dolley's LinkedIn
Thomas in't Veld, founder of Tasman Analytics, joined Yuliia and Dumky to discuss why data projects fail: teams obsess over tooling while ignoring proper data modeling and business alignment. Drawing on experience building analytics for 70-80 companies, Thomas explains why the best data model never changes unless the business changes, and how his team acts as "data therapists," forcing marketing and sales to agree on fundamental definitions. He shares his controversial take that data modeling sits more in analysis than in engineering, along with another hot take: analytics engineering is merging back into data engineering. Showing off your DAG at meetups, he argues, completely misses the point; business understanding is the critical differentiator, not your technology stack.
Elliot Foreman and Andrew DeLave from ProsperOps joined Yuliia and Dumky to discuss automated cloud cost optimization through commitment management. As Google go-to-market director and senior FinOps specialist, respectively, they explain how their platform manages over $4 billion in cloud spend by automating reserved instances, committed use discounts, and savings plans across AWS, Azure, and Google Cloud. The conversation covers the psychology behind commitment hesitation, the break-even math of cloud discounts, workload volatility optimization, and why they avoid AI in favor of deterministic algorithms for financial decisions. They share insights on managing complex multi-cloud environments, the human-versus-automation debate in FinOps, and practical strategies for reducing cloud costs while mitigating commitment risk.
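The break-even point mathematics mentioned here reduces to a simple relationship that can be sketched in a few lines. This is a generic illustration of commitment pricing (the flat-rate model and function names are my assumptions), not ProsperOps' actual algorithm:

```python
def break_even_utilization(discount: float) -> float:
    """Utilization at which a commitment exactly matches on-demand cost.

    With a discount d off the on-demand rate, every committed hour costs
    (1 - d) whether used or not, so the commitment pays off once you use
    more than a (1 - d) fraction of the hours you committed to.
    """
    return 1.0 - discount


def commitment_savings(on_demand_rate: float, discount: float,
                      committed_hours: float, used_hours: float) -> float:
    """Savings versus pure on-demand for a given level of actual usage.

    Usage beyond the commitment spills over to on-demand pricing; usage
    below it still pays the full committed amount (the risk side of the bet).
    """
    committed_rate = on_demand_rate * (1.0 - discount)
    on_demand_cost = on_demand_rate * used_hours
    committed_cost = (committed_rate * committed_hours
                      + on_demand_rate * max(0.0, used_hours - committed_hours))
    return on_demand_cost - committed_cost
```

With a 40% discount, for example, a commitment only starts saving money once you actually use more than 60% of the hours you committed to, which is exactly the hesitation the episode digs into.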
Kasriel Kay, who leads data democratization at Velotix, joined Yuliia and Dumky to challenge conventional wisdom about data governance and catalogs. Kasriel argues that data catalogs provide visibility but fail to deliver business value, comparing them to "buying JIRA and expecting agile practices." He advocates shifting from restrictive data governance to data enablement through policy-based access control that considers user attributes, data sensitivity, and business context. Kasriel explains how AI-driven policy engines can learn from organizational behavior to automatically grant appropriate data access while maintaining compliance, ultimately reducing time-to-insight and unlocking missed business opportunities.
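Policy-based access control of the kind Kasriel describes can be illustrated with a toy decision function that weighs user attributes, data sensitivity, and business context together. Everything below (the roles, purposes, sensitivity tiers, and masking action) is a hypothetical sketch, not Velotix's actual policy engine:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str         # user attribute, e.g. "analyst"
    purpose: str      # business context, e.g. "churn-analysis"
    sensitivity: str  # data classification: "public" | "internal" | "pii"

# Each policy: (allowed roles, allowed purposes, max sensitivity, action at that max)
POLICIES = [
    ({"analyst"}, {"churn-analysis"}, "pii", "mask"),
    ({"analyst", "engineer"}, {"reporting"}, "internal", "allow"),
]

LEVELS = {"public": 0, "internal": 1, "pii": 2}

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'mask', or 'deny' for a request.

    A request is granted only when a policy matches all three dimensions
    at once; sensitive (PII) data is masked rather than exposed outright.
    """
    for roles, purposes, max_sensitivity, action in POLICIES:
        if (req.role in roles
                and req.purpose in purposes
                and LEVELS[req.sensitivity] <= LEVELS[max_sensitivity]):
            # Apply the restrictive action only when the sensitive tier is reached.
            return action if req.sensitivity == "pii" else "allow"
    return "deny"  # default-deny: no matching policy means no access
```

The point of the example is the shape of the decision: access is a function of who you are, why you're asking, and what the data is, rather than a static grant on a table.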
Patrick Thompson, co-founder of Clarify and former co-founder of Iteratively (acquired by Amplitude), joined Yuliia and Dumky to discuss the evolution from data quality to decision quality. Patrick shares his experience building data contracts solutions at Atlassian and later developing analytics tracking tools. He challenges the assumption that AI will eliminate the need for structured data, arguing that while LLMs excel at understanding unstructured data, businesses still need deterministic systems for automation and decision-making. Patrick shares insights on why enforcing data quality at the source remains critical, even in an AI-first world, and explains his shift from analytics to CRM while maintaining focus on customer data unification and business impact over technical perfectionism. Tune in!
We're seeing the title "Analytics Engineer" continue to rise, in large part because individuals are realizing there's a name for the type of work they've found themselves doing more and more. In today's landscape, there's a real need for someone with Data Engineering chops and an eye for business use cases. We were fortunate to have one of the co-authors of The Fundamentals of Analytics Engineering, Dumky de Wilde, join us to discuss the ins and outs of this popular role! Listen in to hear more about the skills and responsibilities of the role, some fun analogies to help explain to your grandma what AEs do, and even tips for how people in this role can communicate the value and impact of their work to senior leadership! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Master the art and science of analytics engineering with Fundamentals of Analytics Engineering. This book takes you on a comprehensive journey from understanding foundational concepts to implementing end-to-end analytics solutions. You'll gain not just theoretical knowledge but practical expertise in building scalable, robust data platforms that meet organizational needs.

What this book will help you do
- Design and implement effective data pipelines leveraging modern tools like Airbyte, BigQuery, and dbt.
- Adopt best practices for data modeling and schema design to enhance system performance and develop clearer data structures.
- Learn advanced techniques for ensuring data quality, governance, and observability in your data solutions.
- Master collaborative coding practices, including version control with Git and strategies for maintaining well-documented codebases.
- Automate and manage data workflows efficiently using CI/CD pipelines and workflow orchestrators.

Author(s)
Dumky de Wilde, alongside six co-authors who are experienced professionals from various facets of the analytics field, delivers a cohesive exploration of analytics engineering. The authors blend their expertise in software development, data analysis, and engineering to offer actionable advice and insights, and their approachable style makes complex concepts understandable.

Who is it for?
This book is a perfect fit for data analysts and engineers curious about transitioning into analytics engineering. Aspiring professionals, as well as seasoned analytics engineers looking to deepen their understanding of modern practices, will find guidance here. It's tailored for individuals aiming to advance their careers in data engineering roles, covering fundamental through advanced topics.
We all know that data, like wine and cheese, becomes more valuable when combined. And, just like wine and cheese, the combination can lead to serious headaches. Whether you are emailing Excel files around, capturing data from thousands of IoT devices, or just joining your Google Analytics and sales data, you can benefit from following a structured process to minimize those headaches. After debugging yet another failed pipeline, I have distilled my experience of building data ingestion pipelines into 8 simple (though not necessarily easy) steps, from setting up triggers to archiving and retention.
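The eight steps themselves aren't enumerated in this teaser, but the overall shape of such a pipeline, from trigger through archiving, might look something like the minimal sketch below. The stage names and the file-based storage are illustrative assumptions, not the article's actual steps:

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def extract(source: Path) -> list[dict]:
    """Read raw JSONL records from a source file (stand-in for an API or export)."""
    with source.open() as f:
        return [json.loads(line) for line in f if line.strip()]

def validate(records: list[dict]) -> list[dict]:
    """Drop records missing required fields instead of failing the whole run."""
    return [r for r in records if "id" in r and "amount" in r]

def load(records: list[dict], target: Path) -> int:
    """Append validated records to the target store (here: a JSONL file)."""
    with target.open("a") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
    return len(records)

def archive(source: Path, archive_dir: Path) -> Path:
    """Move the raw input aside with a timestamp so failed runs can be replayed."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    dest = archive_dir / f"{stamp}_{source.name}"
    shutil.move(str(source), str(dest))
    return dest

def run_pipeline(source: Path, target: Path, archive_dir: Path) -> int:
    """In a real setup this would be fired by a scheduler or file-arrival trigger."""
    records = validate(extract(source))
    loaded = load(records, target)
    archive(source, archive_dir)
    return loaded
```

Keeping each stage as a separate function is what makes the "structured process" debuggable: when a run fails, the archived raw file tells you exactly which input to replay.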
As analysts, we are well aware of the legal and moral boundaries that come with handling sensitive data. But imagine for a moment that behind a desk at your competitor's office sits an analyst unbothered by those boundaries. His only goal is to gain a competitive advantage by exploiting the weak points in your analytics pipeline. He wants you to waste as much time, money, and credibility as possible while you deal with polluted source data, spam, data breaches, and legal DDoS attacks.