talk-data.com

Topic: Data Streaming

Tags: realtime, event_processing, data_flow

6 tagged activities

Activity Trend: peak of 70 activities per quarter, 2020-Q1 through 2026-Q1

Activities

Showing filtered results (filtering by: Joe Reis)

The world of data is being reset by AI, and the infrastructure needs to evolve with it. I sit down with streaming legend Tyler Akidau to discuss how the principles of stream processing are forming the foundation for the next generation of "agentic AI" systems. Tyler, who was an AI cynic until recently, explains why he's now convinced that AI agents will fundamentally change how businesses operate and what problems we need to solve to deploy them safely. Key topics we explore:

From Human Analytics to Agentic Systems: How data architectures built for human analysis must be re-imagined for a world with thousands of AI agents operating at machine speed.

Auditing Everything: Why managing AI requires a new level of governance where we must record all data an agent touches, not just metadata, to diagnose its complex and opaque behavior.

The End of Windowing's Dominance: Tyler reflects on the influential Dataflow paper he co-authored and explains why he now sees a table-based abstraction as a more powerful and user-friendly model than focusing on windowing.

The D&D Alignment of AI: Tyler's brilliant analogy for why enterprises are struggling to adopt AI: we're trying to integrate "chaotic" agents into systems built for "lawful good" employees.

A Reset for the Industry: Why the rise of AI feels like the early 2010s of streaming, where the problems are unsolved and everyone is trying to figure out the answers.
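A rough way to see the windowing-versus-tables contrast described above (a minimal Python sketch over made-up click events, not any particular engine's API): a windowed pipeline keys its results by (key, window start), while a table-oriented model treats the same stream as a series of upserts into one continuously updated row per key.

    from collections import defaultdict

    # Hypothetical click stream: (user_id, event_time_in_seconds)
    events = [("a", 3), ("b", 7), ("a", 61), ("a", 65), ("b", 130)]
    WINDOW = 60  # fixed 60-second windows

    # Windowing view: results are keyed by (user, window start)
    windowed = defaultdict(int)
    for user, ts in events:
        windowed[(user, ts // WINDOW * WINDOW)] += 1
    # -> {('a', 0): 1, ('b', 0): 1, ('a', 60): 2, ('b', 120): 1}

    # Table view: each event is an upsert into a running-total table;
    # consumers simply read the latest state of each row
    table = defaultdict(int)
    for user, _ts in events:
        table[user] += 1
    # -> {'a': 3, 'b': 2}

In the table framing, a window boundary becomes just another column you could group by rather than the central abstraction, which is roughly the shift the episode discusses.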

Coalesce 2024: Mixed model arts: The convergence of data modeling across apps, analytics, and AI

For decades, siloed data modeling has been the norm across applications, analytics, and machine learning/AI. However, the emergence of AI, streaming data, and “shifting left” is changing data modeling, making siloed data approaches insufficient for the diverse world of data use cases. Today's practitioners must possess an end-to-end understanding of the myriad techniques for modeling data throughout the data lifecycle. This presentation covers "mixed model arts," which advocates converging various data modeling methods and innovating new ones.

Speaker: Joe Reis, Author, Nerd Herd

Read the blog to learn about the latest dbt Cloud features announced at Coalesce, designed to help organizations embrace analytics best practices at scale: https://www.getdbt.com/blog/coalesce-2024-product-announcements

For decades, data modeling has been fragmented by use cases: applications, analytics, and machine learning/AI. This leads to data siloing and “throwing data over the wall.”

With the emergence of AI, streaming data, and “shifting left” changing data modeling, these siloed approaches are insufficient for the diverse world of data use cases. Today's practitioners must possess an end-to-end understanding of the myriad techniques for modeling data throughout the data lifecycle. This presentation covers "mixed model arts," which advocates converging various data modeling methods and innovating new ones.

Johnny Graettinger (CTO of Estuary) joins the show to give a clinic on streaming and immutable logs. We cover a lot of ground in this technical deep dive. Enjoy!

Estuary: https://estuary.dev/

Gazette: https://gazette.readthedocs.io/en/latest/

Github (Estuary Flow): https://github.com/estuary/flow

LinkedIn: https://www.linkedin.com/in/johngraettinger/
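As a rough illustration of the immutable-log idea the episode above revolves around (a toy Python sketch, not Gazette's or Flow's actual API): writers only ever append records, and any current state is derived by replaying the log.

    # Toy append-only log: records are never mutated or deleted
    log = []

    def append(key, value):
        log.append((key, value))

    def materialize():
        # Derive current state by replaying the whole log; later records win
        state = {}
        for key, value in log:
            state[key] = value
        return state

    append("user:1", {"plan": "free"})
    append("user:1", {"plan": "pro"})
    print(materialize())  # {'user:1': {'plan': 'pro'}}

Because the log itself never changes, downstream views like this one can be rebuilt or re-derived with different logic at any time.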

Is data modeling on life support? I posed this question on LinkedIn earlier this week. It got a fair number of replies, some supportive and others saying I'm full of sh*t. In this 5-minute Friday nerdy rant, I unpack what I mean by data modeling being on life support, and where I think data modeling needs to go given newer practices like streaming and machine learning, which aren't currently discussed in data modeling circles.

LinkedIn post about data modeling on life support: https://www.linkedin.com/posts/josephreis_dataengineering-datamodeling-data-activity-7048722463010013185-OyIy

#dataengineering #datamodel #data


If you like this show, give it a 5-star rating on your favorite podcast platform.

Purchase Fundamentals of Data Engineering at your favorite bookseller.

Check out my substack: https://joereis.substack.com/

Summary

Data engineering is a large and growing subject, with new technologies, specializations, and "best practices" emerging at an accelerating pace. This podcast does its best to explore this fractal ecosystem, and has been at it for the past 5+ years. In this episode Joe Reis, founder of Ternary Data and co-author of "Fundamentals of Data Engineering", turns the tables and interviews the host, Tobias Macey, about his journey into podcasting, how he runs the show behind the scenes, and the other things that occupy his time.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today we’re flipping the script. Joe Reis of Ternary Data will be interviewing me about my time as the host of this show and my perspectives on the data ecosystem.

Interview

Introduction

How did you get involved in the area of data management?

Now I’ll hand it off to Joe…

Joe’s Notes

You do a lot of podcasts. Why?

Podcast.init started in 2015, and your first episode of the Data Engineering Podcast was published January 14, 2017. Walk us through the start of these podcasts. Why not a data science podcast? Why data engineering?

You’ve published 306 shows of the Data Engineering Podcast, plus 370 for the init podcast, and now you’ve got a new ML podcast. How have you kept the motivation over the years?

What’s the process for the show (finding guests, topics, etc. … recording, publishing)? It’s a lot of work. Walk us through this process.

You’ve done a ton of shows and have a lot of context with what’s going on in the field of both data engineering and Python. What have been some of the