
Topic: Data Modelling

Tags: data_governance, data_quality, metadata_management

Activity trend: peak of 18 activities per quarter (2020-Q1 to 2026-Q1)

Activities

355 activities · Newest first

Building a marketing attribution model with dbt

This video covers what you'll need to build a marketing attribution data model and why it's important to have one to evaluate your business. It covers how the team at Grailed has built data models to determine where their users come from, where their orders come from, and more. It shares the value these models have unlocked for Grailed's marketing team and what they might unlock for yours!

Speaker: Evy Kho, Senior Operations Analyst, Grailed
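As a rough sketch of the core idea (not Grailed's actual model; the table and column names below are hypothetical), first-touch attribution can be expressed as a single query that credits each order to the user's earliest recorded touchpoint:

    -- Hypothetical tables: touchpoints(user_id, channel, touched_at),
    -- orders(order_id, user_id). Credit each order to the user's
    -- earliest recorded marketing touchpoint.
    with first_touch as (
        select user_id, min(touched_at) as first_touched_at
        from touchpoints
        group by user_id
    )
    select
        o.order_id,
        t.channel as attributed_channel
    from orders o
    join first_touch f on f.user_id = o.user_id
    join touchpoints t
      on t.user_id = f.user_id
     and t.touched_at = f.first_touched_at;

Real attribution models layer on multi-touch weighting, lookback windows, and identity resolution, but the overall shape is the same.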

Kimball in the context of the modern data warehouse: what's worth keeping, and what's not

Dimensional modeling as described in the Kimball Toolkit reached its 3rd edition 15 years ago, yet it remains the latest word in data modeling advice. So much is different in cloud warehouses that many of those best practices are now bad practices. In this video, Dave Fowler, the founder of Chartio and author of Cloud Data Management, goes over what no longer applies and what does.

How to complete a data modeling project at a $6B company

What does it take to implement a data modeling project when working at a company that's in hypergrowth? In this video, Vincey Au will take us through a recent project to incorporate mobile app data into Canva's sessionization model, with the added complexity of having to make this work on petabytes of data. She'll share some of the lessons learned along the way, and what they'd do differently if they were doing it again.

How to Audit Your Directed Acyclic Graph (DAG) and Create Modular Data Models

In a world where creating new models is as easy as creating new files, and creating links between those models is as easy as typing ref, a directed acyclic graph (DAG) can get pretty unwieldy!

A complex DAG makes it difficult to understand the upstream and downstream dependencies of a particular table.

The goal is to create a modular data model using staging models (base_, stg_) and mart models (int_, dim_, fct_), as sketched below.
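To make the naming concrete, here is a minimal hypothetical pair of dbt models: a staging model that renames raw columns, and a fact model that builds only on staging models via ref. The source, model, and column names are invented for illustration.

    -- models/staging/stg_orders.sql: one staging model per raw table,
    -- limited to renaming, casting, and light cleanup.
    select
        id         as order_id,
        user_id    as customer_id,
        created_at as ordered_at
    from {{ source('app_db', 'orders') }}

    -- models/marts/fct_orders.sql: mart models build on staging models
    -- via ref(), never on raw tables, which keeps the DAG layered.
    select
        o.order_id,
        o.customer_id,
        c.customer_name
    from {{ ref('stg_orders') }} o
    left join {{ ref('stg_customers') }} c
        on o.customer_id = c.customer_id

Because every mart model refs a staging model rather than a raw table, each raw source has exactly one entry point into the DAG, which is what keeps it auditable.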

In this video, Christine Berger of Fishtown Analytics will teach you how to apply the concepts of layering and modularity to your dbt project, all with a fun kitchen metaphor to keep things fresh!

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

Graph Databases in Action

Relationships in data often look far more like a web than an orderly set of rows and columns. Graph databases shine when it comes to revealing valuable insights within complex, interconnected data such as demographics, financial records, or computer networks. In Graph Databases in Action, experts Dave Bechberger and Josh Perryman illuminate the design and implementation of graph databases in real-world applications. You'll learn how to choose the right database solutions for your tasks, and how to use your new knowledge to build agile, flexible, and high-performing graph-powered applications!

About the Technology: Isolated data is a thing of the past! Now, data is connected, and graph databases—like Amazon Neptune, Microsoft Cosmos DB, and Neo4j—are the essential tools of this new reality. Graph databases represent relationships naturally, speeding the discovery of insights and driving business value.

About the Book: Graph Databases in Action introduces you to graph database concepts by comparing them with relational database constructs. You'll learn just enough theory to get started, then progress to hands-on development. Discover use cases involving social networking, recommendation engines, and personalization.

What's Inside:
Graph databases vs. relational databases
Systematic graph data modeling
Querying and navigating a graph
Graph patterns
Pitfalls and antipatterns

About the Reader: For software developers. No experience with graph databases required.

About the Authors: Dave Bechberger and Josh Perryman have decades of experience building complex data-driven systems and have worked with graph databases since 2014.

Quotes:
"A comprehensive overview of graph databases and how to build them using Apache tools." - Richard Vaughan, Purple Monkey Collective
"A well-written and thorough introduction to the topic of graph databases." - Luis Moux, EMO
"A great guide in your journey towards graph databases and exploiting the new possibilities for data processing." - Mladen Knežić, CROZ
"A great introduction to graph databases and how you should approach designing systems that leverage graph databases." - Ron Sher, Intuit

Summary Databases are limited in scope to the information that they directly contain. For analytical use cases you often want to combine data across multiple sources and storage locations. This frequently requires cumbersome and time-consuming data integration. To address this problem Martin Traverso and his colleagues at Facebook built the Presto distributed query engine. In this episode he explains how it is designed to allow for querying and combining data where it resides, the use cases that such an architecture unlocks, and the innovative ways that it is being employed at companies across the world. If you need to work with data in your cloud data lake, your on-premise database, or a collection of flat files, then give this episode a listen and then try out Presto today.
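For a flavor of what querying data in place looks like, a single Presto statement can join tables across catalogs. In this hypothetical example, a Hive data lake table is joined against a PostgreSQL application database; the catalog, schema, table, and column names are illustrative only.

    -- Presto addresses tables as catalog.schema.table; both catalogs
    -- are assumed to be configured in the cluster (names are invented).
    select u.region, sum(o.total_amount) as revenue
    from hive.analytics.orders o
    join postgresql.app_db.users u
      on o.user_id = u.id
    group by u.region;

No data is copied or pre-integrated; Presto pushes work to each connector and combines the results at query time.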

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to dataengineeringpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
Your host is Tobias Macey and today I’m interviewing Martin Traverso about PrestoSQL, a distributed SQL engine that queries data in place.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what Presto is and its origin story?

What was the motivation for releasing Presto as open source?

For someone who is responsible for architecting their organization’s data platform, what are some of the signals that Presto will be a good fit for them?

What are the primary ways that Presto is being used?

I interviewed your colleague at Starburst, Kamil, 2 years ago. How has Presto changed or evolved in that time, both technically and in terms of community and ecosystem growth?
What are some of the deployment and scaling considerations that operators of Presto should be aware of?
What are the best practices that have been established for working with data through Presto in terms of centralizing in a data lake vs. federating across disparate storage locations?
What are the tradeoffs of using Presto on top of a data lake vs. a vertically integrated warehouse solution?
When designing the layout of a data lake that will be interacted with via Presto, what are some of the data modeling considerations that can improve the odds of success?
What are some of the most interesting, unexpected, or innovative ways that you have seen Presto used?
What are the most interesting, unexpected, or challenging lessons that you have learned?

Summary Data warehouse technology has been around for decades and has gone through several generational shifts in that time. The current trends in data warehousing are oriented around cloud native architectures that take advantage of dynamic scaling and the separation of compute and storage. Firebolt is taking that a step further with a core focus on speed and interactivity. In this episode CEO and founder Eldad Farkash explains how the Firebolt platform is architected for high throughput, their simple and transparent pricing model to encourage widespread use, and the use cases that it unlocks through interactive query speeds.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to dataengineeringpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
Your host is Tobias Macey and today I’m interviewing Eldad Farkash about Firebolt, a cloud data warehouse optimized for speed and elasticity on structured and semi-structured data.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by describing what Firebolt is and your motivation for building it?
How does Firebolt compare to other data warehouse technologies and what unique features does it provide?
The lines between a data warehouse and a data lake have been blurring in recent years. Where on that continuum does Firebolt lie?
What are the unique use cases that Firebolt allows for?
How do the performance characteristics of Firebolt change the ways that an engineer should think about data modeling?
What technologies might someone replace with Firebolt?
How is Firebolt architected and how has the design evolved since you first began working on it?
What are some of the most challenging aspects of building a data warehouse platform that is optimized for speed?
How do you ha…

Semantic Modeling for Data

What value does semantic data modeling offer? As an information architect or data science professional, let’s say you have an abundance of the right data and the technology to extract business gold—but you still fail. The reason? Bad data semantics. In this practical and comprehensive field guide, author Panos Alexopoulos takes you on an eye-opening journey through semantic data modeling as applied in the real world. You’ll learn how to master this craft to increase the usability and value of your data and applications. You’ll also explore the pitfalls to avoid and dilemmas to overcome for building high-quality and valuable semantic representations of data.

Understand the fundamental concepts, phenomena, and processes related to semantic data modeling
Examine the quirks and challenges of semantic data modeling and learn how to effectively leverage the available frameworks and tools
Avoid mistakes and bad practices that can undermine your efforts to create good data models
Learn about model development dilemmas, including representation, expressiveness and content, development, and governance
Organize and execute semantic data initiatives in your organization, tackling technical, strategic, and organizational challenges

Pro Power BI Desktop: Self-Service Analytics and Data Visualization for the Power User

Deliver eye-catching and insightful business intelligence with Microsoft Power BI Desktop. This new edition has been updated to cover all the latest features of Microsoft’s continually evolving visualization product. New in this edition is help with storytelling—adapted to PCs, tablets, and smartphones—and the building of a data narrative. You will find coverage of templates and JSON style sheets, data model annotations, and the use of composite data sources. Also provided is an introduction to incorporating Python visuals and the much awaited Decomposition Tree visual.

Pro Power BI Desktop shows you how to use source data to produce stunning dashboards and compelling reports that you mold into a data narrative to seize your audience’s attention. Slice and dice the data with remarkable ease and then add metrics and KPIs to project the insights that create your competitive advantage. Convert raw data into clear, accurate, and interactive information with Microsoft’s free self-service BI tool. This book shows you how to choose from a wide range of built-in and third-party visualization types so that your message is always enhanced. You will be able to deliver those results on PCs, tablets, and smartphones, as well as share results via the cloud. The book helps you save time by preparing the underlying data correctly without needing an IT department to prepare it for you.

What You Will Learn:
Deliver attention-grabbing information, turning data into insight
Find new insights as you chop and tweak your data as never before
Build a data narrative through interactive reports with drill-through and cross-page slicing
Mash up data from multiple sources into a cleansed and coherent data model
Build interdependent charts, maps, and tables to deliver visually stunning information
Create dashboards that help in monitoring key performance indicators of your business
Adapt delivery to mobile devices such as phones and tablets

Who This Book Is For:
Power users who are ready to step up to the big leagues by going beyond what Microsoft Excel by itself can offer. The book also is for line-of-business managers who are starved for actionable data needed to make decisions about their business. And the book is for BI analysts looking for an easy-to-use tool to analyze data and share results with the C-suite colleagues they support.

Machine Learning with SAS Viya

Master machine learning with SAS Viya! Machine learning can feel intimidating for new practitioners. Machine Learning with SAS Viya provides everything you need to know to get started with machine learning in SAS Viya, including decision trees, neural networks, and support vector machines. The analytics life cycle is covered from data preparation and discovery to deployment. Working with open-source code? Machine Learning with SAS Viya has you covered – step-by-step instructions are given on how to use SAS Model Manager tools with open source. SAS Model Studio features are highlighted to show how to carry out machine learning in SAS Viya. Demonstrations, practice tasks, and quizzes are included to help sharpen your skills.

In this book, you will learn about:
Supervised and unsupervised machine learning
Data preparation and dealing with missing and unstructured data
Model building and selection
Improving and optimizing models
Model deployment and monitoring performance

Summary Modern applications frequently require access to real-time data, but building and maintaining the systems that make that possible is a complex and time consuming endeavor. Eventador is a managed platform designed to let you focus on using the data that you collect, without worrying about how to make it reliable. In this episode Eventador Founder and CEO Kenny Gorman describes how the platform is architected, the challenges inherent to managing reliable streams of data, the simplicity offered by a SQL interface, and the interesting projects that his customers have built on top of it. This was an interesting inside look at building a business on top of open source stream processing frameworks and how to reduce the burden on end users.
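As a generic illustration of the model (Eventador's exact dialect may differ, and the stream and column names here are hypothetical), a streaming SQL job is a continuous query over an unbounded stream, for example a Flink-style tumbling-window aggregate:

    -- Runs indefinitely, emitting one row per sensor per one-minute window
    -- as events arrive on the sensor_readings stream.
    select
        sensor_id,
        tumble_end(event_time, interval '1' minute) as window_end,
        avg(reading) as avg_reading
    from sensor_readings
    group by
        sensor_id,
        tumble(event_time, interval '1' minute);

The appeal of the SQL interface is exactly this: the query reads like a batch aggregate, while the engine handles windowing, state, and late-arriving data underneath.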

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, a 40Gbit public network, fast object storage, and a brand new managed Kubernetes platform, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. And for your machine learning workloads, they’ve got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Your host is Tobias Macey and today I’m interviewing Kenny Gorman about the Eventador streaming SQL platform.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by describing what the Eventador platform is and the story behind it?

How has your experience at ObjectRocket influenced your approach to streaming SQL?
How do the capabilities and developer experience of Eventador compare to other streaming SQL engines such as ksqlDB, Pulsar SQL, or Materialize?

What are the main use cases that you are seeing people use for streaming SQL?

How does it fit into an application architecture?
What are some of the design changes in the different layers that are necessary to take advantage of the real time capabilities?

Can you describe how the Eventador platform is architected?

How has the system design evolved since you first began working on it?
How has the overall landscape of streaming systems changed since you first began working on Eventador?
If you were to start over today what would you do differently?

What are some of the most interesting and challenging operational aspects of running your platform?
What are some of the ways that you have modified or augmented the SQL dialect that you support?

What is the tipping point for when SQL is insufficient for a given task and a user might want to leverage Flink?

What is the workflow for developing and deploying different SQL jobs?

How do you handle versioning of the queries and integration with the software development lifecycle?

What are some data modeling considerations that users should be aware of?

What are some of the sharp edges or design pitfalls that users should be aware of?

What are some of the most interesting, innovative, or unexpected ways that you have seen your customers use your platform?
What are some of the most interesting, unexpected, or challenging lessons that you have learned in the process of building and scaling Eventador?
What do you have planned for the future of the platform?

Contact Info

LinkedIn Blog @kennygorman on Twitter kgorman on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.

Cassandra: The Definitive Guide, 3rd Edition

Imagine what you could do if scalability wasn't a problem. With this hands-on guide, you’ll learn how the Cassandra database management system handles hundreds of terabytes of data while remaining highly available across multiple data centers. This third edition—updated for Cassandra 4.0—provides the technical details and practical examples you need to put this database to work in a production environment. Authors Jeff Carpenter and Eben Hewitt demonstrate the advantages of Cassandra’s nonrelational design, with special attention to data modeling. If you’re a developer, DBA, or application architect looking to solve a database scaling issue or future-proof your application, this guide helps you harness Cassandra’s speed and flexibility.

Understand Cassandra’s distributed and decentralized structure
Use the Cassandra Query Language (CQL) and cqlsh—the CQL shell
Create a working data model and compare it with an equivalent relational model
Develop sample applications using client drivers for languages including Java, Python, and Node.js
Explore cluster topology and learn how nodes exchange data
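The heart of Cassandra data modeling is designing tables around queries rather than entities: the partition key determines where data lives, so each target query should read from a single partition. A minimal hypothetical sketch in CQL (table and column names are invented for illustration):

    -- Designed for the query "latest N readings for one sensor":
    -- all rows for a sensor live in one partition, ordered newest first.
    CREATE TABLE readings_by_sensor (
        sensor_id  uuid,
        reading_ts timestamp,
        value      double,
        PRIMARY KEY ((sensor_id), reading_ts)
    ) WITH CLUSTERING ORDER BY (reading_ts DESC);

    SELECT reading_ts, value
    FROM readings_by_sensor
    WHERE sensor_id = 123e4567-e89b-12d3-a456-426614174000
    LIMIT 10;

In a relational model the same data would likely be one normalized readings table queried with arbitrary filters; in Cassandra you denormalize into one table per query pattern.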

Beginning Microsoft Power BI: A Practical Guide to Self-Service Data Analytics

Analyze company data quickly and easily using Microsoft’s powerful data tools. Learn to build scalable and robust data models, clean and combine different data sources effectively, and create compelling and professional visuals. Beginning Power BI is a hands-on, activity-based guide that takes you through the process of analyzing your data using the tools that encompass the core of Microsoft’s self-service BI offering. Starting with Power Query, you will learn how to get data from a variety of sources, and see just how easy it is to clean and shape the data prior to importing it into a data model. Using Power BI tabular and the Data Analysis Expressions (DAX), you will learn to create robust, scalable data models which will serve as the foundation of your data analysis. From there you will enter the world of compelling interactive visualizations to analyze and gain insight into your data. You will wrap up your Power BI journey by learning how to package and share your reports and dashboards with your colleagues. Author Dan Clark takes you through each topic using step-by-step activities and plenty of screen shots to help familiarize you with the tools. This third edition covers the new and evolving features in the Power BI platform and new chapters on data flows and composite models. This book is your hands-on guide to quick, reliable, and valuable data insight.

What You Will Learn:
Simplify data discovery, association, and cleansing
Build solid analytical data models
Create robust interactive data presentations
Combine analytical and geographic data in map-based visualizations
Publish and share dashboards and reports

Who This Book Is For:
Business analysts, database administrators, developers, and other professionals looking to better understand and communicate with data

Summary Designing the structure for your data warehouse is a complex and challenging process. As businesses deal with a growing number of sources and types of information that they need to integrate, they need a data modeling strategy that provides them with flexibility and speed. Data Vault is an approach that allows for evolving a data model in place without requiring destructive transformations and massive up front design to answer valuable questions. In this episode Kent Graziano shares his journey with data vault, explains how it allows for an agile approach to data warehousing, and explains the core principles of how to use it. If you’re struggling with unwieldy dimensional models, slow moving projects, or challenges integrating new data sources then listen in on this conversation and then give data vault a try for yourself.
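In outline, a data vault model decomposes into three table types: hubs (business keys), links (relationships between hubs), and satellites (descriptive attributes tracked over time). A minimal hypothetical DDL sketch, in generic SQL rather than any particular warehouse dialect; real implementations add hash diffs, end-dating conventions, and more:

    CREATE TABLE hub_customer (              -- hub: one row per business key
        customer_hk   char(32) PRIMARY KEY,  -- hash key of the business key
        customer_id   varchar NOT NULL,      -- the business key itself
        load_ts       timestamp NOT NULL,
        record_source varchar NOT NULL
    );

    CREATE TABLE link_customer_order (       -- link: relates two or more hubs
        link_hk       char(32) PRIMARY KEY,
        customer_hk   char(32) REFERENCES hub_customer (customer_hk),
        order_hk      char(32),
        load_ts       timestamp NOT NULL,
        record_source varchar NOT NULL
    );

    CREATE TABLE sat_customer_details (      -- satellite: attributes over time
        customer_hk   char(32) REFERENCES hub_customer (customer_hk),
        load_ts       timestamp NOT NULL,
        customer_name varchar,
        email         varchar,
        PRIMARY KEY (customer_hk, load_ts)
    );

Adding a new source then means loading into existing hubs and adding new links or satellites alongside them, rather than restructuring tables already in place, which is where the agility claim comes from.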

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, a 40Gbit public network, fast object storage, and a brand new managed Kubernetes platform, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. And for your machine learning workloads, they’ve got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Setting up and managing a data warehouse for your business analytics is a huge task. Integrating real-time data makes it even more challenging, but the insights you obtain can make or break your business growth. You deserve a data warehouse engine that outperforms the demands of your customers and simplifies your operations at a fraction of the time and cost that you might expect. You deserve Clickhouse, the open source analytical database that deploys and scales wherever and whenever you want it to and turns data into actionable insights. And Altinity, the leading software and service provider for Clickhouse, is on a mission to help data engineers and DevOps managers tame their operational analytics. Go to dataengineeringpodcast.com/altinity for a free consultation to find out how they can help you today.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
Your host is Tobias Macey and today I’m interviewing Kent Graziano about data vault modeling and the role that it plays in the current data landscape.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what data vault modeling is and how it differs from other approaches such as third normal form or the star/snowflake schema?

What is the history of this approach and what limitations of alternate styles of modeling is it attempting to overcome?
How did you first encounter this approach to data modeling and what is your motivation for dedicating so much time and energy to promoting it?

What are some of the primary challenges associated with data modeling that contribute to the long lead times for data requests or o…

Hands-On Business Intelligence with DAX

Hands-On Business Intelligence with DAX is your comprehensive guide to mastering DAX (Data Analysis Expressions) within the context of Power BI, Excel Power Pivot, and more. You will learn how to craft powerful DAX queries, build efficient data models, and gain actionable insights from your data.

What this book will help me do:
Gain a comprehensive understanding of the DAX query language and its applications.
Learn the skills to create and optimize data models using real-world examples.
Master the techniques needed to write advanced DAX expressions with confidence.
Understand key DAX functions for complex data aggregation and calculations.
Develop expertise in using DAX to derive deeper insights from diverse datasets.

Author:
Horne is an experienced data professional passionate about teaching DAX and data modeling concepts in easy and approachable ways. With years of experience in business intelligence and a knack for breaking down complex topics, the author equips you with knowledge you can apply immediately.

Who is it for?
This book is ideal for business intelligence analysts, data professionals, and Power BI users who wish to enhance their DAX expertise. It is well-suited for readers with a basic understanding of BI concepts looking to expand their skills. Pragmatic learners looking to apply DAX to solve data problems will find this book highly practical. If you are keen to develop more efficient workflows and deeper data insights, this book fits your needs.

Summary Building clean datasets with reliable and reproducible ingestion pipelines is completely useless if it’s not possible to find them and understand their provenance. The solution to discoverability and tracking of data lineage is to incorporate a metadata repository into your data platform. The metadata repository serves as a data catalog and a means of reporting on the health and status of your datasets when it is properly integrated into the rest of your tools. At WeWork they needed a system that would provide visibility into their Airflow pipelines and the outputs produced. In this episode Julien Le Dem and Willy Lulciuc explain how they built Marquez to serve that need, how it is architected, and how it compares to other options that you might be considering. Even if you already have a metadata repository this is worth a listen to learn more about the value that visibility of your data can bring to your organization.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
You work hard to make sure that your data is clean, reliable, and reproducible throughout the ingestion pipeline, but what happens when it gets to the data warehouse? Dataform picks up where your ETL jobs leave off, turning raw data into reliable analytics. Their web based transformation tool with built in collaboration features lets your analysts own the full lifecycle of data in your warehouse. Featuring built in version control integration, real-time error checking for their SQL code, data quality tests, scheduling, and a data catalog with annotation capabilities, it’s everything you need to keep your data warehouse in order. Sign up for a free trial today at dataengineeringpodcast.com/dataform and email [email protected] with the subject "Data Engineering Podcast" to get a hands-on demo from one of their data experts.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference, the Strata Data conference, and PyCon US. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
Your host is Tobias Macey and today I’m interviewing Willy Lulciuc and Julien Le Dem about Marquez, an open source platform to collect, aggregate, and visualize a data ecosystem’s metadata.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by describing what Marquez is?

What was missing in existing metadata management platforms that necessitated the creation of Marquez?

How do the capabilities of Marquez compare with tools and services that bill themselves as data catalogs?

How does it compare to the Amundsen platform that Lyft recently released?

What are some of the tools or platforms that are currently integrated with Marquez and what additional integrations would you like to see?
What are some of the capabilities that are unique to Marquez and how are you using them at WeWork?
What are the primary resource types that you support in Marquez?

What are some of the lowest common denominator attributes that are necessary and useful to track in a metadata repository?

Can you explain how Marquez is architected and how the design has evolved since you first began working on it?

Many metadata management systems are simply a service layer on top of a separate data storage engine. What are the benefits of using PostgreSQL as the system of record for Marquez?

What are some of the complexities that arise from relying on a relational engine as opposed to a document store or graph database?

How is the metadata itself stored and managed in Marquez?

How much up-front data modeling is necessary and what types of schema representations are supported?

Can you talk through the overall workflow of someone using Marquez in their environment?

What is involved in registering and updating datasets?
How do you define and track the health of a given dataset?
What are some of the interesting questions that can be answered from the information stored in Marquez?

What were your assumptions going into this project and how have they been challenged or updated as you began using it for production use cases?
For someone who is interested in using Marquez what is involved in deploying and maintaining an installation of it?
What have you found to be the most challenging or unanticipated aspects of building and maintaining a metadata repository and data discovery platform?
When is Marquez the wrong choice for a metadata repository?
What do you have planned for the future of Marquez?

Contact Info

Julien Le Dem

@J_ on Twitter Email julienledem on GitHub

Willy

LinkedIn @wslulciuc on Twitter wslulciuc on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Links

Marquez

DataEngConf Presentation

WeWork Canary Yahoo Dremio Hadoop Pig Parquet

Podcast Episode

Airflow Apache Atlas Amundsen

Podcast Episode

Uber DataBook LinkedIn DataHub Iceberg Table Format

Podcast Episode

Delta Lake

Podcast Episode

Great Expectations data pipeline unit testing framework

Podcast.__init__ Episode

Redshift SnowflakeDB

Podcast Episode

Apache Kafka Schema Registry

Podcast Episode

Open Tracing Jaeger Zipkin DropWizard Java framework Marquez UI Cayley Graph Database Kubernetes Marquez Helm Chart Marquez Docker Container Dagster

Podcast Episode

Luigi DBT

Podcast Episode

Thrift Protocol Buffers

The intro and outro music is from The Hug by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug)

MongoDB Recipes: With Data Modeling and Query Building Strategies

Get the most out of MongoDB using a problem-solution approach. This book starts with recipes on the MongoDB query language, including how to query various data structures stored within documents. These self-contained code examples allow you to solve your MongoDB problems without fuss. MongoDB Recipes describes how to use advanced querying in MongoDB, such as indexing and the aggregation framework. It demonstrates how to use the Compass function, a GUI client interacting with MongoDB, and how to apply data modeling to your MongoDB application. You’ll see recipes on the latest features of MongoDB 4, allowing you to manage data in an efficient manner using MongoDB.

What You Will Learn:
Work with the MongoDB document model
Design MongoDB schemas
Use the MongoDB query language
Harness the aggregation framework
Create replica sets and sharding in MongoDB

Who This Book Is For:
Developers and professionals who work with MongoDB.

Summary Data warehouses have gone through many transformations, from standard relational databases on powerful hardware, to column oriented storage engines, to the current generation of cloud-native analytical engines. SnowflakeDB has been leading the charge to take advantage of cloud services that simplify the separation of compute and storage. In this episode Kent Graziano, chief technical evangelist for SnowflakeDB, explains how it is differentiated from other managed platforms and traditional data warehouse engines, the features that allow you to scale your usage dynamically, and how it allows for a shift in your workflow from ETL to ELT. If you are evaluating your options for building or migrating a data platform, then this is definitely worth a listen.
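A minimal sketch of that ELT shift (the stage, table, and field names below are hypothetical): raw JSON is landed as-is in a VARIANT column, and all transformation happens afterwards in SQL inside the warehouse.

    -- Extract + Load: land the raw JSON untransformed.
    CREATE TABLE raw_events (v variant);

    COPY INTO raw_events
        FROM @events_stage              -- hypothetical external stage
        FILE_FORMAT = (TYPE = 'JSON');

    -- Transform: shred the JSON with SQL after loading.
    CREATE TABLE events AS
    SELECT
        v:user_id::string    AS user_id,
        v:event_type::string AS event_type,
        v:ts::timestamp      AS event_ts
    FROM raw_events;

Because compute scales independently of storage, the transform step can run on its own warehouse without competing with loads or BI queries.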

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media and the Python Software Foundation. Upcoming events include the Software Architecture Conference in NYC and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
Your host is Tobias Macey and today I’m interviewing Kent Graziano about SnowflakeDB, the cloud-native data warehouse.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what SnowflakeDB is for anyone who isn’t familiar with it?

How does it compare to the other available platforms for data warehousing?
How does it differ from traditional data warehouses?

How does the performance and flexibility affect the data modeling requirements?

Snowflake is one of the data stores that is enabling the shift from an ETL to an ELT workflow. What are the features that allow for that approach and what are some of the challenges that it introduces?
Can you describe how the platform is architected and some of the ways that it has evolved as it has grown in popularity?

What are some of the current limitations that you are struggling with?

For someone getting started with Snowflake what is involved with loading data into the platform?

What is their workflow for allocating and scaling compute capacity and running analyses?

One of the interesting features enabled by your architecture is data sharing. What are some of the most interesting or unexpected uses of that capability that you have seen?
What are some other features or use cases for Snowflake that are not as well known or publicized which you think users should know about?
When is SnowflakeDB the wrong choice?
What are some of the plans for the future of SnowflakeDB?

Contact Info

LinkedIn Website @KentGraziano on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

SnowflakeDB

Free Trial Stack Overflow

Data Warehouse Oracle DB MPP == Massively Parallel Processing Shared Nothing Architecture Multi-Cluster Shared Data Architecture Google BigQuery AWS Redshift AWS Redshift Spectrum Presto

Podcast Episode

SnowflakeDB Semi-Structured Data Types Hive ACID == Atomicity, Consistency, Isolation, Durability 3rd Normal Form Data Vault Modeling Dimensional Modeling JSON AVRO Parquet SnowflakeDB Virtual Warehouses CRM == Customer Relationship Management Master Data Management

Podcast Episode

FoundationDB

Podcast Episode

Apache Spark

Podcast Episode

SSIS == SQL Server Integration Services Talend Informatica Fivetran

Podcast Episode

Matillion Apache Kafka Snowpipe Snowflake Data Exchange OLTP == Online Transaction Processing GeoJSON Snowflake Documentation SnowAlert Splunk Data Catalog

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Learn Power BI

Master the art of business intelligence with Learn Power BI. This beginner-friendly guide introduces you to building interactive business reports and dashboards using Microsoft Power BI. By covering data modeling, visualization, and analysis techniques, you will be equipped with the tools to derive actionable insights and decision-making support from your organization's data.

What this book will help me do:
Create interactive dashboards using Power BI features to enhance business reporting.
Import and transform raw data using Power BI's Query Editor for cleaner analysis.
Perform dynamic and complex calculations with DAX to enrich your data insights.
Develop storytelling reports that convey impactful business insights effectively.
Publish and securely share Power BI reports, enabling collaboration across teams.

Author:
Greg Deckler is an expert in business intelligence and data analysis, with years of experience leveraging Power BI to solve real-world challenges. As a seasoned author, Greg combines technical depth with an approachable teaching style, making intricate concepts accessible. Through his writing, he focuses on empowering readers to unlock the full potential of Power BI for solving business challenges.

Who is it for?
This book is ideal for IT managers, data analysts, or professionals looking to adopt Power BI for their business intelligence needs. It's accessible to beginners with no prior Power BI experience. Additionally, if you're transitioning from other BI tools and aim to learn how to build interactive dashboards and reports in Power BI, this is the guide for you. Advance your skills and meet your business goals with confidence.

The Definitive Guide to DAX: Business intelligence for Microsoft Power BI, SQL Server Analysis Services, and Excel, 2nd Edition

Now expanded and updated with modern best practices, this is the most complete guide to Microsoft's DAX language for business intelligence, data modeling, and analytics. Expert Microsoft BI consultants Marco Russo and Alberto Ferrari help you master everything from table functions through advanced code and model optimization. You'll learn exactly what happens under the hood when you run a DAX expression, and use this knowledge to write fast, robust code. This edition focuses on examples you can build and run with the free Power BI Desktop, and helps you make the most of the powerful syntax of variables (VAR) in Power BI, Excel, or Analysis Services. Want to leverage all of DAX's remarkable capabilities? This no-compromise "deep dive" is exactly what you need.

Related Content: Introduction to Microsoft Power BI (Video), Data Analysis Fundamentals with Excel (Video)

Perform powerful data analysis with DAX for Power BI, SQL Server, and Excel:
Master core DAX concepts, including calculated columns, measures, and calculation groups
Work efficiently with basic and advanced table functions
Understand evaluation contexts and the CALCULATE and CALCULATETABLE functions
Perform time-based calculations
Use calculation groups and calculation items
Use syntax of variables (VAR) to write more readable, maintainable code
Express diverse and unusual relationships with DAX, including many-to-many relationships and bidirectional filters
Master advanced optimization techniques, and improve performance in aggregations
Optimize data models to achieve better compression
Measure DAX query performance with DAX Studio and learn how to optimize your DAX