talk-data.com

Topic

Data Modelling

data_governance data_quality metadata_management

355 tagged

Activity Trend

Peak of 18 activities per quarter, 2020-Q1 through 2026-Q1

Activities

355 activities · Newest first

We talked about:

Rahul’s background
What do data engineering managers do and why do we need them?
Balancing engineering and management
Rahul’s transition into data engineering management
The importance of updating your skill set
Planning the transition to manager and other challenges
Setting expectations for the team and measuring success
Data reconciliation
GDPR compliance
Data modeling for Big Data
Advice for people transitioning into data engineering management
Staying on top of trends and enabling team members
The qualities of a good data engineering team
The qualities of a good data engineer candidate (interview advice)
The difference between having knowledge and stuffing a CV with buzzwords
Advice for students and fresh graduates
An overview of an end-to-end data engineering process

Links:

Rahul's LinkedIn: https://www.linkedin.com/in/16rahuljain/

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Summary Building a data platform is a complex journey that requires a significant amount of planning to do well. It requires knowledge of the available technologies, the requirements of the operating environment, and the expectations of the stakeholders. In this episode Tobias Macey, the host of the show, reflects on his plans for building a data platform and what he has learned from running the podcast that is influencing his choices.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

TimescaleDB, from your friends at Timescale, is the leading open-source relational database with support for time-series data. Time-series data is time-stamped so you can measure how a system is changing. Time-series data is relentless and requires a database like TimescaleDB with speed and petabyte scale. Understand the past, monitor the present, and predict the future. That’s Timescale. Visit them today at dataengineeringpodcast.com/timescale

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.

I’m your host, Tobias Macey, and today I’m sharing the approach that I’m taking while designing a data platform.

Interview

Introduction
How did you get involved in the area of data management?
What are the components that need to be considered when designing a solution?

Data integration (extract and load)

What are your data sources?
Batch or streaming (acceptable latencies)

Data storage (lake or warehouse)

How is the data going to be used?
What other tools/systems will need to integrate with it?
The warehouse (BigQuery, Snowflake, Redshift) has become the focal point of the "modern data stack"

Data orchestration

Who will be managing the workflow logic?

Metadata repository

Types of metadata (catalog, lineage, access, queries, etc.)

Semantic layer/reporting
Data applications

Implementation phases

Build a single end-to-end workflow of a data application using a single category of data across sources
Validate the ability for an analyst/data scientist to self-serve a notebook-powered analysis
Iterate
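To make the first phase concrete, here is a minimal sketch of a single end-to-end workflow under stated assumptions: one batch source, one landing table, one queryable store. The source URL, table name, and the use of SQLite as a stand-in warehouse are all illustrative, not details from the episode.

```python
# A minimal end-to-end workflow: extract one category of data from one
# source, land it somewhere queryable, and let an analyst self-serve.
# The URL, table, and SQLite store are illustrative stand-ins.
import json
import sqlite3
import urllib.request

def extract(url: str) -> list[dict]:
    """Pull one batch of records from a single source."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def load(records: list[dict], db_path: str = "platform.db") -> None:
    """Land raw records in a warehouse-like store."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS raw_events (payload TEXT)")
    conn.executemany(
        "INSERT INTO raw_events (payload) VALUES (?)",
        [(json.dumps(r),) for r in records],
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load(extract("https://example.com/api/events"))  # hypothetical source
```

The point of this phase is that an analyst can immediately point a notebook at the landed table and self-serve before any further infrastructure is built.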

Risks/unknowns

Data modeling requirements
Specific implementation details as integrations across…

Summary Along with the globalization of our societies comes the need to analyze the geospatial and geotemporal data required to manage growth in commerce, communications, and other activities. To make geospatial analytics more maintainable and scalable, a growing number of database engines provide extensions to their SQL syntax that support manipulation of spatial data. In this episode Matthew Forrest shares his experience working in the domain of geospatial analytics and applying SQL dialects to his analyses.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

StreamSets DataOps Platform is the world’s first single platform for building smart data pipelines across hybrid and multi-cloud architectures. Build, run, monitor, and manage data pipelines confidently with an end-to-end data integration platform that’s built for constant change. Amp up your productivity with an easy-to-navigate interface and hundreds of pre-built connectors, and get pipelines and new hires up and running quickly with powerful, reusable components that work across batch and streaming. Once you’re up and running, your smart data pipelines are resilient to data drift: those ongoing and unexpected changes in schema, semantics, and infrastructure. Finally, you get one single pane of glass for operating and monitoring all your data pipelines, with the full transparency and control you desire for your data operations. Get started building pipelines in minutes for free at dataengineeringpodcast.com/streamsets. The first 10 listeners of the podcast that subscribe to StreamSets’ Professional Tier receive 2 months free after their first month.

Your host is Tobias Macey and today I’m interviewing Matthew Forrest about doing spatial analysis in SQL.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what spatial SQL is and some of the use cases that it is relevant for?
Compatibility with/comparison to syntax from PostGIS
What is involved in implementing spatial logic in database engines?
Mapping geospatial concepts into declarative syntax
Foundational data types
Data modeling
Workflow for analyzing spatial data sets outside of database engines
Translating from e.g. geopandas to SQL
Level of support in database engines for spatial data types
What are the most interesting, innovative, or unexpected ways that you have seen spatial SQL used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working with spatial SQL?
When is SQL the wrong choice for spatial analysis?
What do you have planned for the future of…
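As a concrete illustration of the geopandas-to-SQL translation item in the outline above, here is a hedged sketch: the same "points within a distance" analysis expressed first in geopandas and then as PostGIS-flavored spatial SQL. The dataset, table, and column names are invented for the example.

```python
# The same proximity analysis in geopandas and as PostGIS-style SQL.
# Dataset, table, and column names are invented for this example.
import geopandas as gpd
from shapely.geometry import Point

# In-process with geopandas: stores within 1 km of a point.
stores = gpd.read_file("stores.geojson").to_crs(epsg=3857)  # metric CRS
center = Point(-8238299, 4970072)  # a projected coordinate (illustrative)
nearby = stores[stores.geometry.distance(center) <= 1000]

# The equivalent logic pushed down into the database as spatial SQL:
SPATIAL_SQL = """
SELECT name
FROM stores
WHERE ST_DWithin(
    geom,
    ST_Transform(ST_SetSRID(ST_MakePoint(-73.99, 40.73), 4326), 3857),
    1000  -- meters in the projected CRS
);
"""
```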

All too often, both individuals and organizations hoard the most prized asset in the digital ecosystem: data. This has led to different types of data — such as those tied to ad revenue, subscriptions, content engagement and customer profiles — being kept in silos, to be managed via disparate point solutions. Operating in this way means businesses never get a comprehensive view of their customers, leading to missed opportunities to drive personalized experiences, increase revenues, boost retention and remain privacy-compliant.

Cassandra: The Definitive Guide, Revised Third Edition

Imagine what you could do if scalability wasn't a problem. With this hands-on guide, you'll learn how the Cassandra database management system handles hundreds of terabytes of data while remaining highly available across multiple data centers. This revised third edition, updated for Cassandra 4.0 and new developments in the Cassandra ecosystem, including deployments in Kubernetes with K8ssandra, provides technical details and practical examples to help you put this database to work in a production environment. Authors Jeff Carpenter and Eben Hewitt demonstrate the advantages of Cassandra's nonrelational design, with special attention to data modeling. Developers, DBAs, and application architects looking to solve a database scaling issue or future-proof an application will learn how to harness Cassandra's speed and flexibility.

Understand Cassandra's distributed and decentralized structure
Use the Cassandra Query Language (CQL) and cqlsh (the CQL shell)
Create a working data model and compare it with an equivalent relational model
Design and develop applications using client drivers
Explore cluster topology and learn how nodes exchange data
Maintain a high level of performance in your cluster
Deploy Cassandra onsite, in the cloud, or with Docker and Kubernetes
Integrate Cassandra with Spark, Kafka, Elasticsearch, Solr, and Lucene
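A minimal sketch of the query-first, denormalized modeling the book emphasizes, using the DataStax Python driver (cassandra-driver). The keyspace, table, and single local node are assumptions for illustration, not examples taken from the book.

```python
# Query-first Cassandra modeling: the table is shaped around one read
# pattern ("orders for a user, newest first"), not around normalization.
# Assumes a local node; keyspace and table are invented for illustration.
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS shop
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Denormalized table: the partition key matches the query, so the read is
# a single partition scan instead of the joins a relational model needs.
session.execute("""
    CREATE TABLE IF NOT EXISTS shop.orders_by_user (
        user_id  uuid,
        order_ts timestamp,
        order_id uuid,
        total    decimal,
        PRIMARY KEY ((user_id), order_ts)
    ) WITH CLUSTERING ORDER BY (order_ts DESC)
""")
cluster.shutdown()
```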

Extreme DAX

Delve into advanced Data Analysis Expressions (DAX) concepts and Power BI capabilities with Extreme DAX, designed to elevate your skills in Microsoft's Business Intelligence tools. This book guides you through solving intricate business problems, improving your reporting, and leveraging data modeling principles to their fullest potential.

What this book will help me do:
Master advanced DAX functions and leverage their full potential in data analysis.
Develop a solid understanding of context and filtering within Power BI models.
Employ strategies for dynamic visualizations and secure data access via row-level security.
Apply financial DAX functions for precise investment evaluations and forecasts.
Utilize alternative calendars and advanced time-intelligence for comprehensive temporal analyses.

Author(s): Michiel Rozema and Henk Vlootman bring decades of deep experience in data analytics and business intelligence to your learning journey. Both authors are seasoned practitioners in using DAX and Microsoft BI tools, with numerous practical deployments of their expertise in business solutions. Their approachable writing reflects their teaching style, ensuring you can easily grasp even challenging concepts. This book combines their comprehensive technical knowledge with real-world, hands-on examples, offering an invaluable resource for refining your skills.

Who is it for? This book is perfect for intermediate to advanced analysts who have a foundational knowledge of DAX and Power BI and wish to deepen their expertise. If you are striving to improve performance and accuracy in your reports or aiming to handle advanced modeling scenarios, this book is for you. Prior experience with DAX, Power BI, or equivalent analytical tools is recommended to maximize the benefit. Whether you're a business analyst, data professional, or enthusiast, this book will elevate your analytical capabilities to new heights.

Innovative Data Integration and Conceptual Space Modeling for COVID, Cancer, and Cardiac Care

In recent years, scientific research and translational medicine have placed increased emphasis on computational methodology and data curation across many disciplines, both to advance underlying science and to instantiate precision-medicine protocols in the lab and in clinical practice. The nexus of concerns related to oncology, cardiology, and virology (SARS-CoV-2) presents a fortuitous context within which to examine the theory and practice of biomedical data curation. Innovative Data Integration and Conceptual Space Modeling for COVID, Cancer, and Cardiac Care argues that a well-rounded approach to data modeling should optimally embrace multiple perspectives inasmuch as data modeling is neither a purely formal nor a purely conceptual discipline, but rather a hybrid of both. On the one hand, data models are designed for use by computer software components, and are, consequently, constrained by the mechanistic demands of software environments; data modeling strategies must accept the formal rigors imposed by unambiguous data-sharing and query-evaluation logic. In particular, data models are not well-suited for software-level deployment if such models do not translate seamlessly to clear strategies for querying data and ensuring data integrity as information is moved across multiple points. On the other hand, data modeling is likewise constrained by human conceptual tendencies, because the information which is managed by databases and data networks is ultimately intended to be visualized/utilized by humans as the end user. Thus, at the intersection of both formal and humanistic methodology, data modeling takes on elements of both logico-mathematical frameworks (e.g., type systems and graph theory) and conceptual/philosophical paradigms (e.g., linguistics and cognitive science). The authors embrace this two-sided aspect of data models by seeking non-reductionistic points of convergence between formal and humanistic/conceptual viewpoints, and by leveraging biomedical contexts (viz., COVID, Cancer, and Cardiac Care) so as to provide motivating examples and case studies in this volume.

Provides an analysis of how conceptual spaces and related cognitive linguistic approaches can inspire programming and query-processing models
Outlines the vital role that data modeling/curation has played in significant medical breakthroughs
Presents readers with an overview of how information-management approaches intersect with precision medicine, providing case studies of data modeling in concrete scientific practice
Explores applications of image analysis and computer vision in the context of precision medicine
Examines the role of technology in scientific publishing, replication studies, and dataset curation

Summary The perennial question of data warehousing is how to model the information that you are storing. This has given rise to methods as varied as star and snowflake schemas, data vault modeling, and wide tables. The challenge with many of those approaches is that they are optimized for answering known questions but brittle and cumbersome when exploring unknowns. In this episode Ahmed Elsamadisi shares his journey to find a more flexible and universal data model in the form of the "activity schema" that is powering the Narrator platform, and how it has allowed his customers to perform self-service exploration of their business domains without being blocked by schema evolution in the data warehouse. This is a fascinating exploration of what can be done when you challenge your assumptions about what is possible.
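For a feel of the idea before the interview, here is a rough sketch of what an activity schema table looks like: every customer interaction becomes one row in a single shared stream. The column names loosely follow Narrator's publicly documented spec, but treat the exact definition as illustrative; SQLite stands in for the warehouse.

```python
# One shared time-series table instead of many bespoke schemas. Column
# names loosely follow the published activity schema spec; treat them
# as illustrative. SQLite stands in for the warehouse.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS activity_stream (
    activity_id    TEXT PRIMARY KEY,
    ts             TEXT NOT NULL,  -- when the activity occurred
    customer       TEXT NOT NULL,  -- stable customer identifier
    activity       TEXT NOT NULL,  -- e.g. 'completed_order', 'opened_email'
    feature_1      TEXT,           -- small set of activity-specific features
    feature_2      TEXT,
    feature_3      TEXT,
    revenue_impact REAL,
    link           TEXT            -- pointer back to the source record
)
"""

conn = sqlite3.connect("warehouse.db")
conn.execute(DDL)
conn.commit()
# Questions become temporal relationships between activities ("first order
# after a marketing email") instead of joins across evolving star schemas.
```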

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold.

Your host is Tobias Macey and today I’m interviewing Ahmed Elsamadisi about Narrator, a platform to enable anyone to go from question to data-driven decision in minutes.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Narrator is and the story behind it?
What are the challenges that you have seen organizations encounter when attempting to make analytics a self-serve capability?
What are the use cases that you are focused on?
How does Narrator fit within the data workflows of an organization?
How is the Narrator platform implemented?

How has the design and focus of the technology evolved since you first started working on Narrator?

The core element of the analyses that you are building is the "activity schema". Can you describe the design process that led you to that format?

What are the challenges that are posed by more widely used modeling techniques such as star/snowflake schemas?

Summary Aerospike is a database engine that is designed to provide millisecond response times for queries across terabytes or petabytes. In this episode Chief Strategy Officer, Lenley Hensarling, explains how the ability to process these large volumes of information in real-time allows businesses to unlock entirely new capabilities. He also discusses the technical implementation that allows for such extreme performance and how the data model contributes to the scalability of the system. If you need to deal with massive data, at high velocities, in milliseconds, then Aerospike is definitely worth learning about.
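As a quick primer on the key/value model discussed below, here is a hedged sketch using the official Aerospike Python client. The namespace, set, bin names, and the assumption of a local single-node server are illustrative, not details from the episode.

```python
# Aerospike's record model: a record is addressed by (namespace, set, key)
# and holds named "bins". Assumes a local server; names are illustrative.
import aerospike  # pip install aerospike

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

key = ("test", "profiles", "user-42")
client.put(key, {"segment": "premium", "ltv": 1830, "last_seen": "2021-10-01"})

# Millisecond point reads by key are what the engine is optimized for.
(_, _, record) = client.get(key)
print(record["segment"])  # -> premium
client.close()
```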

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and damage is done. Datafold’s proactive approach to data quality helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Your host is Tobias Macey and today I’m interviewing Lenley Hensarling about Aerospike and building real-time data platforms.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Aerospike is and the story behind it?

What are the use cases that it is uniquely well suited for?
What are the use cases that you and the Aerospike team are focusing on, and how does that influence your priorities for feature development and user experience?

What are the driving factors for building a real-time data platform?
How is Aerospike being incorporated in application and data architectures?
Can you describe how the Aerospike engine is architected?

How have the design and architecture changed or evolved since it was first created?
How have market forces influenced the product priorities and focus?

What are the challenges that end users face when determining how to model their data given a key/value storage interface?

What are the abstractions…

Microsoft Power BI Cookbook - Second Edition

"Microsoft Power BI Cookbook" is an advanced reference for professionals working with Power BI. Featuring over 90 practical, hands-on recipes, this book allows you to master Power BI for data modeling, creating dashboards, and optimizing queries. You will learn practical tips and techniques, enabling you to create effective and customized Power BI solutions for various business needs. What this Book will help me do Master advanced data cleansing and integration techniques in Power BI's Power Query Editor. Develop intuitive, efficient dashboards and reports using best practices for data visualization. Optimize performance for large datasets using aggregation tables and efficient query techniques. Implement sophisticated analysis and business logic using the power of DAX programming language. Deploy and manage Power BI solutions leveraging integration with Microsoft ecosystem tools. Author(s) Greg Deckler and None Powell are seasoned Power BI experts with extensive backgrounds in business intelligence and data solutions. Greg is a recognized Power BI consultant and author with a focus on delivering impactful BI solutions. None brings their experience in utilizing Power BI for diverse organizational needs. Together, they emphasize hands-on learning and actionable insights in their collaborative writing. Who is it for? This book is aimed at business intelligence professionals who already have a basic understanding of Power BI. Ideal readers are those seeking to deepen their knowledge of advanced features and apply best practices in their projects. Whether you're enhancing your existing Power BI skills or managing complex datasets, this book will provide the techniques and insights to excel in your role.

Data Modeling with SAP BW/4HANA 2.0: Implementing Agile Data Models Using Modern Modeling Concepts

Gain practical guidance for implementing data models on the SAP BW/4HANA platform using modern modeling concepts. You will walk through the various modeling scenarios such as exposing HANA tables and views through BW/4HANA, creating virtual and hybrid data models, and integrating SAP and non-SAP data into a single data model. Data Modeling with SAP BW/4HANA 2.0 gives you the skills you need to use the new SAP BW/4HANA features and objects, covers modern modeling concepts, and equips you with the practical knowledge of how to use the best of the HANA and BW/4HANA worlds.

What You Will Learn:
Discover the new modeling features in SAP BW/4HANA
Combine SAP HANA and SAP BW/4HANA artifacts
Leverage virtualization when designing and building data models
Build hybrid data models combining InfoObject, OpenODS, and a field-based approach
Integrate SAP and non-SAP data into a single model

Who This Book Is For: BI consultants, architects, developers, and analysts working in the SAP BW/4HANA environment.

Data Engineering on Azure

Build a data platform to the industry-leading standards set by Microsoft’s own infrastructure.

In Data Engineering on Azure you will learn how to:
Pick the right Azure services for different data scenarios
Manage data inventory
Implement production-quality data modeling, analytics, and machine learning workloads
Handle data governance
Use DevOps to increase reliability
Ingest, store, and distribute data
Apply best practices for compliance and access control

Data Engineering on Azure reveals the data management patterns and techniques that support Microsoft’s own massive data infrastructure. Author Vlad Riscutia, a data engineer at Microsoft, teaches you to bring an engineering rigor to your data platform and ensure that your data prototypes function just as well under the pressures of production. You'll implement common data modeling patterns, stand up cloud-native data platforms on Azure, and get to grips with DevOps for both analytics and machine learning.

About the Technology: Build secure, stable data platforms that can scale to loads of any size. When a project moves from the lab into production, you need confidence that it can stand up to real-world challenges. This book teaches you to design and implement cloud-based data infrastructure that you can easily monitor, scale, and modify.

About the Book: In Data Engineering on Azure you’ll learn the skills you need to build and maintain big data platforms in massive enterprises. This invaluable guide includes clear, practical guidance for setting up infrastructure, orchestration, workloads, and governance. As you go, you’ll set up efficient machine learning pipelines, and then master time-saving automation and DevOps solutions. The Azure-based examples are easy to reproduce on other cloud platforms.

What's Inside:
Data inventory and data governance
Assure data quality, compliance, and distribution
Build automated pipelines to increase reliability
Ingest, store, and distribute data
Production-quality data modeling, analytics, and machine learning

About the Reader: For data engineers familiar with cloud computing and DevOps.

About the Author: Vlad Riscutia is a software architect at Microsoft.

Quotes:
A definitive and complete guide on data engineering, with clear and easy-to-reproduce examples. - Kelum Prabath Senanayake, Echoworx
An all-in-one Azure book, covering all a solutions architect or engineer needs to think about. - Albert Nogués, Danone
A meaningful journey through the Azure ecosystem. You’ll be building pipelines and joining components quickly! - Todd Cook, Appen
A gateway into the world of Azure for machine learning and DevOps engineers. - Krzysztof Kamyczek, Luxoft

Data Modeling for Azure Data Services

Data Modeling for Azure Data Services is an essential guide that delves into the intricacies of designing, provisioning, and implementing robust data solutions within the Azure ecosystem. Through practical examples and hands-on exercises, this book equips you with the knowledge to create scalable, performant, and adaptable database designs tailored to your business needs.

What this book will help me do:
Understand and apply normalization, dimensional modeling, and data vault modeling for relational databases.
Learn to provision and implement scalable solutions like Azure SQL DB and Azure Synapse SQL Pool.
Master how to design and model a Data Lake using Azure Storage efficiently.
Gain expertise in NoSQL database modeling and implementing solutions using Azure Cosmos DB.
Develop ETL/ELT processes effectively using Azure Data Factory to support data integration workflows.

Author(s): Peter ter Braake brings a wealth of expertise as a data architect and cloud solutions builder specializing in Azure's data services. With hands-on experience in projects requiring sophisticated data modeling and optimization, he crafts detailed learning material to help professionals level up their database design and Azure deployment skills. Dedicated to explaining complex topics with clarity and approachable language, he ensures that learners gain not just knowledge but applied competence.

Who is it for? This book is a valuable resource for business intelligence developers, data architects, and consultants aiming to refine their skills in data modeling within modern cloud ecosystems, particularly Microsoft Azure. Whether you're a beginner with some foundational cloud data management knowledge or an experienced professional seeking to deepen your Azure data services proficiency, this book caters to your learning needs.

At Snowflake, as you can imagine, we run a lot of data pipelines and tables curating metrics for all parts of the business. These are the lifeline of Snowflake’s business decisions. We also have a lot of source systems that display and make these metrics accessible to end users. So what happens when your data model does not match your system? For example, your bookings numbers in Salesforce do not match the data model that curates bookings metrics. At Snowflake we ran into this problem over and over again. To solve it, we set out to build infrastructure that allows users to effortlessly sync the results of their data pipelines with any downstream or upstream system, giving us a central source of truth in our warehouse. This infrastructure was built on Snowflake using Airflow, and it lets a user begin syncing data by supplying just a few details, such as the model and the system to update. In this presentation we will show how, using Airflow and Snowflake, we are able to use our data pipelines as the source of truth for all systems involved in the business. With this infrastructure we are able to use Snowflake models as a central source of truth for all applications used throughout the company, which ensures that any number synced this way is always the same no matter which user sees it.
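A simplified sketch of the pattern the talk describes, under stated assumptions: an hourly Airflow task reads a curated model from Snowflake and pushes the rows downstream. The connection details, model name, and the push_to_salesforce helper are hypothetical stand-ins, not Snowflake's actual code.

```python
# Hourly sync of a curated Snowflake model to a downstream system, so the
# warehouse stays the source of truth. Connection details, the model, and
# push_to_salesforce are hypothetical stand-ins for this sketch.
from datetime import datetime

import snowflake.connector  # pip install snowflake-connector-python
from airflow import DAG
from airflow.operators.python import PythonOperator

def push_to_salesforce(opportunity_id, booked_amount):
    """Hypothetical client for the downstream system."""
    print(f"sync {opportunity_id} -> {booked_amount}")

def sync_bookings_model():
    conn = snowflake.connector.connect(
        account="my_account", user="sync_svc", password="***",
        warehouse="SYNC_WH", database="ANALYTICS", schema="FINANCE",
    )
    rows = conn.cursor().execute(
        "SELECT opportunity_id, booked_amount FROM bookings_model"
    ).fetchall()
    conn.close()
    for opportunity_id, booked_amount in rows:
        push_to_salesforce(opportunity_id, booked_amount)

with DAG(
    dag_id="sync_bookings_to_salesforce",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    PythonOperator(task_id="sync_bookings", python_callable=sync_bookings_model)
```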

Summary The database is the core of any system because it holds the data that drives your entire experience. We spend countless hours designing the data model, updating engine versions, and tuning performance. But how confident are you that you have configured it to be as performant as possible, given the dozens of parameters and how they interact with each other? Andy Pavlo researches autonomous database systems, and out of that research he created OtterTune to find the optimal set of parameters to use for your specific workload. In this episode he explains how the system works, the challenge of scaling it to work across different database engines, and his hopes for the future of database systems.
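To ground what "configuration knobs" and workload telemetry mean in practice, here is a small illustration against PostgreSQL. This is not OtterTune's code; it only shows the kind of knob/metric samples a tuning service would need to collect, and the connection string is illustrative.

```python
# The raw material of automated tuning: current knob values paired with
# metrics showing how the workload responds. Not OtterTune's code; the
# DSN is illustrative.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=app user=postgres")
cur = conn.cursor()

# A few of the hundreds of interacting knobs a tuner has to reason about.
cur.execute("""
    SELECT name, setting FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size')
""")
knobs = dict(cur.fetchall())

# Runtime metrics reflecting workload behavior under those knobs.
cur.execute("SELECT sum(blks_hit), sum(blks_read) FROM pg_stat_database")
hits, reads = ((v or 0) for v in cur.fetchone())
sample = {"knobs": knobs, "cache_hit_ratio": hits / max(hits + reads, 1)}
print(sample)  # a model maps many such samples to better configurations
conn.close()
```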

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.

We’ve all been asked to help with an ad-hoc request for data by the sales and marketing team. Then it becomes a critical report that they need updated every week or every day. Then what do you do? Send a CSV via email? Write some Python scripts to automate it? But what about incremental sync, API quotas, error handling, and all of the other details that eat up your time? Today, there is a better way. With Census, just write SQL or plug in your dbt models and start syncing your cloud warehouse to SaaS applications like Salesforce, Marketo, Hubspot, and many more. Go to dataengineeringpodcast.com/census today to get a free 14-day trial.

Your host is Tobias Macey and today I’m interviewing Andy Pavlo about OtterTune, a system to continuously monitor and improve database performance via machine learning.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what OtterTune is and the story behind it?

How does it relate to your work with NoisePage?

What are the challenges that database administrators, operators, and users run into when working with, configuring, and tuning transactional systems?

What are some of the contributing factors to the sprawling complexity of the configurable parameters for these databases?

Can you describe how OtterTune is implemented?

What are some of the aggregate benefits that OtterTune can gain by running as a centralized service and learning from all of the systems that it connects to?
What are some of the assumptions that you made when starting the commercialization of this technology that have been challenged or invalidated as you began working with initial customers?
How have the design and goals of the system changed or evolved since you first began working on it?

What is involved in adding support for a new database engine?

How applicable are the OtterTune capabilities to analytical…

Expert Data Modeling with Power BI

Expert Data Modeling with Power BI provides a comprehensive guide to creating effective and optimized data models using Microsoft Power BI. This book will teach you everything you need to know, from connecting to data sources to setting up complex models that enable insightful reporting and business analytics.

What this book will help me do:
Gain expertise in implementing virtual tables and time intelligence functionalities in Power BI's DAX language.
Identify and correctly set up Dimension and Fact tables using the Power Query Editor interface.
Master advanced data preparation techniques to build efficient Star Schemas for modeling.
Apply best practices for preparing and modeling data for real-world business cases.
Become proficient in advanced features like aggregations, incremental refresh, and row-level security.

Author(s): Soheil Bakhshi is a seasoned Power BI expert and author with years of experience in business intelligence and analytics. His practical knowledge of data modeling and approachable writing style make complex concepts understandable. Soheil's passion for empowering users to harness the full potential of Power BI is evident through his clear guidance and real-world examples.

Who is it for? This book is perfect for business intelligence developers, data analysts, and advanced users of Power BI who aim to deepen their understanding of data modeling. It assumes a familiarity with Power BI's basic functions and core concepts like Star Schema. If you're looking to refine your modeling practices and create versatile, dynamic solutions, this resource is for you.

Summary The data warehouse has become the focal point of the modern data platform. With increased usage of data across businesses, and a diversity of locations and environments where data needs to be managed, the warehouse engine needs to be fast and easy to manage. Yellowbrick is a data warehouse platform that was built from the ground up for speed, and can work across clouds and all the way to the edge. In this episode CTO Mark Cusack explains how the engine is architected, the benefits that speed and predictable pricing have for the organization, and how you can simplify your platform by putting the warehouse close to the data, instead of the other way around.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Firebolt is the fastest cloud data warehouse. Visit dataengineeringpodcast.com/firebolt to get started. The first 25 visitors will receive a Firebolt t-shirt.

Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Your host is Tobias Macey and today I’m interviewing Mark Cusack about Yellowbrick, a data warehouse designed for distributed clouds.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by describing what Yellowbrick is and some of the story behind it?
What does the term "distributed cloud" signify and what challenges are associated with it?
How would you characterize Yellowbrick’s position in the database/DWH market?
How is Yellowbrick architected?

How have the goals and design of the platform changed or evolved over time?

How does Yellowbrick maintain visibility across the different data locations that it is responsible for?

What capabilities does it offer for being able to join across the disparate "clouds"?

What are some data modeling strategies that users should consider when designing their deployment of Yellowbrick?
What are some of the capabilities of Yellowbrick that you find most useful or technically interesting?
For someone who is adopting Yellowbrick, what is the process for getting it integrated into their data systems?
What are the most underutilized, overlooked, or misunderstood features of Yellowbrick?
What are the most interesting, innovative, or unexpected ways that you have seen Yellowbrick used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on and with Yellowbrick?
When is Yellowbrick the wrong choice?
What do you have planned for the future of the product?

Contact Info

LinkedIn
@markcusack on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Yellowbrick
Teradata
Rainstor
Distributed Cloud
Hybrid Cloud
SwimOS

Podcast Episode

K

Exam Ref DA-100 Analyzing Data with Microsoft Power BI

Prepare for Microsoft Exam DA-100 and help demonstrate your real-world mastery of Power BI data analysis and visualization. Designed for experienced data analytics professionals ready to advance their status, Exam Ref focuses on the critical thinking and decision-making acumen needed for success at the Microsoft Certified Associate level.

Focus on the expertise measured by these objectives:
Prepare the data
Model the data
Visualize the data
Analyze the data
Deploy and maintain deliverables

This Microsoft Exam Ref:
Organizes its coverage by exam objectives
Features strategic, what-if scenarios to challenge you
Assumes you are an experienced business intelligence professional or data analyst, or have a similar role

About the Exam: Exam DA-100, Analyzing Data with Microsoft Power BI, focuses on skills and knowledge needed to acquire, profile, clean, transform, and load data; design and develop data models; create measures with DAX; optimize model performance; create reports and dashboards; enrich reports for usability; enhance reports to expose insights; perform advanced analysis; manage datasets; and create and manage workspaces.

About Microsoft Certification: Passing this exam earns your Microsoft Certified: Data Analyst Associate certification, demonstrating your ability to help businesses maximize the value of data assets by using Microsoft Power BI. As subject matter experts, Data Analysts design and build scalable data models, clean and transform data, and enable advanced analytic capabilities that provide meaningful business value through easy-to-comprehend data visualizations. See full details at: microsoft.com/learn


Abstract Hosted by Al Martin, VP, IBM Expert Services Delivery, Making Data Simple provides the latest thinking on big data, A.I., and the implications for the enterprise from a range of experts.

This week on Making Data Simple, we have Ahmed Elsamadisi. Ahmed started his career at Cornell’s Autonomous Systems Laboratory focusing on human-robot interaction and Bayesian data fusion, as well as building algorithms for autonomous cars. He then joined Raytheon to develop tactical AI algorithms for missile defense, four of which are still in use today by the US military. Eventually he moved on to Raytheon’s Advanced Technology division to focus on building human exoskeletons (like the Iron Man suit, but made of rubber, because that is far more energy efficient and not a fictional concept that ignores proper scientific practices) and algorithms for adaptive decision making. In 2015 Ahmed joined WeWork, and over the next two years he built WeWork’s standard data infrastructure and grew its data team from one to forty data engineers and data analysts. After implementing a single time-series-table data model at WeWork and seeing the immediate results, Ahmed wanted to figure out a way to bring this newfound knowledge to the world. Ahmed founded Narrator to allow startups to leverage this new approach, ask questions, understand customer behavior, and analyze data across all their systems from a simple Universal Data Model.

Show Notes:
3:26 – Tell us about algorithms for autonomous cars
6:26 – Anything you can say around missile defense?
8:58 – Tell us about human exoskeletons
13:18 – What kind of data were you using to make decisions around the Iron Man suit?
16:02 – What did you learn at WeWork?
25:00 – How we answer a question in the world of Narrator
32:24 – Does Narrator sit between the application and the database?
33:28 – Walk us through a use case

Website: Narrator AI.com

Books:
The Power of Bad
Never Split The Difference

Ahmed Elsamadisi - LinkedIn

Connect with the Team:
Producer Kate Brown - LinkedIn
Producer Steve Templeton - LinkedIn
Host Al Martin - LinkedIn and Twitter

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Summary A majority of the time spent in data engineering is copying data between systems to make the information available for different purposes. This introduces challenges such as keeping information synchronized, managing schema evolution, and building transformations to match the expectations of the destination systems. H.O. Maycotte was faced with these same challenges, but at a massive scale, leading him to question whether there is a better way. After tasking some of his top engineers to consider the problem in a new light, they created the Pilosa engine. In this episode H.O. explains how, using Pilosa as the core, he built the Molecula platform to eliminate the need to copy data between systems in order to make it accessible for analytical and machine learning purposes. He also discusses the challenges that he faces in helping potential users and customers understand the shift in thinking that this creates, and how the system is architected to make it possible. This is a fascinating conversation about what the future looks like when you revisit your assumptions about how systems are designed.
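Pilosa, mentioned in this summary, is built around bitmap indexes. The toy sketch below is not Pilosa's implementation; it only illustrates why bitmaps suit feature identification: each feature becomes a bitmap over record IDs, so combining features is bitwise arithmetic rather than a scan or a join.

```python
# Each feature is one bitmap over record IDs; computing "records with
# feature A and feature B" is a bitwise AND, not a scan or a join.
from typing import Iterable

def bitmap(ids: Iterable[int]) -> int:
    """Pack a set of record IDs into a single integer bitmap."""
    bits = 0
    for i in ids:
        bits |= 1 << i
    return bits

has_churn_risk = bitmap([1, 4, 5, 9])   # records flagged by one feature
is_enterprise = bitmap([4, 7, 9, 12])   # records flagged by another

both = has_churn_risk & is_enterprise   # one CPU instruction per machine word
matching = [i for i in range(both.bit_length()) if both >> i & 1]
print(matching)  # -> [4, 9]
```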

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.

RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.

Your host is Tobias Macey and today I’m interviewing H.O. Maycotte about Molecula, a cloud-based feature store based on the open source Pilosa project.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Molecula and the story behind it?

What are the additional capabilities that Molecula offers on top of the open source Pilosa project?

What are the problems/use cases that Molecula solves for?
What are some of the technologies or architectural patterns that Molecula might replace in a company’s data platform?
One of the use cases that is mentioned on the Molecula site is as a feature store for ML and AI. This is a category that has been seeing a lot of growth recently. Can you provide some context for how Molecula fits in that market and how it compares to options such as Tecton, Iguazio, Feast, etc.?

What are the benefits of using a bitmap index for identifying and computing features?

Can you describe how the Molecula platform is architected?

How has the design and goal of Molecula changed or evolved since you first began working on it?

For someone who is using Molecula, can you describe the process of integrating it with their existing data sources?
Can you describe the internal data model of Pilosa/Molecula?

How should users think about data modeling and architecture as they are loading information into the platform?

Once a user has data in Pilosa, what are the available mechanisms for performing analyses or feature engineering?
What are some of the most underutilized or misunderstood capabilities of Molecula?
What are some of the most interesting, unexpected, or innovative ways that you have seen the Molecula platform used?
What are the most interesting, unexpected, or challenging lessons that you have learned from building and scaling Molecula?
When is Molecula the wrong choice?
What do you have planned for the future of the platform and business?

Contact Info

LinkedIn
@maycotte on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Links

Molecula
Pilosa

Podcast Episode

The Social Dilemma
Feature Store
Cassandra
Elasticsearch

Podcast Episode

Druid
MongoDB
SwimOS

Podcast Episode

Kafka
Kafka Schema Registry

Podcast Episode

Homomorphic Encryption
Lucene
Solr

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast