talk-data.com


Speaker

Mark Marinelli

Head of Product, Tamr

2 talks


Talks & appearances


Getting DataOps Right
book
with Mark Marinelli (Tamr), Michael Stonebraker (Paradigm4; Advisor to VoltDB; former Adjunct Professor at MIT), Andy Palmer (Tamr), Nik Bates-Haus, and Liam Cleary

Many large organizations have accumulated dozens of disconnected data sources to serve different lines of business over the years. These applications might be useful to one area of the enterprise, but they're usually inaccessible to other data consumers in the organization. In this short report, five data industry thought leaders explore DataOps, the automated, process-oriented methodology for making clean, reliable data available to teams throughout your company. Andy Palmer, Michael Stonebraker, Nik Bates-Haus, Liam Cleary, and Mark Marinelli from Tamr use real-world examples to explain how DataOps works.

DataOps is as much about changing people's relationship to data as it is about technology, infrastructure, and process. This report provides an organizational approach to implementing this discipline in your company, including various behavioral, process, and technology changes. Through individual essays, you'll learn how to:

- Move toward scalable data unification (Michael Stonebraker)
- Understand DataOps as a discipline (Nik Bates-Haus)
- Explore the key principles of a DataOps ecosystem (Andy Palmer)
- Learn the key components of a DataOps ecosystem (Andy Palmer)
- Build a DataOps toolkit (Liam Cleary)
- Build a team and prepare for future trends (Mark Marinelli)

Summary

With the proliferation of data sources that give a more comprehensive view of the information critical to your business, it is more important than ever to have a canonical view of the entities you care about. Is customer number 342 in your ERP the same as Bob Smith on Twitter? Using master data management to build a data catalog helps you answer these questions reliably and simplify the process of building your business intelligence reports. In this episode the head of product at Tamr, Mark Marinelli, discusses the challenges of building a master data set, why you should have one, and some of the techniques that modern platforms and systems provide for maintaining it.
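The Bob Smith question above is the crux of entity resolution: deciding whether records from different systems refer to the same real-world entity. A minimal sketch in Python of one common approach, normalize then fuzzy-match on names, is shown below; the nickname table, field names, and threshold are illustrative assumptions, not Tamr's actual method, which the episode discusses in far more depth.

```python
from difflib import SequenceMatcher

# Toy nickname table; a real MDM platform would use much richer reference data.
NICKNAMES = {"bob": "robert", "bill": "william", "liz": "elizabeth"}

def normalize(name: str) -> str:
    """Lowercase a name and expand known nicknames to canonical forms."""
    parts = name.lower().split()
    return " ".join(NICKNAMES.get(p, p) for p in parts)

def similarity(a: str, b: str) -> float:
    """Edit-based similarity between two normalized names, in [0, 1]."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def same_entity(a: str, b: str, threshold: float = 0.9) -> bool:
    """Decide whether two name strings likely refer to the same person."""
    return similarity(a, b) >= threshold
```

With this sketch, `same_entity("Bob Smith", "Robert Smith")` returns True because both normalize to "robert smith"; in practice you would also compare addresses, emails, and other attributes before merging records.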

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

- When you're ready to build your next pipeline you'll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you've got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- You work hard to make sure that your data is reliable and accurate, but can you say the same about the deployment of your machine learning models? The Skafos platform from Metis Machine was built to give your data scientists the end-to-end support that they need throughout the machine learning lifecycle. Skafos maximizes interoperability with your existing tools and platforms, and offers real-time insights and the ability to be up and running with cloud-based production scale infrastructure instantaneously. Request a demo at dataengineeringpodcast.com/metis-machine to learn more about how Metis Machine is operationalizing data science.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I'm interviewing Mark Marinelli about data mastering for modern platforms

Interview

Introduction

How did you get involved in the area of data management?

Can you start by establishing a definition of data mastering that we can work from?

How does the master data set get used within the overall analytical and processing systems of an organization?

What is the traditional workflow for creating a master data set?

What has changed in the current landscape of businesses and technology platforms that makes that approach impractical?

What are the steps that an organization can take to evolve toward an agile approach to data mastering?

At what scale of company or project does it make sense to start building a master data set?

What are the limitations of using ML/AI to merge data sets?

What are the limitations of a golden master data set in practice?

Are there particular formats of data or types of entities that pose a greater challenge when creating a canonical format for them?

Are there specific problem domains that are more likely to benefit from a master data set?

Once a golden master has been established, how are changes to that information handled in practice? (e.g. versioning of the data)

What storage mechanisms are typically used for managing a master data set?
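One widely used answer to the versioning question above is to never overwrite a master record: each change closes out the old version and appends a new one, in the style of a Type 2 slowly changing dimension. The sketch below is a hypothetical in-memory illustration of that idea; the class and field names are assumptions for this example, not any particular platform's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MasterRecord:
    entity_id: int
    attributes: dict
    valid_from: datetime
    valid_to: Optional[datetime] = None  # None marks the current version

class GoldenMaster:
    """Keeps every historical version of each entity (SCD Type 2 style)."""

    def __init__(self):
        self.history: list[MasterRecord] = []

    def current(self, entity_id: int) -> Optional[MasterRecord]:
        """Return the open (current) version of an entity, if any."""
        for rec in reversed(self.history):
            if rec.entity_id == entity_id and rec.valid_to is None:
                return rec
        return None

    def upsert(self, entity_id: int, attributes: dict) -> None:
        """Close the current version (if changed) and append a new one."""
        now = datetime.now(timezone.utc)
        current = self.current(entity_id)
        if current is not None:
            if current.attributes == attributes:
                return  # nothing changed; keep the existing version open
            current.valid_to = now  # close out the old version
        self.history.append(MasterRecord(entity_id, attributes, valid_from=now))
```

Because old versions are retained with their validity intervals, downstream reports can be reproduced as of any past date, which is one of the main reasons to version rather than overwrite a golden master.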

Are there particular security, auditing, or access concerns that engineers should be considering when managing their golden master that go beyond the rest of their data infrastructure?

How do you manage latency issues when trying to reference the same entities from multiple disparate systems?

What have you found to be the most common stumbling blocks for a group that is implementing a master data platform?

What suggestions do you have to help prevent such a project from being derailed?

What resources do you recommend for someone looking to learn more about the theoretical and practical aspects of data mastering?