Headline Keynote: Data Mesh, Data Products and AI integration
Description
It is now over six years since the publication of Zhamak Dehghani's paper "How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh", which had a major impact on the data and analytics industry.
The paper highlighted major failures in data architecture and called for a rethink of both architecture and data provisioning: creating a data supply chain and democratising data engineering so that business domains can build reusable data products and make them available as self-governing services.
Since then, many companies have adopted Data Mesh strategies, some software products have been repositioned, and new ones have emerged that emphasise democratisation. But has what followed fully addressed the problems Data Mesh set out to solve? And what new problems are arising as organisations try to make data safely available to AI projects at machine scale?
In this unmissable session, Big Data LDN Chair Mike Ferguson sits down with Zhamak Dehghani to discuss what has happened since Data Mesh emerged. The conversation will look at:
● The drivers behind Data Mesh
● Revisiting Data Mesh to clarify what a data product is and what Data Mesh is intended to solve
● Did data architecture really change, or are companies still implementing Data Mesh on existing architectures?
● What about the technology to support it: is Data Fabric the answer, or best-of-breed tools?
● How critical is organisation to a successful Data Mesh implementation?
● Roadblocks in the way of success, e.g. the lack of metadata standards
● How does Data Mesh impact AI?
● What’s next on the horizon?