In this episode, host Jason Foster sits down with Barry Panayi, Chief Data and Insight Officer at John Lewis Partnership, to discuss the evolving role of the Chief Data Officer (CDO). Barry shares his journey from coding and analytics to leading data and insights at iconic brands like John Lewis and Waitrose. He offers a unique perspective on how CDOs can transition from technical experts to strategic business leaders. Barry's candid reflections and actionable advice make this episode essential listening for data professionals, aspiring CDOs, and anyone interested in the intersection of data, technology, and business leadership. Don't miss this engaging and insightful conversation!
*****
Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023, and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024.
Summary In this episode of the Data Engineering Podcast Dan Bruckner, co-founder and CTO of Tamr, talks about the application of machine learning (ML) and artificial intelligence (AI) in master data management (MDM). Dan shares his journey from working at CERN to becoming a data expert and discusses the challenges of reconciling large-scale organizational data. He explains how data silos arise from independent teams and highlights the importance of combining traditional techniques with modern AI to address the nuances of data reconciliation. Dan emphasizes the transformative potential of large language models (LLMs) in creating more natural user experiences, improving trust in AI-driven data solutions, and simplifying complex data management processes. He also discusses the balance between using AI for complex data problems and the necessity of human oversight to ensure accuracy and trust.
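To make the entity-resolution idea in the summary concrete, here is a minimal sketch, assuming invented records and a simple string-similarity threshold; it is illustrative only and not Tamr's actual approach, which combines ML models with human feedback.

```python
# Minimal, hypothetical entity-resolution sketch (illustrative only).
# Normalize records, score candidate pairs with string similarity, and
# cluster records whose similarity clears a threshold.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "ACME Corp.", "city": "New York"},
    {"id": 2, "name": "Acme Corporation", "city": "New York"},
    {"id": 3, "name": "Globex LLC", "city": "Springfield"},
]

def normalize(rec):
    # Crude normalization: lowercase and strip punctuation.
    name = rec["name"].lower().replace(".", "").replace(",", "")
    return f"{name} {rec['city'].lower()}"

def similarity(a, b):
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Union-find so transitive matches end up in the same cluster.
parent = {r["id"]: r["id"] for r in records}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

THRESHOLD = 0.85
for a, b in combinations(records, 2):
    if similarity(a, b) >= THRESHOLD:
        union(a["id"], b["id"])

clusters = {}
for r in records:
    clusters.setdefault(find(r["id"]), []).append(r["id"])
print(clusters)  # e.g. {2: [1, 2], 3: [3]}
```

Production MDM systems replace the naive pairwise comparison with blocking, learned similarity models, and human review, but the clustering step has the same basic shape.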
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us don't miss Data Citizens® Dialogues, the forward-thinking podcast brought to you by Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. In every episode of Data Citizens® Dialogues, industry leaders unpack data’s impact on the world; like in their episode “The Secret Sauce Behind McDonald’s Data Strategy”, which digs into how AI-driven tools can be used to support crew efficiency and customer interactions. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. The Data Citizens Dialogues podcast is bringing the data conversation to you, so start listening now! Follow Data Citizens Dialogues on Apple, Spotify, YouTube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Dan Bruckner about the application of ML and AI techniques to the challenge of reconciling data at the scale of business.

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by giving an overview of the different ways that organizational data becomes unwieldy and needs to be consolidated and reconciled?
- How does that reconciliation relate to the practice of "master data management"?
- What are the scaling challenges with the current set of practices for reconciling data?
- ML has been applied to data cleaning for a long time in the form of entity resolution, etc. How has the landscape evolved or matured in recent years?
- What (if any) transformative capabilities do LLMs introduce?
- What are the missing pieces/improvements that are necessary to make current AI systems usable out-of-the-box for data cleaning?
- What are the strategic decisions that need to be addressed when implementing ML/AI techniques in the data cleaning/reconciliation process?
- What are the risks involved in bringing ML to bear on data cleaning for inexperienced teams?
- What are the most interesting, innovative, or unexpected ways that you have seen ML techniques used in data resolution?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on using ML/AI in master data management?
- When is ML/AI the wrong choice for data cleaning/reconciliation?
- What are your hopes/predictions for the future of ML/AI applications in MDM and data cleaning?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- Tamr
- Master Data Management
- CERN
- LHC
- Michael Stonebraker
- Conway's Law
- Expert Systems
- Information Retrieval
- Active Learning

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Summary In this episode of the Data Engineering Podcast Lior Barak shares his insights on developing a three-year strategic vision for data management. He discusses the importance of having a strategic plan for data, highlighting the need for data teams to focus on impact rather than just enablement. He introduces the concept of a "data vision board" and explains how it can help organizations outline their strategic vision by considering three key forces: regulation, stakeholders, and organizational goals. Lior emphasizes the importance of balancing short-term pressures with long-term strategic goals, quantifying the cost of data issues to prioritize effectively, and maintaining the strategic vision as a living document through regular reviews. He encourages data teams to shift from being enablers to impact creators and provides practical advice on implementing a data vision board, setting clear KPIs, and embracing a product mindset to create tangible business impacts through strategic data management.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today to learn how Datafold can automate your migration and ensure source to target parity.
- Your host is Tobias Macey and today I'm interviewing Lior Barak about how to develop your three year strategic vision for data.

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by giving an outline of the types of problems that occur as a result of not developing a strategic plan for an organization's data systems?
- What is the format that you recommend for capturing that strategic vision?
- What are the types of decisions and details that you believe should be included in a vision statement?
- Why is a 3 year horizon beneficial? What does that scale of time encourage/discourage in the debate and decision-making process?
- Who are the personas that should be included in the process of developing this strategy document?
- Can you walk us through the steps and processes involved in developing the data vision board for an organization?
- What are the time-frames or milestones that should lead to revisiting and revising the strategic objectives?
- What are the most interesting, innovative, or unexpected ways that you have seen a data vision strategy used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data strategy development?
- When is a data vision board the wrong choice?
- What are some additional resources or practices that you recommend teams invest in as a supplement to this strategic vision exercise?

Contact Info
- LinkedIn
- Substack

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- Vision Board Overview
- Episode 397: Defining A Strategy For Your Data Products
- Minto Pyramid Principle
- KPI == Key Performance Indicator
- OKR == Objectives and Key Results
- Phil Jackson: Eleven Rings (affiliate link)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
In this episode, host Jason Foster is joined by Susie Moan, Chief Data Officer at Currys and member of Cynozure's Advisory Board. They explore some of the highlights of Susie's accomplished career in data and AI, highlighting the significance of data value in a commercial business context.
The conversation also covers how Currys successfully balances growth with cost-saving initiatives, and emphasises the importance of aligning AI initiatives with an organisation's overarching business strategy.
*****
Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023, and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024.
Summary The core task of data engineering is managing the flows of data through an organization. Ensuring those flows execute on schedule and without error is the role of the data orchestrator. Which orchestration engine you choose impacts the ways that you architect the rest of your data platform. In this episode Hugo Lu shares his thoughts as the founder of an orchestration company on how to think about data orchestration and data platform design as we navigate the current era of data engineering.
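As a toy illustration of what any orchestrator fundamentally does, independent of the specific engines discussed in the episode, the sketch below declares a small DAG of tasks and executes them in dependency order; the task names are invented for the example.

```python
# Toy illustration of the core job of a data orchestrator: declare task
# dependencies as a DAG and execute tasks in topological order.
# (Real engines add scheduling, retries, state, and observability on top.)
from graphlib import TopologicalSorter

def extract():
    print("extracting raw data")

def transform():
    print("transforming data")

def load():
    print("loading into the warehouse")

# Each key depends on the tasks in its set.
dag = {
    "transform": {"extract"},
    "load": {"transform"},
}

tasks = {"extract": extract, "transform": transform, "load": load}

for name in TopologicalSorter(dag).static_order():
    tasks[name]()  # run each task only after its upstream dependencies
```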
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today to learn how Datafold can automate your migration and ensure source to target parity.
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us don't miss Data Citizens® Dialogues, the forward-thinking podcast brought to you by Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. In every episode of Data Citizens® Dialogues, industry leaders unpack data’s impact on the world, from big picture questions like AI governance and data sharing to more nuanced questions like, how do we balance offense and defense in data management? In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. The Data Citizens Dialogues podcast is bringing the data conversation to you, so start listening now! Follow Data Citizens Dialogues on Apple, Spotify, YouTube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Hugo Lu about the data platform and orchestration ecosystem and how to navigate the available options.

Interview
- Introduction
- How did you get involved in building data platforms?
- Can you describe what an orchestrator is in the context of data platforms?
- There are many other contexts in which orchestration is necessary. What are some examples of how orchestrators have adapted (or failed to adapt) to the times?
- What are the core features that are necessary for an orchestrator to have when dealing with data-oriented workflows?
- Beyond the bare necessities, what are some of the other features and design considerations that go into building a first-class data platform or orchestration system?
- There have been several generations of orchestration engines over the past several years. How would you characterize the different coarse groupings of orchestration engines across those generational boundaries?
- How do the characteristics of a data orchestrator influence the overarching architecture of an organization's data platform/data operations? What about the reverse?
- How have the cycles of ML and AI workflow requirements impacted the design requirements for data orchestrators?
- What are the most interesting, innovative, or unexpected ways that you have seen data orchestrators used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data orchestration?
- When is an orchestrator the wrong choice?
- What are your predictions and/or hopes for the future of data orchestration?

Contact Info
- Medium
- LinkedIn

Parting Question
- From your perspective, what is the biggest thing data teams are missing in the technology today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- Orchestra
- Previous Episode: Overview Of The State Of Data Orchestration
- Cron
- ArgoCD
- DAG
- Kubernetes
- Data Mesh
- Airflow
- SSIS == SQL Server Integration Services
- Pentaho
- Kettle
- DataVolo
- NiFi
- Podcast Episode
- Dagster
- gRPC
- Coalesce
- Podcast Episode
- dbt
- DataHub
- Palantir

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Discover cutting-edge data governance strategies with AWS in this exciting customer panel. Learn how leading organizations transform their data management practices to accelerate data-driven decisions while ensuring security and compliance. Hear firsthand from AWS customers about their innovative approaches to automating data integration and quality, curating data to prevent proliferation, and using centralized catalogs to boost data literacy. Explore how these pioneers are tackling emerging trends like generative AI and applying precise permissions for confident data sharing. Don’t miss this chance to gain actionable insights and see real-world examples of successful data governance in action.
Learn more: AWS re:Invent: https://go.aws/reinvent. More AWS events: https://go.aws/3kss9CP
Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4
About AWS: Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
#AWSreInvent #AWSreInvent2024
Summary In this episode of the Data Engineering Podcast the inimitable Max Beauchemin talks about reusability in data pipelines. The conversation explores the "write everything twice" problem, where similar pipelines are built without code reuse, and discusses the challenges of managing different SQL dialects and relational databases. Max also touches on the evolving role of data engineers, drawing parallels with front-end engineering, and suggests that generative AI could facilitate knowledge capture and distribution in data engineering. He encourages the community to share reference implementations and templates to foster collaboration and innovation, and expresses hopes for a future where code reuse becomes more prevalent.
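As a hedged sketch of the reuse idea discussed here, not a reference implementation from the episode, the example below keeps one parameterized SQL template that several pipelines share instead of each pipeline carrying its own near-identical copy; the table and column names are invented, and dialect differences are deliberately ignored.

```python
# Hypothetical illustration of pipeline reuse: one parameterized SQL template
# instead of copy-pasting near-identical queries across pipelines.
# (Dialect handling, testing, and templating frameworks like Jinja are elided.)
from string import Template

DAILY_ROLLUP = Template(
    """
    SELECT $date_col::date AS day, COUNT(*) AS events
    FROM $source_table
    WHERE $date_col >= $start_date
    GROUP BY 1
    """
)

def daily_rollup_sql(source_table: str, date_col: str, start_date: str) -> str:
    # Each pipeline supplies its own table/column names; the logic lives in one place.
    return DAILY_ROLLUP.substitute(
        source_table=source_table, date_col=date_col, start_date=f"'{start_date}'"
    )

print(daily_rollup_sql("web_events", "event_ts", "2024-01-01"))
print(daily_rollup_sql("mobile_events", "occurred_at", "2024-01-01"))
```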
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm joined again by Max Beauchemin to talk about the challenges of reusability in data pipelines.

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by sharing your current thesis on the opportunities and shortcomings of code and component reusability in the data context?
- What are some ways that you think about what constitutes a "component" in this context?
- The data ecosystem has arguably grown more varied and nuanced in recent years. At the same time, the number and maturity of tools has grown. What is your view on the current trend in productivity for data teams and practitioners?
- What do you see as the core impediments to building more reusable and general-purpose solutions in data engineering?
- How can we balance the actual needs of data consumers against their requests (whether well- or un-informed) to help increase our ability to better design our workflows for reuse?
- In data engineering there are two broad approaches; code-focused or SQL-focused pipelines. In principle one would think that code-focused environments would have better composability. What are you seeing as the realities in your personal experience and what you hear from other teams?
- When it comes to SQL dialects, dbt offers the option of Jinja macros, whereas SDF and SQLMesh offer automatic translation. There are also tools like PRQL and Malloy that aim to abstract away the underlying SQL. What are the tradeoffs across those options that help or hinder the portability of transformation logic?
- Which layers of the data stack/steps in the data journey do you see the greatest opportunity for improving the creation of more broadly usable abstractions/reusable elements?
- low/no code systems for code reuse
- impact of LLMs on reusability/composition
- impact of background on industry practices (e.g. DBAs, sysadmins, analysts vs. SWE, etc.)
- polymorphic data models (e.g. activity schema)
- What are the most interesting, innovative, or unexpected ways that you have seen teams address composability and reusability of data components?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data-oriented tools and utilities?
- What are your hopes and predictions for sharing of code and logic in the future of data engineering?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- Max's Blog Post
- Airflow
- Superset
- Tableau
- Looker
- PowerBI
- Cohort Analysis
- NextJS
- Airbyte
- Podcast Episode
- Fivetran
- Podcast Episode
- Segment
- dbt
- SQLMesh
- Podcast Episode
- Spark
- LAMP Stack
- PHP
- Relational Algebra
- Knowledge Graph
- Python Marshmallow
- Data Warehouse Lifecycle Toolkit (affiliate link)
- Entity Centric Data Modeling Blog Post
- Amplitude
- OSACon presentation
- ol-data-platform Tobias' team's data platform code

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
🌟 Session Overview 🌟
Session Name: Building Knowledge Graphs
Speaker: Sumit Pal
Session Description: Knowledge graphs are all around us, and we use them every day. Many emerging data management products, such as Data Catalogs/Fabric and MDM products, leverage knowledge graphs as their engines. Building a knowledge graph is not a one-off engineering project. It requires collaboration between functional domain experts, data engineers, data modelers, and key sponsors. It also encompasses technology, strategy, and organizational aspects; focusing solely on technology increases the risk of a knowledge graph's failure.
Knowledge graphs are effective tools for capturing and structuring large amounts of structured, unstructured, and semi-structured data. As such, they are becoming the backbone of various systems, including semantic search engines, recommendation systems, conversational bots, and data fabric.
This session guides data and analytics professionals in demonstrating the value of knowledge graphs and building semantic applications.
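As a minimal, tool-agnostic sketch of the core idea, a knowledge graph can be represented as subject-predicate-object triples that are queried by pattern matching; the entities and relations below are invented for illustration, and real systems typically use RDF stores or property-graph databases instead.

```python
# Minimal, tool-agnostic knowledge-graph sketch: store facts as
# (subject, predicate, object) triples and answer questions by pattern matching.
# Entities and relations are invented for illustration.
triples = {
    ("DataCatalog", "is_a", "DataManagementProduct"),
    ("DataCatalog", "uses", "KnowledgeGraph"),
    ("MDM", "uses", "KnowledgeGraph"),
    ("KnowledgeGraph", "captures", "StructuredData"),
    ("KnowledgeGraph", "captures", "UnstructuredData"),
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Which products use a knowledge graph as their engine?
print(query(predicate="uses", obj="KnowledgeGraph"))
```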
🚀 About Big Data and RPA 2024 🚀
Unlock the future of innovation and automation at Big Data & RPA Conference Europe 2024! 🌟 This unique event brings together the brightest minds in big data, machine learning, AI, and robotic process automation to explore cutting-edge solutions and trends shaping the tech landscape. Perfect for data engineers, analysts, RPA developers, and business leaders, the conference offers dual insights into the power of data-driven strategies and intelligent automation. 🚀 Gain practical knowledge on topics like hyperautomation, AI integration, advanced analytics, and workflow optimization while networking with global experts. Don’t miss this exclusive opportunity to expand your expertise and revolutionize your processes—all from the comfort of your home! 📊🤖✨
📅 Yearly Conferences: Curious about how the field has evolved? Check out our archive of past Big Data & RPA sessions. Watch the strategies and technologies evolve in our videos! 🚀 🔗 Find Other Years' Videos: 2023 Big Data Conference Europe https://www.youtube.com/playlist?list=PLqYhGsQ9iSEpb_oyAsg67PhpbrkCC59_g 2022 Big Data Conference Europe Online https://www.youtube.com/playlist?list=PLqYhGsQ9iSEryAOjmvdiaXTfjCg5j3HhT 2021 Big Data Conference Europe Online https://www.youtube.com/playlist?list=PLqYhGsQ9iSEqHwbQoWEXEJALFLKVDRXiP
💡 Stay Connected & Updated 💡
Don’t miss out on any updates or upcoming event information from Big Data & RPA Conference Europe. Follow us on our social media channels and visit our website to stay in the loop!
🌐 Website: https://bigdataconference.eu/, https://rpaconference.eu/ 👤 Facebook: https://www.facebook.com/bigdataconf, https://www.facebook.com/rpaeurope/ 🐦 Twitter: @BigDataConfEU, @europe_rpa 🔗 LinkedIn: https://www.linkedin.com/company/73234449/admin/dashboard/, https://www.linkedin.com/company/75464753/admin/dashboard/ 🎥 YouTube: http://www.youtube.com/@DATAMINERLT
In this episode, host Jason Foster sits down with Osama Khan, Deputy Vice-Chancellor, Academic and Professor in Finance at Aston University, Birmingham. Together, they explore how data is transforming the landscape of higher education, from revolutionising decision-making to navigating the challenges and opportunities brought about by AI. Osama also shares insights into Aston University's pioneering initiatives, including programs designed to equip students with essential skills in AI, sustainability and future-focused competencies that prepare them for the evolving world.
*****
Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023, and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024.
Learn how AWS is reimagining data streaming with end-to-end managed and serverless capabilities across core infrastructure, systems operations, data integration, data processing, and data management for customers to modernize their data platforms. Learn about new and recent innovations for collecting, processing, and analyzing streaming data, including improved scalability, high resiliency, lower latency, and native integrations with many AWS and third-party services. Join this session to see how you can use AWS streaming solutions to build scalable, resilient data streaming applications for faster insights and improved decision-making.
Learn more: AWS re:Invent: https://go.aws/reinvent. More AWS events: https://go.aws/3kss9CP
Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4
About AWS: Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
#AWSreInvent #AWSreInvent2024
Summary In this episode of the Data Engineering Podcast Sam Kleinman talks about the pivotal role of databases in software engineering. Sam shares his journey into the world of data and discusses the complexities of database selection, highlighting the trade-offs between different database architectures and how these choices affect system design, query performance, and the need for ETL processes. He emphasizes the importance of understanding specific requirements to choose the right database engine and warns against over-engineering solutions that can lead to increased complexity. Sam also touches on the tendency of engineers to move logic to the application layer due to skepticism about database longevity and advises teams to leverage database capabilities instead. Finally, he identifies a significant gap in data management tooling: the lack of easy-to-use testing tools for database interactions, highlighting the need for better testing paradigms to ensure reliability and reduce bugs in data-driven applications.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today to learn how Datafold can automate your migration and ensure source to target parity.
- Your host is Tobias Macey and today I'm interviewing Sam Kleinman about database tradeoffs across operating environments and axes of scale.

Interview
- Introduction
- How did you get involved in the area of data management?
- The database engine you use has a substantial impact on how you architect your overall system. When starting a greenfield project, what do you see as the most important factor to consider when selecting a database?
- points of friction introduced by database capabilities
- embedded databases (e.g. SQLite, DuckDB, LanceDB), when to use and when do they become a bottleneck (see the sketch after this entry)
- single-node database engines (e.g. Postgres, MySQL), when are they legitimately a problem
- distributed databases (e.g. CockroachDB, PlanetScale, MongoDB)
- polyglot storage vs. general-purpose/multimodal databases
- federated queries, benefits and limitations
- ease of integration vs. variability of performance and access control

Contact Info
- LinkedIn
- GitHub

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- MongoDB
- Neon
- Podcast Episode
- GlareDB
- NoSQL
- S3 Conditional Write
- Event driven architecture
- CockroachDB
- Couchbase
- Cassandra

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
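To ground the embedded-database option listed in the interview outline above, here is a small sketch using Python's built-in sqlite3 module; the table and query are invented for illustration.

```python
# Small illustration of the embedded-database option (e.g. SQLite): the engine
# runs in-process with no server to operate, which is the trade-off discussed above.
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database for the example
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO events (kind, ts) VALUES (?, ?)",
    [("click", "2024-01-01T10:00:00"), ("purchase", "2024-01-01T10:05:00")],
)
conn.commit()

for row in conn.execute("SELECT kind, COUNT(*) FROM events GROUP BY kind"):
    print(row)

conn.close()
```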
Discover how to transform your data architecture with the insights and techniques presented in Managing Data as a Product by Andrea Gioia. In this comprehensive guide, you'll explore how to design, implement, and maintain data-product-centered systems to meet modern demands, achieving scalable and sustainable data management tailored to your organization's needs.

What this Book will help me do
- Understand the principles of data-product-centered architectures and their advantages.
- Learn to design, develop, and operate data products in production settings.
- Explore strategies to manage the lifecycle of data products efficiently.
- Gain insights into team topologies and data ownership for distributed systems.
- Discover data modeling techniques for AI-ready architectures.

Author(s)
Andrea Gioia is a renowned data architect and the creator of the Open Data Mesh Initiative. With over 20 years of experience, Andrea has successfully led complex data projects and is passionate about sharing his expertise. His writing is practical and driven by real-world challenges, aiming to equip engineers with actionable knowledge.

Who is it for?
This book is ideal for data engineers, software architects, and engineering leaders involved in shaping innovative data architectures. If you have foundational knowledge of data engineering and are eager to advance your expertise by adopting data-product principles, this book will suit your needs. It is for professionals aiming to modernize and optimize their approach to organizational data management.
The world of SAP is undergoing a major transformation, with many customers either planning or actively modernizing their SAP landscapes as part of the S/4HANA digital transformation. Given the extensive SAP transformation efforts adopted by nearly all SAP customers in recent years and the profound impact these digital changes have had on their business models and IT organizations, the authors decided to write this book. As customers embark on their SAP on AWS journey, they face three main challenges: deciding on the overall strategy, selecting the right business use cases and implementing them effectively. This book aims to address these challenges by guiding readers through the process of identifying and executing the appropriate use cases. It will highlight how customers can harness AWS services beyond merely hosting their SAP systems on AWS, demonstrating the potential of these services to drive innovation. This book covers the entire journey, from defining strategy and identifying business use cases to their implementation, providing practical tips, strategies, and insights. It serves as an essential guide for customers planning to migrate or those who have already migrated their SAP workloads to AWS, helping them explore beyond just the infrastructure aspects of their journey.

You will:
- Discover how to go beyond just hosting SAP systems on AWS, using the full range of AWS services to innovate and extend your SAP applications.
- Learn how to identify the right business use cases and implement them effectively, with practical examples and real-world scenarios.
- Develop the mindset and skills needed to architect modern, cloud-native, event-driven architectures, balancing trade-offs between simplicity, efficiency, and cost.

This book is for: Business leaders, IT professionals, and SAP specialists who are looking to modernize their SAP landscapes by leveraging AWS services.
Summary In this episode of the Data Engineering Podcast, Anna Geller talks about the integration of code and UI-driven interfaces for data orchestration. Anna defines data orchestration as automating the coordination of workflow nodes that interact with data across various business functions, discussing how it goes beyond ETL and analytics to enable real-time data processing across different internal systems. She explores the challenges of using existing scheduling tools for data-specific workflows, highlighting limitations and anti-patterns, and discusses Kestra's solution, a low-code orchestration platform that combines code-driven flexibility with UI-driven simplicity. Anna delves into Kestra's architectural design, API-first approach, and pluggable infrastructure, and shares insights on balancing UI and code-driven workflows, the challenges of open-core business models, and innovative user applications of Kestra's platform.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us you should listen to Data Citizens® Dialogues, the forward-thinking podcast from the folks at Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. They address questions around AI governance, data sharing, and working at global scale. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. While data is shaping our world, Data Citizens Dialogues is shaping the conversation. Subscribe to Data Citizens Dialogues on Apple, Spotify, Youtube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Anna Geller about incorporating both code and UI driven interfaces for data orchestration.

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by sharing a definition of what constitutes "data orchestration"?
- There are many orchestration and scheduling systems that exist in other contexts (e.g. CI/CD systems, Kubernetes, etc.). Those are often adapted to data workflows because they already exist in the organizational context. What are the anti-patterns and limitations that approach introduces in data workflows?
- What are the problems that exist in the opposite direction of using data orchestrators for CI/CD, etc.?
- Data orchestrators have been around for decades, with many different generations and opinions about how and by whom they are used. What do you see as the main motivation for UI vs. code-driven workflows?
- What are the benefits of combining code-driven and UI-driven capabilities in a single orchestrator?
- What constraints does it necessitate to allow for interoperability between those modalities?
- Data Orchestrators need to integrate with many external systems. How does Kestra approach building integrations and ensure governance for all their underlying configurations?
- Managing workflows at scale across teams can be challenging in terms of providing structure and visibility of dependencies across workflows and teams. What features does Kestra offer so that all pipelines and teams stay organised?
- What are
Microsoft Fabric has continued to grow as a platform of choice for building applications for enterprises and ISVs. Learn how large enterprises like LSEG are building data distribution platforms, and how industry leading ISVs are harnessing the power of Fabric’s unified data management and workload dev APIs to accelerate app development. The session includes a high-level demo of Fabric’s Workload Development Kit and monetization guidance via Azure Marketplace.
𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀: * Dipti Borkar * Phil Cheetham
𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻: This is one of many sessions from the Microsoft Ignite 2024 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com
BRK200 | English (US) | Data
#MSIgnite
Discover the power of GenAI in Toyota’s powertrain development. Learn how Toyota improves information collection, enhances decision-making, and boosts productivity for powertrain engineers with a multi-agent system built with Azure AI Foundry and Azure Cosmos DB. Join us to get a glimpse into the future of agent technology and data management with Azure, and hear from Toyota first-hand how they are using AI to innovate vehicle design.
𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀: * Mark Brown * Marco Casalaina * Kosuke Miyasaka * Kenji Onishi
𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻: This is one of many sessions from the Microsoft Ignite 2024 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com
BRK117 | English (US) | AI
#MSIgnite
AI has made its way into the boardroom, bringing with it a wave of excitement—and for some, a dose of confusion. For those in the know, AI has become an accelerator for operational and strategic improvements. But for others, the rapid pace and mixed messages have led to uncertainty. In our latest episode, Cynozure's CEO and Founder, Jason Foster, explores the current state of AI in leadership spaces and the opportunities it presents for data leaders and professionals.
*****
Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023, and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024.
Introducing SQL database in Fabric: Discover the future of data management and unlock new scenarios that drive your business forward. Join us as we showcase the first fully SaaS database experience in Microsoft Fabric. Experience the simplicity of an integrated development environment that empowers you to quickly harness the power of an AI-driven analytics platform. Learn how to access both transactional and analytical data in one place without compromising application performance.
𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀: * Panos/Panagiotis Antonopoulos * Anna Hoffman * Asad Khan
𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻: This is one of many sessions from the Microsoft Ignite 2024 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com
BRK196 | English (US) | Data
#MSIgnite
Building a Data Mesh

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Episode 23 of Data Product Management in Action, our host Frannie Helforoush is joined by Soheil Mirchi, a technical product manager. Soheil discusses his company’s shift from a centralized data lake to a decentralized data mesh architecture. He outlines the three types of data products—source-aligned, aggregated, and customer-facing—and highlights the importance of data contracts and testing. Learn about strategies for measuring success through metrics and customer feedback, along with lessons on starting small and fostering data democratization. Tune in for essential insights on effective data management!

About our host Frannie Helforoush: Frannie's journey began as a software engineer and evolved into a strategic product manager. Now, as a data product manager, she leverages her expertise in both fields to create impactful solutions. Frannie thrives on making data accessible and actionable, driving product innovation, and ensuring product thinking is integral to data management. Connect with Frannie on LinkedIn.

About our guest Soheil Mirchi: Soheil is a Technical Product Manager at Temedica, a health insights company focused on transforming complex healthcare and pharmaceutical data into actionable insights. Leading a team of data engineers, scientists, and analysts, Soheil drives the development of cutting-edge data products while guiding the company’s transition to a data mesh architecture. He is passionate about empowering teams with the autonomy to manage their own data products and believes in a collaborative approach to driving innovation in the health tech space. Connect with Soheil on LinkedIn.

All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!
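As a hedged illustration of the data contracts mentioned in this episode, and not Soda's or Temedica's actual implementation, the sketch below validates producer records against a simple declared schema before they would be published to downstream consumers; the field names and rules are invented.

```python
# Hedged illustration of a data contract check: validate producer records against
# a declared schema before publishing them to downstream consumers.
# (Field names and rules are invented; this is not any vendor's implementation.)

CONTRACT = {
    "patient_id": str,   # required
    "region": str,       # required
    "visit_count": int,  # required, must be >= 0
}

def violations(record: dict) -> list[str]:
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}: expected {expected_type.__name__}")
    if isinstance(record.get("visit_count"), int) and record["visit_count"] < 0:
        problems.append("visit_count must be >= 0")
    return problems

good = {"patient_id": "p-001", "region": "EU", "visit_count": 3}
bad = {"patient_id": "p-002", "visit_count": "three"}

print(violations(good))  # []
print(violations(bad))   # missing region, wrong type for visit_count
```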
In this episode, host Jason Foster sits down with Paula Bobbett, Chief Digital Officer at Boots. They discuss her extensive digital experience, which includes roles at Dixons Carphone, British Airways, Debenhams and Avon. They also explore the importance of an omnichannel strategy that bridges online and in-store customer experiences, using customer insights and AI tools.
*****
Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023, and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024.