As organizations grapple with data spread across various storage locations, solutions like Coginiti Hybrid Query offer a much-needed alternative to fragmented tools. Published at: https://www.eckerson.com/articles/a-novel-approach-for-reducing-cloud-data-warehouse-expenses-from-coginiti
Data management practices have changed substantially since the early 1990s and the dawn of data warehousing. Published at: https://www.eckerson.com/articles/the-continuing-evolution-of-data-management
A zone-based data refinery creates an agile, adaptable data environment. It turns a monolithic data warehouse into a flexible architecture that gracefully adapts to new and unanticipated business requirements while maximizing reuse and standards. Published at: https://www.eckerson.com/articles/how-zone-based-data-processing-turns-your-monolithic-data-warehouse-into-a-flexible-modern-data-architecture
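To make the zone idea concrete, here is a minimal Python sketch of promoting a record through successive zones. The zone names (landing, standardized, curated) and the transforms are illustrative assumptions, not a standard prescribed by the article.

```python
# Minimal sketch of zone-based processing: raw data lands unchanged,
# then is promoted through standardization and consumption zones.
# Zone names and transforms are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Zone:
    name: str
    transform: Callable[[dict], dict]  # applied on promotion into this zone

def standardize(record: dict) -> dict:
    # Conform names and values: normalize keys, trim strings.
    return {k.strip().lower(): (v.strip() if isinstance(v, str) else v)
            for k, v in record.items()}

def curate(record: dict) -> dict:
    # A business rule applied only in the consumption zone.
    record["is_active"] = record.get("status") == "active"
    return record

PIPELINE = [
    Zone("landing", transform=lambda r: r),       # store as received
    Zone("standardized", transform=standardize),  # conformed names/types
    Zone("curated", transform=curate),            # business-ready output
]

def promote(record: dict) -> dict:
    """Run a record through each zone in order. Any single zone can be
    extended or re-pointed without rebuilding the whole warehouse."""
    for zone in PIPELINE:
        record = zone.transform(record)
    return record

print(promote({" Status ": "active", "Name": " Acme "}))
```

Because each zone is a small, replaceable step, a new business requirement typically means adding or adjusting one transform rather than reworking a monolithic load process.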
Nothing has galvanized the data community more in recent months than two new architectural paradigms for managing enterprise data. On one side there is the data fabric: a centralized architecture that runs a variety of analytic services and applications on top of a layer of universal connectivity. On the other side is the data mesh: a decentralized architecture that empowers domain owners to manage their own data according to enterprise standards and make it available to peers as they desire.
Most data leaders are still trying to ferret out the implications of both approaches for their own data environments. One of those is Srinivasan Sankar, the enterprise data & analytics leader at Hanover Insurance Group. In this wide-ranging, back-and-forth discussion, Sankar and Eckerson explore the suitability of the data mesh for Hanover, how the data fabric might support a data mesh, whether a data mesh obviates the need for a data warehouse, and practical steps Hanover might take to implement a data mesh built on top of a data fabric.
Key Takeaways:
- What is the essence of a data mesh?
- How does it relate to the data fabric?
- Does the data mesh require a cultural transformation?
- Does the data mesh obviate the need for a data warehouse?
- How does data architecture as a service fit with the data mesh?
- What is the best way to roll out a data mesh?
- What's the role of a data catalog?
- What is a suitable roadmap for full implementation?
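To ground the fabric/mesh contrast discussed above, here is a hedged Python sketch of one way the two can combine: a fabric as a shared connectivity and discovery layer, with mesh-style domain teams publishing their own data products through it. All names here are hypothetical illustrations, not details from the discussion.

```python
# Illustrative sketch: a "fabric" as a central catalog/connectivity layer,
# with "mesh"-style domains owning and publishing their own data products.
from typing import Callable, Dict

class DataFabric:
    """Central layer: one place to discover and read any data product."""
    def __init__(self) -> None:
        self._products: Dict[str, Callable[[], list]] = {}

    def publish(self, name: str, reader: Callable[[], list]) -> None:
        # An enterprise standard enforced centrally: qualified names only.
        if "." not in name:
            raise ValueError("use <domain>.<product> naming")
        self._products[name] = reader

    def read(self, name: str) -> list:
        return self._products[name]()

fabric = DataFabric()

# Mesh side: each domain owns its data and decides what to expose.
fabric.publish("claims.open_claims", lambda: [{"claim_id": 1, "status": "open"}])
fabric.publish("policy.active_policies", lambda: [{"policy_id": 7, "line": "auto"}])

# A peer consumes another domain's product without knowing where it lives.
print(fabric.read("claims.open_claims"))
```

The sketch shows the complementary roles: the fabric supplies universal connectivity and shared standards, while domain owners retain control over what they publish and how.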
This audio blog is about the data lakehouse and how it is the latest incarnation from a handful of data lake providers seeking to usurp the rapidly changing cloud data warehousing market. It is one of three blogs featured in the data lakehouse series.
Originally published at: https://www.eckerson.com/articles/all-hail-the-data-lakehouse-if-built-on-a-modern-data-warehouse
Just-in-time design is the practice of designing working software in small increments that support a business-defined need or story. Just-in-time design, as well as just-in-time testing, is an integral part of the agile software methodology. In fact, you can’t really do agile without just-in-time design.
To help us understand the nuances of just-in-time design, we invited Aaron Fuller, a long-time data architect and member of Eckerson Group's consulting network. Across an 11-year career as the enterprise data architect for an insurance company, he modeled data, created technical designs for a broad range of systems, established governance and stewardship, and launched the company's enterprise data warehousing, business intelligence, and enterprise architecture programs. As principal consultant and owner of Superior Data Strategies since 2010, he leads a team of highly skilled data professionals who are uniquely capable of planning and executing agile data projects.
Data virtualization has been around for decades and has always been controversial. In the 1990s it was called virtual data warehousing (VDW) or, as some skeptics liked to say, "voodoo and witchcraft." It has also been known as query federation and, more recently, data services. The idea is that business users don't need to know the location of the data; they merely log into the data service and all data appears as if it were local to their server, modeled in a fashion that makes sense to them.
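A small Python sketch can illustrate the federation idea: a thin virtual layer resolves logical table names to whichever physical source holds the data, so the consumer queries one schema. The sources and table names below are invented for illustration (two in-memory SQLite databases stand in for remote systems).

```python
# Hedged sketch of data virtualization / query federation: the user
# queries a logical name and a thin layer routes it to the right source.
import sqlite3

# Two "remote" sources, simulated with in-memory SQLite databases.
sales_db = sqlite3.connect(":memory:")
sales_db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
sales_db.execute("INSERT INTO orders VALUES (1, 99.5)")

crm_db = sqlite3.connect(":memory:")
crm_db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm_db.execute("INSERT INTO customers VALUES (1, 'Acme')")

# Virtual catalog: logical table name -> (connection, physical table).
CATALOG = {
    "orders": (sales_db, "orders"),
    "customers": (crm_db, "customers"),
}

def query(logical_table: str) -> list:
    """Resolve a logical table to its source; the caller never needs
    to know which system actually holds the data."""
    conn, physical = CATALOG[logical_table]
    return conn.execute(f"SELECT * FROM {physical}").fetchall()

# The data "appears local" regardless of where it is physically stored.
print(query("orders"), query("customers"))
```

Production tools add distributed query optimization, caching, and security on top, but the core abstraction is this mapping from one logical schema to many physical sources.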
Andrew Sohn is the Global Head of Data and Analytics at Crawford & Company, a $1B+ service provider to the insurance and risk management industry, where he designed and leads its data and digital transformation strategy and program. With more than 25 years in the industry, Andrew has managed a broad range of infrastructure and application technologies. He’s a strong advocate of data virtualization technology and believes it is an integral part of a modern, agile data ecosystem.
Being a change agent is hard. It's tough to inspire people and get them motivated to work on a shared vision. To understand the mechanics of digitalization and tactics required to implement them, Wayne Eckerson invited Andrea Ballinger so that she could share her hard-won lessons from her illustrious career as a technology leader. Andrea is currently leading a transformation program at LSU, revamping the university’s information technology resources across multiple campuses. Prior to that, she served as Interim CEO and President for the University of Illinois Alumni Association and CTO of Illinois State University. She began her data career at the University of Illinois where she earned a reputation as the foremost data warehousing expert in higher education.
In this episode, Wayne Eckerson and Shakeeb Akhter dive into DataOps. They discuss what DataOps is, the goals and principles of DataOps, and reasons to adopt a DataOps strategy. Shakeeb also reveals the benefits gained from DataOps and the tools he uses. He is the Director of Enterprise Data Warehouse at Northwestern Medicine, responsible for the direction and oversight of data management, data engineering, and analytics.
In this podcast, Wayne Eckerson and Joe Caserta discuss data migration, compare cloud offerings from Amazon, Google, and Microsoft, and define and explain artificial intelligence.
You can contact Caserta by visiting caserta.com or by emailing [email protected]. Follow him on Twitter @joe_caserta.
Caserta is President of a New York City-based consulting firm he founded in 2001 and a longtime data guy. In 2004, Joe teamed up with data warehousing legend Ralph Kimball to write the book The Data Warehouse ETL Toolkit. Today he is one of the leading authorities on big data implementations. This makes Joe one of the few individuals with in-the-trenches experience on both sides of the data divide: traditional data warehousing on relational databases and big data implementations on Hadoop and the cloud.
In this podcast, Wayne Eckerson and James Serra discuss myths of modern data management. Some of the myths discussed include 'all you need is a data lake', 'the data warehouse is dead', 'we don't need OLAP cubes anymore', 'cloud is too expensive and latency is too slow', and 'you should always use a NoSQL product over an RDBMS.'
Serra is a big data and data warehousing solutions architect at Microsoft with over thirty years of IT experience. He is a popular blogger and speaker and has presented at dozens of Microsoft PASS and other events. Prior to Microsoft, Serra was an independent data warehousing and business intelligence architect and developer.
In this podcast, Henry Eckerson interviews Dave Wells on the current health and future of the data warehouse. Wells acknowledges that data warehouses are struggling, but argues they are still necessary and cannot be replaced by data lakes. He then explains what the role of the modern data warehouse should be, practical steps forward for evolving the data warehouse, and much more.
Wells is an advisory consultant, educator, and industry analyst dedicated to building meaningful connections throughout the path from data to business value. He works at the intersection of information management and business management, driving business impact through analytics, business intelligence, and active data management. More than forty years of information systems experience combined with over ten years of business management give him a unique perspective about the connections among business, information, data, and technology. Knowledge sharing and skill building are Dave’s passions, carried out through consulting, speaking, teaching, and writing.
He is now the practice director of data management at Eckerson Group, cofounder and director of education at eLearningCurve, and a faculty member at The Data Warehousing Institute.
In this podcast, Wayne Eckerson and Joe Caserta discuss what constitutes a modern data platform. Caserta is President of a New York City-based consulting firm he founded in 2001 and a longtime data guy. In 2004, Joe teamed up with data warehousing legend Ralph Kimball to write the book The Data Warehouse ETL Toolkit. Today he is one of the leading authorities on big data implementations. This makes Joe one of the few individuals with in-the-trenches experience on both sides of the data divide: traditional data warehousing on relational databases and big data implementations on Hadoop and the cloud. His perspectives are always insightful.
Dewayne Washington is back this week for part II of his Secrets of Data Analytics Leaders podcast with Eckerson Group. In part I, Dewayne and I discussed the role of the CIO. In this episode we discuss the keys to IT success.
Washington is a senior consultant with 20+ years of experience in BI and analytics across over two dozen verticals. He is the former BI manager at Dallas/Fort Worth International Airport and the current CIO at The Business of Intelligence. He is also the author of the book Get In The Stream, the ultimate guide to customer adoption, and his data warehousing and mobile solutions implementations have been featured in CIO Magazine and the Wall Street Journal. Washington is also a sought-after speaker and mentor for organizations striving to leverage BI and analytics to meet business goals, earning him the title "BI Pharaoh."