The data mesh makes business domain experts the owners of their data, which they deliver as a “data product” to analytics teams using a self-service data platform and a federated governance framework. Published at: https://www.eckerson.com/articles/why-enterprises-should-implement-the-data-mesh-with-dataops
An operating model for data & analytics is critical for aligning resources across the enterprise and balancing the needs for agility and governance. An effective operating model is essential to data & analytics success, and its creation and upkeep should be the primary focus of a chief data officer. Published at: https://www.eckerson.com/articles/an-operating-model-for-data-analytics
Summary There is a constant tension in business data between growing silos and breaking them down. Even when a tool is designed to integrate information as a guard against data isolation, it can easily become a silo of its own, where you have to make a point of using it to seek out information. To help distribute critical context about data assets and their status into the locations where work is being done, Nicholas Freund co-founded Workstream. In this episode he discusses the challenge of maintaining shared visibility and understanding of data work across the various stakeholders and his efforts to make it a seamless experience.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show! Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos. Prefect is the modern dataflow automation platform for the modern data stack, empowering data practitioners to build, run, and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it.
Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business-critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect. Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day. Especially once they realize 90% of all major data sources like Google Analytics, Salesforce, Adwords, Facebook, Spreadsheets, etc., are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get: real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping that precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24/7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2.
Summary In order to improve efficiency in any business you must first know what is contributing to wasted effort or missed opportunities. When your business operates across multiple locations it becomes even more challenging and important to gain insights into how work is being done. In this episode Tommy Yionoulis shares his experiences working in the service and hospitality industries and how that led him to found OpsAnalitica, a platform for collecting and analyzing metrics on multi-location businesses and their operational practices. He discusses the challenges of making data collection purposeful and efficient without distracting employees from their primary duties, and how business owners can use the provided analytics to support their staff.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder. Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability.
Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a Data Engineering Podcast listener, you get credits worth $5,000 when you become a customer. You wake up to a Slack message from your CEO, who’s upset because the company’s revenue dashboard is broken. You’re told to fix it before this morning’s board meeting, which is just minutes away. Enter Metaplane, the industry’s only self-serve data observability tool. In just a few clicks, you identify the issue’s root cause, conduct an impact analysis, and save the day. Data leaders at Imperfect Foods, Drift, and Vendr love Metaplane because it helps them catch, investigate, and fix data quality issues before their stakeholders ever notice they exist. Setup takes 30 minutes. You can literally get up and running with Metaplane by the end of this podcast. Sign up for a free-forever plan at dataengineeringpodcast.com/metaplane.
Business leaders are changing. Today, it’s not enough to be a strategic thinker and good people leader to be successful in the corporate world. Why? Modern business leaders are customer-centric and understand how to create a personalised customer experience using customer data. Modern business leaders are data-driven and understand how to make decisions based on probabilistic outcomes, not just gut feel. Modern business leaders understand what it takes to develop and deploy artificial intelligence in their organisation. So, how do we educate our future business leaders to be analytics literate, technically capable and able to design and use AI effectively and responsibly? I recently spoke to Professor Hind Benbya to answer this question and many more relating to educating our future business leaders. Hind is the Head of the Department of Information Systems & Business Analytics at Deakin University, where she leads the strategic direction of the department as well as academic aspects of teaching, research and industry engagement. In this episode of Leaders of Analytics, you will learn:
The critical must-learn skills for students wanting to shape the future of business with data and analytics
The role of data, analytics and AI in business 10 years from now and how today’s business leaders must prepare
How we bring today’s business leaders and executives up to speed with data and analytics
How analytics leaders can drive their organisations to become truly data-driven, and much more.
Hind on LinkedIn: https://www.linkedin.com/in/hindbenbya/ Hind's research and publications: https://scholar.google.com/citations?user=KNAW0xsAAAAJ&hl=en Deakin's Department of Information Systems & Business Analytics: https://www.deakin.edu.au/business/department-of-information-systems-and-business-analytics
Nick Bunker, Research Director of Indeed, joins the podcast to share his views on the U.S. labor market, including unemployment, job openings, quits, and remote work. Nick stumps Mark, Cris, and Ryan in the statistics game. Full transcript Follow Nick Bunker on Twitter @Nick_Bunker. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.
Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.
Discover the power of business intelligence through Databricks SQL. This comprehensive guide explores the features and tools of the Databricks Lakehouse Platform, emphasizing how it leverages data lakes and warehouses for scalable analytics. You'll gain hands-on experience with Databricks SQL, enabling you to manage data efficiently and implement cutting-edge analytical solutions.
What this Book will help me do
Comprehend the core features of Databricks SQL and its role in the Lakehouse architecture.
Master the use of Databricks SQL for conducting scalable and efficient data queries.
Implement data management techniques, including security and cataloging, with Databricks.
Optimize data performance using Delta Lake and Photon technologies with Databricks SQL.
Compose advanced SQL scripts for robust data ingestion and analytics workflows.
Author(s)
Vihag Gupta, acclaimed data engineer and BI expert, brings a wealth of experience in large-scale data analytics to this work. With a career deeply rooted in cutting-edge data warehousing technologies, Vihag combines expertise with an approachable teaching style. This book reflects his commitment to empowering data professionals with tools for next-gen analytics.
Who is it for?
Ideal for data engineers, business intelligence analysts, and warehouse administrators aiming to enhance their practice with Databricks SQL. This book suits those with fundamental knowledge of SQL and data platforms seeking to adopt Lakehouse methodologies. Whether a novice to Databricks or looking to master advanced features, this guide will support professional growth.
In this episode, Jason talks to Dr. Tiffany Perkins-Munn, the Head of Marketing, Data, and Analytics for JP Morgan Chase. They discuss the role of critical thinking in data and analytics, how to use critical thinking to move from vision to outcome, and whether critical thinking is a skill that can be learned. Tiffany shares her brilliant experience and Ph.D. expertise, the importance of finding a balance between critical thinking and quick progression, and why being willing to question everything through critical thinking can open up great new ideas and possibilities.
Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] or, for a faster response, complete this form and tell us why you should be next.
Abstract Making Data Simple Podcast is hosted by Al Martin, WW VP Account Technical Leader IBM Technology Sales, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. This week on Making Data Simple, we have Ayal Steinburg, VP, WW Data, AI, and Automation Sales Leader Global Markets. Ayal started off in music and then in the late 1990s shifted to retail, where he learned about data and analytics. In the past 20 years Ayal has held various sales roles during his career.
Show Notes
9:18 – Ayal’s history
11:50 – Ayal talks about his portfolio
16:16 – Market expansion and reducing costs
19:02 – Platform and one product
21:50 – Why IBM technologies?
24:20 – Why are customers moving data?
27:56 – Is “Switzerland” a hard or easy sell?
30:52 – What is your biggest challenge right now?
IBM
Connect with the Team Producer Kate Brown - LinkedIn. Host Al Martin - LinkedIn and Twitter. Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next.
In today’s episode, we’re joined by Joyce Durst. Joyce is the CEO and Co-Founder of Growth Acceleration Partners (GAP), a strategic software delivery partner based in Austin, Texas.
We talk about:
This episode is brought to you by Qrvey
The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.
Qrvey, the modern no-code analytics solution for SaaS companies on AWS.
Doug Holtz-Eakin, President of the American Action Forum, joins the podcast to provide his take on the U.S. economy, inflation, employment, and GDP. The big topic is fiscal policy, while everyone provides their odds of a recession. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.
Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.
For the first time on Data Unchained, we have two guests! Doug Laney, Data Strategy and Analytics Innovation at West Monroe and author of the book 'Data Juice,' and our returning guest David Flynn, CEO of Hammerspace, join our host, Molly Presley, to discuss how data is the new oil in the marketplace, its challenges, and how we can automate infrastructures to better manage data as it grows in value.
We are giving away 3 to 4 copies of Doug's book, 'Data Juice,' for a limited time after this episode is published! If you would like a copy, please reach out to Doug on LinkedIn. Here is the link: https://www.linkedin.com/in/douglaney/
#analytics #Data #dataeconomy #dataliteracy #datamonetization
Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US Hosted on Acast. See acast.com/privacy for more information.
In this episode, Jason Foster talks to Stephen Galsworthy, Head of Data at TomTom, a leading provider of mapping and location technology. They discuss the gradual integration of artificial intelligence (AI) into data products to create a better user experience, how TomTom navigated the shift from hardware to software and AI, and the challenges associated with integrating AI with data. Stephen also shares his brilliant journey in data & analytics, his extensive experience leading data science teams since 2011 and how to align a data team depending on the maturity of the business.
In today’s episode, we’re talking to Rick Spencer. Rick is VP of Product at InfluxData, a platform to help developers build time series-based applications quickly and at scale. We talk about Rick’s background, how InfluxData got started, and the kinds of problems it solves today. Rick describes the differences between building a product for developers and one for non-developers. We go on to discuss the difference between a time series database and a regular database, the benefits of a time series database, the idea of data gravity, and the interaction between engineering and product teams. This episode is brought to you by Qrvey The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com. Qrvey, the modern no-code analytics solution for SaaS companies on AWS.
Today I’m sitting down with Jon Cooke, founder and CTO of Dataception, to learn his definition of a data product and his views on generating business value with your data products. In our conversation, Jon explains his philosophy on data products and where design and UX fit in. We also review his conceptual model for data products (which he calls the data product pyramid), and discuss how, together, these concepts allow teams to more quickly ship working solutions that actually produce value.
Highlights/ Skip to:
Jon’s definition of a data product (1:19)
Brian explains how UX research and design planning can and should influence data architecture, so that last mile solutions are useful and usable (9:47)
The four characteristics of a data product in Jon’s model (16:16)
The idea of products having a lifecycle with direct business/customer interaction/feedback (17:15)
Understanding Jon’s data product pyramid (19:30)
The challenges when customers/users don’t know what they want from data product teams, and who should be doing the work to surface requirements (24:44)
Mitigating risk and the importance of having management buy-in when adopting a product-driven approach (33:23)
Does the data product pyramid account for UX? (35:02)
What needs to change in an org model that produces data products that aren’t delivering good last mile UXs (39:20)
Quotes from Today’s Episode “A data product is something that specifically solves a business problem, a piece of analytics, data use case, a pipeline, datasets, dashboard, that type that solves a business use case, and has a customer, and as a product lifecycle to it.” - Jon (2:15)
“I’m a fan of any definition that includes some type of deployment and use by some human being. That’s the end of the cycle, because the idea of a product is a good that has been made, theoretically, for sale.” - Brian (5:50)
“We don’t build a lot of stuff around cloud anymore. We just don’t build it from scratch. It’s like, you know, we don’t generate our own electricity, we don’t mill our own flour. You know, the cloud—there’s a bunch of composable services, which I basically pull together to build my application, whatever it is. We need to apply that thinking all the way through the stack, fundamentally.” - Jon (13:06)
“It’s not a data science problem, it’s not a business problem, it’s not a technology problem, it’s not a data engineering problem, it’s an everyone problem. And I advocate small, multidisciplinary teams, which have a business value person in it, have an SME, have a data scientist, have a data architect, have a data engineer, as a small pod that goes in and answer those questions.” - Jon (26:28)
“The idea is that you’re actually building the data products, which are the back-end, but you’re actually then also doing UX alongside that, you know? You’re doing it in tandem.” - Jon (37:36)
“Feasibility is one of the legs of the stools. There has to be market need, and your market just may be the sales team, but there needs to be some promise of value there that this person is really responsible for at the end of the day, is this data product going to create value or not?” - Brian (42:35)
“The thing about data products is sometimes you don’t know how feasible it is until you actually look at the data…You’ve got to do what we call data archaeology. You got to go and find the data, you got to brush it off, and you’re looking at and go, ‘Is it complete?’” - Jon (44:02)
Links Referenced:
Dataception
Data Product Pyramid
Email: [email protected]
LinkedIn: https://www.linkedin.com/in/jon-cooke-096bb0/
Data literacy is increasingly becoming a skill that every role needs to have, regardless of whether that role is data-oriented or not. No one knows this better than Jordan Morrow, who is known as the Godfather of Data Literacy.
Jordan is the VP and Head of Data Analytics at Brainstorm, Inc., and is the author of Be Data Literate: The Skills Everyone Needs to Succeed. Jordan has been a fierce advocate for data literacy throughout his career, including helping the United Nations understand and utilize data literacy effectively.
Throughout the episode, we define data literacy, discuss why organizations need data literacy in order to use data properly and drive business impact, explore how to increase organizational data literacy, and more.
This episode of DataFramed is a part of DataCamp’s Data Literacy Month, where we raise awareness for Data Literacy throughout the month of September through webinars, workshops, and resources featuring thought leaders and subject matter experts that can help you build your data literacy, as well as your organization’s. For more information, visit: https://www.datacamp.com/data-literacy-month/for-teams
Summary The global climate impacts everyone, and the rate of change introduces many questions that businesses need to consider. Getting answers to those questions is challenging, because the climate is a multidimensional and constantly evolving system. Sust Global was created to provide curated data sets for organizations to be able to analyze climate information in the context of their business needs. In this episode Gopal Erinjippurath discusses the data engineering challenges of building and serving those data sets, and how they are distilling complex climate information into consumable facts so you don’t have to be an expert to understand it.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. Data stacks are becoming more and more complex. This brings infinite possibilities for data pipelines to break and a host of other issues, severely deteriorating the quality of the data and causing teams to lose trust. Sifflet solves this problem by acting as an overseeing layer to the data stack – observing data and ensuring it’s reliable from ingestion all the way to consumption. Whether the data is in transit or at rest, Sifflet can detect data quality anomalies, assess business impact, identify the root cause, and alert data teams on their preferred channels. All thanks to 50+ quality checks, extensive column-level lineage, and 20+ connectors across the data stack. In addition, data discovery is made easy through Sifflet’s information-rich data catalog with a powerful search engine and real-time health statuses. Listeners of the podcast will get $2000 to use as platform credits when signing up to use Sifflet. Sifflet also offers a 2-week free trial. Find out more at dataengineeringpodcast.com/sifflet today! The biggest challenge with modern data systems is understanding what data you have, where it is located, and who is using it.
Select Star’s data discovery platform solves that out of the box, with an automated catalog that includes lineage from where the data originated, all the way to which dashboards rely on it and who is viewing them every day. Just connect it to your database/data warehouse/data lakehouse/whatever you’re using and let them do the rest. Go to dataengineeringpodcast.com/selectstar today to double the length of your free trial and get a swag package when you convert to a paid plan.
Two colleagues and regulars on Inside Economics, Marisa DiNatale and Dante DeAntonio, join Mark, Cris, and Ryan to break down the August U.S. Employment Report and what it means for the Federal Reserve. Due to a bad Wi-Fi connection, Mark is forced to participate all episode by cell phone. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.
Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.
In today’s episode, we’re talking to Mariano Jurich, Project and Product Manager at Making Sense — a platform for developing game-changing software solutions. We talk about the history of Making Sense and what the company is working on today, how to recognize when a new customer is a good fit, understanding the many types of users that use your product over time, and the importance of focusing on user experience. We go on to discuss the reasons why a company with a strong product-market fit might still struggle to achieve success, how remote work could shape the future of software development, and how the software industry in Latin America specifically looks set to change. Finally, we talk about how the growth of Web3 will impact software development, lead to greater democratization, and drive a more globalized world. This episode is brought to you by Qrvey The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com. Qrvey, the modern no-code analytics solution for SaaS companies on AWS.
Part 2: Moneyball yet again! Nancy Hensley, Chief Marketing Officer for Stats Perform, talks sports analytics offerings, what sports use data the best, AND find out which gender is better at sports analytics. I wonder...
Show Notes
00:45 Optivision
06:57 Which sport gets data? The NFL?
15:48 Was Covid good or bad for business?
19:16 Which gender is better at Sports Analytics??!
21:45 Guess who Nancy is related to?
LinkedIn: https://www.linkedin.com/in/nancyhensley/ Website: https://statsperform.com/
Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.