D&A leaders must develop DataOps as an essential practice to redefine their data management operations. This involves establishing business value before pursuing significant data engineering initiatives, and preventing different teams from duplicating effort in managing the common metadata, security, and observability of information assets within their data platforms.
Moving AI projects from pilot to production requires substantial effort for most enterprises. AI Engineering provides the foundation for enterprise delivery of AI and generative AI solutions at scale, unifying DataOps, MLOps, and DevOps practices. This session will highlight AI engineering best practices across these dimensions, covering people, processes, and technology.
Discover how organizations are transforming into AI-applied companies. Explore strategies for optimizing, improving, and innovating GCP operations. Learn from a holistic approach to integrating GenAI, ML, and DataOps, and the capabilities this enables for a GCP strategy. Examine enhancements in serverless tech, automation, and security. Don't miss out on this opportunity to be inspired and informed.
Summary In this episode of the Data Engineering Podcast Pete DeJoy, co-founder and product lead at Astronomer, talks about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3. Pete shares his journey into data engineering, discusses Astronomer's contributions to the Airflow project, and highlights the critical role of Airflow in powering operational data products. He covers the evolution of Airflow, its position in the data ecosystem, and the challenges faced by data engineers, including infrastructure management and observability. The conversation also touches on the upcoming Airflow 3 release, which introduces data awareness, architectural improvements, and multi-language support, and Astronomer's observability suite, Astro Observe, which provides insights and proactive recommendations for Airflow users.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Pete DeJoy about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3.

Interview
- Introduction
- Can you describe what Astronomer is and the story behind it?
- How would you characterize the relationship between Airflow and Astronomer?
- Astronomer just released your State of Airflow 2025 Report yesterday, and it is the largest data engineering survey ever, with over 5,000 respondents. Can you talk a bit about the top-level findings in the report?
- What about the overall growth of the Airflow project over time?
- How have the focus and features of Astronomer changed since it was last featured on the show in 2017?
- Astro Observe went GA in early February; what does the addition of pipeline observability mean for your customers?
- What other capabilities similar in scope to observability is Astronomer looking at adding to the platform?
- Why is Airflow so critical in providing an elevated observability (or cataloging, or something similar) experience in a DataOps platform?
- What are the notable evolutions in the Airflow project and ecosystem in that time?
- What are the core improvements that are planned for Airflow 3.0?
- What are the most interesting, innovative, or unexpected ways that you have seen Astro used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Airflow and Astro?
- What do you have planned for the future of Astro/Astronomer/Airflow?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- Astronomer
- Airflow
- Maxime Beauchemin
- MongoDB
- Databricks
- Confluent
- Spark
- Kafka
- Dagster (Podcast Episode)
- Prefect
- Airflow 3
- The Rise of the Data Engineer blog post
- dbt
- Jupyter Notebook
- Zapier
- cosmos library for dbt in Airflow
- Ruff
- Airflow Custom Operator
- Snowflake

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
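For readers who haven't used Airflow, a minimal DAG gives a sense of what the episode's "pipelines" look like in code. This is a toy sketch using the TaskFlow API from Airflow 2.x; the pipeline name, data, and logic are hypothetical illustrations, not anything from the episode.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def orders_pipeline():
    """A toy three-step pipeline: extract, transform, load."""

    @task
    def extract() -> list[dict]:
        # Placeholder for pulling records from a source system.
        return [{"order_id": 1, "amount": 42.0}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Keep only positive-amount orders.
        return [r for r in records if r["amount"] > 0]

    @task
    def load(records: list[dict]) -> None:
        # Placeholder for writing to a warehouse table.
        print(f"Loading {len(records)} records")

    load(transform(extract()))


orders_pipeline()
```

Each decorated function becomes a task, and passing return values between them wires up the dependency graph that Airflow schedules and observes.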
In this podcast episode, we talked with Agita Jaunzeme about career choices, transitions, and promotions in and out of tech.
About the Speaker:
Agita has designed a career spanning DevOps/DataOps engineering, management, community building, education, and facilitation. She has worked on projects across corporate, startup, open source, and non-governmental sectors. Following her passion, she founded an NGO focusing on the inclusion of expats and locals in Porto. Embodying the values of innovation, automation, and continuous learning, Agita provides practical insights on promotions, career pivots, and aligning work with passion and purpose.
During this event, Agita discussed her career journey, starting with her transition from art school to programming and later into DevOps, eventually taking on leadership roles. She explored the challenges of burnout and the importance of volunteering, founding an NGO to support inclusion, gender equality, and sustainability. The conversation also covered key topics like mentorship, the differences between data engineering and data science, and the dynamics of managing volunteers versus employees. Additionally, she shared insights on community management, developer relations, and the importance of product vision and team collaboration.
0:00 Introduction and Welcome 1:28 Guest Introduction: Agita’s Background and Career Highlights 3:05 Transition to Tech: From Art School to Programming 5:40 Exploring DevOps and Growing into Leadership Roles 7:24 Burnout, Volunteering, and Founding an NGO 11:00 Volunteering and Mentorship Initiatives 14:00 Discovering Programming Skills and Early Career Challenges 15:50 Automating Work Processes and Earning a Promotion 19:00 Transitioning from DevOps to Volunteering and Project Management 24:00 Managing Volunteers vs. Employees and Building Organizational Skills 31:07 Personality traits in engineering vs. data roles 33:14 Differences in focus between data engineers and data scientists 36:24 Transitioning from volunteering to corporate work 37:38 The role and responsibilities of a community manager 39:06 Community management vs. developer relations activities 41:01 Product vision and team collaboration 43:35 Starting an NGO and legal processes 46:13 NGO goals: inclusion, gender equality, and sustainability 49:02 Community meetups and activities 51:57 Living off-grid in a forest and sustainability 55:02 Unemployment party and brainstorming session 59:03 Unemployment party: the process and structure
🔗 CONNECT WITH AGITA JAUNZEME LinkedIn: /agita
🔗 CONNECT WITH DataTalksClub Join DataTalks.Club: https://datatalks.club/slack.html Our events: https://datatalks.club/events.html Datalike Substack - https://datalike.substack.com/ LinkedIn: / datatalks-club
🌟 Session Overview 🌟
Session Name: Open Source Entity Resolution - Needs and Challenges
Speaker: Sonal Goyal
Session Description: Real-world data contains multiple records belonging to the same customer. These records can be in single or multiple systems, and they have variations across fields, which makes it hard to combine them, especially with growing data volumes. This hurts customer analytics: establishing lifetime value, loyalty programs, or marketing channels is impossible when the base data is not linked. No AI algorithm for segmentation can produce the right results when there are multiple copies of the same customer lurking in the data. No warehouse can live up to its promise if the dimension tables have duplicates.
With a modern data stack and DataOps, we have established patterns for the E and L in ELT for building data warehouses, data lakes, and delta lakes. However, the T, getting data ready for analytics, still needs a lot of effort. Modern tools like dbt are actively and successfully addressing this. What is also needed is a quick and scalable way to resolve entities, to build the single source of truth of core business entities post-extraction and pre- or post-loading.
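To make the matching problem concrete before the session description, here is a minimal pairwise-matching sketch in Python. It illustrates the core idea only and is not Zingg's API: real entity resolution systems add blocking keys and learned similarity functions to scale beyond toy data.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical customer records with near-duplicate entries.
customers = [
    {"id": 1, "name": "Jon Smith", "email": "jon.smith@example.com"},
    {"id": 2, "name": "John Smith", "email": "jsmith@example.com"},
    {"id": 3, "name": "Ana Gomez", "email": "ana.gomez@example.com"},
]

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Naive O(n^2) comparison; real systems use blocking (e.g., group by
# a surname prefix) to avoid comparing every pair of records.
matches = [
    (x["id"], y["id"])
    for x, y in combinations(customers, 2)
    if similarity(x["name"], y["name"]) > 0.85
]
print(matches)  # [(1, 2)] -> records 1 and 2 likely refer to the same customer
```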
This session will cover the problem of entity resolution, its practical applications, and the challenges in building an entity resolution system. It will also cover Zingg, an open source framework for building entity resolution systems (https://github.com/zinggAI/zingg/).

🚀 About Big Data and RPA 2024 🚀
Unlock the future of innovation and automation at Big Data & RPA Conference Europe 2024! 🌟 This unique event brings together the brightest minds in big data, machine learning, AI, and robotic process automation to explore cutting-edge solutions and trends shaping the tech landscape. Perfect for data engineers, analysts, RPA developers, and business leaders, the conference offers dual insights into the power of data-driven strategies and intelligent automation. 🚀 Gain practical knowledge on topics like hyperautomation, AI integration, advanced analytics, and workflow optimization while networking with global experts. Don’t miss this exclusive opportunity to expand your expertise and revolutionize your processes—all from the comfort of your home! 📊🤖✨
📅 Yearly Conferences: Curious about the evolution of big data? Check out our archive of past Big Data & RPA sessions. Watch the strategies and technologies evolve in our videos! 🚀
🔗 Find Other Years' Videos:
2023 Big Data Conference Europe: https://www.youtube.com/playlist?list=PLqYhGsQ9iSEpb_oyAsg67PhpbrkCC59_g
2022 Big Data Conference Europe Online: https://www.youtube.com/playlist?list=PLqYhGsQ9iSEryAOjmvdiaXTfjCg5j3HhT
2021 Big Data Conference Europe Online: https://www.youtube.com/playlist?list=PLqYhGsQ9iSEqHwbQoWEXEJALFLKVDRXiP
💡 Stay Connected & Updated 💡
Don’t miss out on any updates or upcoming event information from Big Data & RPA Conference Europe. Follow us on our social media channels and visit our website to stay in the loop!
🌐 Website: https://bigdataconference.eu/, https://rpaconference.eu/
👤 Facebook: https://www.facebook.com/bigdataconf, https://www.facebook.com/rpaeurope/
🐦 Twitter: @BigDataConfEU, @europe_rpa
🔗 LinkedIn: https://www.linkedin.com/company/73234449/, https://www.linkedin.com/company/75464753/
🎥 YouTube: http://www.youtube.com/@DATAMINERLT
This book, "Building Modern Data Applications Using Databricks Lakehouse," provides a comprehensive guide for data professionals to master the Databricks platform. You'll learn to effectively build, deploy, and monitor robust data pipelines with Databricks' Delta Live Tables, empowering you to manage and optimize cloud-based data operations effortlessly. What this Book will help me do Understand the foundations and concepts of Delta Live Tables and its role in data pipeline development. Learn workflows to process and transform real-time and batch data efficiently using the Databricks lakehouse architecture. Master the implementation of Unity Catalog for governance and secure data access in modern data applications. Deploy and automate data pipeline changes using CI/CD, leveraging tools like Terraform and Databricks Asset Bundles. Gain advanced insights in monitoring data quality and performance, optimizing cloud costs, and managing DataOps tasks effectively. Author(s) Will Girten, the author, is a seasoned Solutions Architect at Databricks with over a decade of experience in data and AI systems. With a deep expertise in modern data architectures, Will is adept at simplifying complex topics and translating them into actionable knowledge. His books emphasize real-time application and offer clear, hands-on examples, making learning engaging and impactful. Who is it for? This book is geared towards data engineers, analysts, and DataOps professionals seeking efficient strategies to implement and maintain robust data pipelines. If you have a basic understanding of Python and Apache Spark and wish to delve deeper into the Databricks platform for streamlining workflows, this book is tailored for you.
Snowflake had a big challenge: How do you enable a team of 1,000 sales engineers and field CTOs to successfully deploy over 100 new data products per week and demonstrate every feature and capability in the Snowflake AI Data Cloud tailored to different customer needs?
In this session, Andrew Helgeson, Manager of Technology Platform Alliances at Snowflake, and Guy Adams, CTO at DataOps.live, will explain how Snowflake builds and deploys hundreds of data products using DataOps.live. Join us for a deep dive into Snowflake's innovative approach to automating complex data product deployment — and to learn how Snowflake Solutions Central revolutionizes solution discovery and deployment to drive customer success.
Big data has moved beyond being just a buzzword; it's now at the heart of modern business strategies. When used effectively and efficiently, data can open up new revenue opportunities, provide deep insights, and even drive social impact. As digital transformation accelerates, data is no longer just a tool—it's woven into the fabric of every part of an organization. Designing and maintaining a tier 1 data platform has become essential to staying ahead of the competition.
Especially with AI-driven applications on the rise, the convergence of DevSecOps and DataOps is becoming increasingly critical. The recent global disruption caused by a security company's mistake was a wake-up call—highlighting just how high the stakes can be. Building and scaling data platforms isn't enough; security and scalability need to be integral to the entire data lifecycle.
Bringing more than a decade of SRE experience to maintaining and managing top enterprise software, we will discuss how to tear down silos and encourage collaboration among development, security, operations, and data teams. By doing so, organizations can achieve unprecedented levels of reliability and security. Integrating DevSecOps with DataOps doesn't just automate and protect data operations—it also safeguards data integrity, privacy, and compliance, even as data environments expand in size and complexity. In today's competitive market, this proactive stance is what will set the leaders apart from the rest.
Main Actionable Takeaways:
• Cultivate a Collaborative Culture
• Prioritize Resilience and Recovery
• Integrate Security Seamlessly into the Data Pipeline (see the sketch below)
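As one concrete reading of that last takeaway, here is a minimal, hypothetical pre-deployment gate in Python that pairs a data-quality check with a crude hard-coded-credential scan. Real pipelines would delegate to dedicated quality and secret-scanning tools, but the shape of the gate is the same: any violation fails the CI job before a deploy can happen.

```python
import re
import sys

def check_quality(rows: list[dict]) -> list[str]:
    """Return a list of data-quality violations for a batch."""
    errors = []
    for i, row in enumerate(rows):
        if row.get("amount") is None or row["amount"] < 0:
            errors.append(f"row {i}: invalid amount {row.get('amount')!r}")
    return errors

SECRET_PATTERN = re.compile(r"(?i)(password|api[_-]?key)\s*=\s*\S+")

def scan_for_secrets(config_text: str) -> list[str]:
    """Flag hard-coded credentials before anything ships."""
    return [m.group(0) for m in SECRET_PATTERN.finditer(config_text)]

if __name__ == "__main__":
    rows = [{"amount": 10.0}, {"amount": -1.0}]      # stand-in batch
    config = "host=db.internal\npassword=hunter2\n"  # stand-in config
    problems = check_quality(rows) + scan_for_secrets(config)
    if problems:
        print("Gate failed:", *problems, sep="\n  ")
        sys.exit(1)  # non-zero exit fails the CI job, blocking the deploy
```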
Learn how to cut your dev time with DataOps.live. Iterate without a local dev environment, test before sharing, and use CI/CD to operationalize.
Have you ever wondered how to build trusted data products without writing a single line of code? Do you know how to do that for a Snowflake Native App? Learn how to cut your development loop short with DataOps.live. Iterate on your implementation without ever setting up a local development environment, test it before sharing it with your team members, and finally, use CI/CD to operationalize the result as a data pipeline. Automated tests establish trust with your business stakeholders and catch data and schema drift over time in your scheduled data pipeline.
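A schema-drift check of the kind described can be quite small. Below is a minimal sketch, assuming a DB-API cursor against a warehouse that supports `DESCRIBE TABLE` with Snowflake-style output; the expected schema is a hypothetical contract, not part of any DataOps.live API.

```python
# Hypothetical contract: the columns and types the pipeline promises downstream.
EXPECTED_SCHEMA = {
    "order_id": "NUMBER",
    "customer_id": "NUMBER",
    "amount": "FLOAT",
}

def check_schema_drift(cursor, table: str) -> list[str]:
    """Compare a live table's columns against the expected contract."""
    cursor.execute(f"DESCRIBE TABLE {table}")
    # Snowflake-style rows: (name, type, ...); strip precision like NUMBER(38,0).
    actual = {row[0].lower(): row[1].split("(")[0] for row in cursor.fetchall()}
    drift = []
    for column, dtype in EXPECTED_SCHEMA.items():
        if column not in actual:
            drift.append(f"missing column: {column}")
        elif actual[column] != dtype:
            drift.append(f"type changed: {column} {dtype} -> {actual[column]}")
    for column in actual.keys() - EXPECTED_SCHEMA.keys():
        drift.append(f"unexpected column: {column}")
    return drift
```

Run on a schedule alongside the pipeline, a check like this turns silent schema drift into a failing test your stakeholders never see.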
Learn how top companies combine developer experience with DataOps principles to increase productivity & accelerate data product development.
The developer experience, or DevEx, has become an essential pillar in data product development. It not only facilitates the work of the development team but also drives innovation and efficiency.
During this session, we will learn from IDC award-winning data platform consultant Paul Rankin how some of Switzerland's top companies are turning their attention to the developer experience, in combination with true DataOps principles, to increase productivity and accelerate data product development cycles.
Host: Hi everyone, welcome to our event. This event is brought to you by DataTalks.Club, which is a community of people who love data, and we have weekly events; today's is one of them. And I guess we are also a community of people who like to wake up early if you're from the States, right, Christopher? Or maybe not so much, because this is the time we usually have our events. For our guests and presenters from the States we usually do it in the evening, Berlin time, but unfortunately that kind of slipped my mind. Anyways, we have a lot of events; you can check them in the description, there's a link. I don't think there are a lot of them on that link right now, but we will be adding more and more; I think we have five or six interviews scheduled, so keep an eye on that. Do not forget to subscribe to our YouTube channel; that way you will get notified about all our future streams, which will be as awesome as the one today. And of course, very important: do not forget to join our community, where you can hang out with other data enthusiasts. During today's interview you can ask any question; there's a pinned link in the live chat, so click on that link, ask your question, and we will be covering these questions during the interview. Now I will stop sharing my screen. And there's a message from Christopher; we actually have this on YouTube, but viewers have not seen what you wrote, so to anyone watching right now, there is a message from Christopher saying hello everyone. Can I call you Chris?

Chris: Okay, I should look on YouTube then.

Host: You don't need to; you'll need to focus on answering questions, and I'll be keeping an eye on all the questions. So yeah, if you're ready, we can start.

Chris: I'm ready, yeah.

Host: And you prefer Christopher, not Chris, right?

Chris: Chris is fine. Chris is fine, it's a bit shorter.

Host: Okay, so this week we'll talk about DataOps again. Maybe it's a tradition that we talk about DataOps once per year, but we actually skipped one year, because we haven't had Chris for some time. So today we have a very special guest, Christopher. Christopher is the co-founder, CEO, and head chef at DataKitchen, with 25 years of experience; maybe that's outdated, because probably now you have more, and maybe you stopped counting, I don't know, but with tons of years of experience in analytics and software engineering. Christopher is known as the co-author of the DataOps Cookbook and the DataOps Manifesto, and it's not the first time we have Christopher here on the podcast; we interviewed him two years ago, also about DataOps, and this one will be about DataOps too. So we'll catch up and see what actually changed in these two years. And yeah, welcome to the interview.

Chris: Well, thank you for having me. I'm happy to be here and talking all things related to DataOps, and why bother with DataOps, and happy to talk about the company, or what's changed. Excited.

Host: Yeah, so let's dive in. The questions for today's interview were prepared by Johanna Berer, as always; thanks, Johanna, for your help. So before we start with our main topic for today, DataOps, let's start with your background. Can you tell us about your career journey so far? For those who have not listened to the previous podcast, maybe you can talk about yourself, and for those who did listen, you can also give a summary of what has changed in the last two years.

Chris: Will do. Yeah, so my name is Chris, and I guess I'm sort of an engineer. I spent about the first 15 years of my career in software, working on and building some AI systems and some non-AI systems at the US's NASA and MIT Lincoln Lab, then some startups, and then Microsoft. And then about 2005 I got the data bug. I think, you know, my kids were small and I thought, oh, this data thing would be easy and I'd be able to go home for dinner at 5 and life would be fine.

Host: Because you started your own company, right?

Chris: And it didn't work out that way. And what was interesting is that, for me, the problem wasn't doing the data. We had smart people who did data science and data engineering, the act of creating things. It was the systems around the data that were hard. It was really hard to not have errors in production. I had a BlackBerry at the time, and I would not look at it all morning; I had this long drive to work, and I'd sit in the parking lot, take a deep breath, look at my BlackBerry and go: uh oh, is there going to be any problems today? If there wasn't, I'd walk in very happy, and if there was, I'd have to brace myself. And then the second problem was that the team I worked for just couldn't go fast enough. The customers were super demanding; they didn't care, they always thought things should be faster, and we were always behind. So how do you live in that world where things are breaking left and right, you're terrified of making errors, and second, you just can't go fast enough?

Host: And this was the pre-Hadoop era, right? Before all this big data tech.

Chris: Yeah, before all that. We were using SQL Server, and we actually, you know, we had smart people, so we built an engine that made SQL Server a columnar database; we built a columnar database inside of SQL Server in order to make certain things fast. And yeah, it's not bad; I mean, the principles are the same, right? Before Hadoop, it's still a database; there are still indexes, there are still queries, things like that. At the time you would use OLAP engines; we didn't use those, but those reports, you know, or the models, it's not that different. We had a rack of servers instead of the cloud. So what I took from that was that it's just hard to run a team of people doing data and analytics. I took it from a manager's perspective: I started to read Deming and think about the work that we do as a factory, a factory that produces insight and not automobiles. So how do you run that factory so it produces things that are of good quality? And then, second, since I had come from software, I've been very influenced by the DevOps movement: how you automate deployment, how you run in an agile way, how you change things quickly and how you innovate. And those two things, running a really good, solid production line that has very low errors, and then changing that production line very, very often, are kind of opposite, right? So how do you, as a manager, how do you technically approach that? And then 10 years ago we started DataKitchen. We've always been a profitable company, so we started off with some customers, started building some software, and realized that we couldn't work any other way, and that the way we work wasn't understood by a lot of people, so we had to write a book and a manifesto to share our methods. So yeah, we've been in business now a little over 10 years.

Host: Oh, that's cool. So let's talk about DataOps. You mentioned DevOps and how you were inspired by that. By the way, do you remember roughly when DevOps as a thing started to appear, when people started calling these principles, and the tools around them, DevOps?

Chris: Yeah. Well, first of all, I had a boss in 1990 at NASA who had this idea: build a little, test a little, learn a lot. That was his mantra, and it made a lot of sense. And then the Agile software manifesto came out, which is very similar, in 2001. And then the first real DevOps was a guy at Twitter who started to do automated deployment, you know, push a button, and that was like 2009-ish, and the first DevOps meetup, I think, was around then. So it's been 15 years, I guess.

Host: I was trying to remember. I started my career in 2010, and my first job was as a Java developer. And I remember, for some things, we would just SFTP to the machine, put the JAR archive there, and then keep our fingers crossed that it doesn't break. It was not really, I wouldn't call it that.

Chris: Right, you were deploying. You had a deploy process, I'd put it that way, yeah.

Host: Right.

Chris: So that was documented too? It was like: put the JAR on production, cross your fingers.

Host: I think there was a page on some internal wiki, yeah, that describes, with passwords and everything, what you should do.

Chris: Yeah. And I think what's interesting is why that changed, right? We laugh at it now, but why didn't you invest in automating deployment, or a whole bunch of automated regression tests that would run? Because I think in software now it would be rare that people wouldn't use CI/CD, that they wouldn't have some automated tests, you know, functional regression tests. That would be the…
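The "put the JAR on production and cross your fingers" deploy that closes the conversation is exactly what a small automated regression suite replaces. A minimal sketch with pytest, where the job and its expectations are hypothetical stand-ins:

```python
import pytest

def run_daily_revenue_job(orders: list[dict]) -> float:
    """Stand-in for the real data job under test."""
    return sum(o["amount"] for o in orders if o["status"] == "paid")

def test_revenue_ignores_unpaid_orders():
    orders = [
        {"amount": 100.0, "status": "paid"},
        {"amount": 50.0, "status": "refunded"},
    ]
    assert run_daily_revenue_job(orders) == pytest.approx(100.0)

def test_revenue_handles_empty_batch():
    # A quiet day should produce zero, not an exception.
    assert run_daily_revenue_job([]) == 0.0
```

Wired into CI, tests like these run on every change, so the deploy only happens when they pass; no fingers crossed.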
Chris Bergh joins me to chat about all things DataOps. We also discuss lean, removing waste from data processes and teams, and much more.
DataKitchen: https://datakitchen.io/
DataOps Manifesto: https://dataopsmanifesto.org/en/
Summary
In this episode of the Data Engineering Podcast, host Tobias Macey welcomes back Chris Bergh, CEO of DataKitchen, to discuss his ongoing mission to simplify the lives of data engineers. Chris explains the challenges faced by data engineers, such as constant system failures, the need for rapid changes, and high customer demands. He delves into the concept of DataOps, its evolution, and the misappropriation of related terms like data mesh and data observability. He emphasizes the importance of focusing on processes and systems rather than just tools to improve data engineering workflows. Chris also introduces DataKitchen's open-source tools, DataOps TestGen and DataOps Observability, designed to automate data quality validation and monitor data journeys in production.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Chris Bergh about his tireless quest to simplify the lives of data engineers.

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what DataKitchen is and the story behind it?
- You helped to define and popularize "DataOps", which then went through a journey of misappropriation similar to "DevOps", and has since faded in use. What is your view on the realities of "DataOps" today?
- Out of the popularized wave of "DataOps" tools came subsequent trends in data observability, data reliability engineering, etc. How have those cycles influenced the way that you think about the work that you are doing at DataKitchen?
- The data ecosystem went through a massive growth period over the past ~7 years, and we are now entering a cycle of consolidation. What are the fundamental shifts that we have gone through as an industry in the management and application of data?
- What are the challenges that never went away?
- You recently open sourced the dataops-testgen and dataops-observability tools. What are the outcomes that you are trying to produce with those projects?
- What are the areas of overlap with existing tools and what are the unique capabilities that you are offering?
- Can you talk through the technical implementation of your new observability and quality testing platform?
- What does the onboarding and integration process look like?
- Once a team has one or both tools set up, what are the typical points of interaction that they will have over the course of their workday?
- What are the most interesting, innovative, or unexpected ways that you have seen dataops-observability/testgen used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on promoting DataOps?
- What do you have planned for the future of your work at DataKitchen?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
- DataKitchen (Podcast Episode)
- NASA
- DataOps Manifesto
- Data Reliability Engineering
- Data Observability
- dbt
- DevOps Enterprise Summit
- Building The Data Warehouse by Bill Inmon (affiliate link)
- dataops-testgen, dataops-observability
- Free Data Quality and Data Observability Certification
- Databricks
- DORA Metrics
- DORA for data

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
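To ground what "automate data quality validation" can mean in practice, here is a minimal profiling-and-drift sketch in Python. It is a generic illustration of the technique, not the actual dataops-testgen API: profile a column once as a baseline, then compare each new run against it and raise alerts on drift.

```python
from statistics import mean

def profile_column(values: list) -> dict:
    """Compute a simple profile used as a baseline for later runs."""
    non_null = [v for v in values if v is not None]
    return {
        "null_rate": 1 - len(non_null) / len(values) if values else 0.0,
        "mean": mean(non_null) if non_null else None,
    }

def detect_anomaly(baseline: dict, current: dict, tolerance: float = 0.1) -> list[str]:
    """Flag drift between a stored baseline profile and today's run."""
    alerts = []
    if current["null_rate"] > baseline["null_rate"] + tolerance:
        alerts.append(f"null rate jumped to {current['null_rate']:.0%}")
    if baseline["mean"] and current["mean"]:
        if abs(current["mean"] - baseline["mean"]) / baseline["mean"] > tolerance:
            alerts.append(f"mean shifted from {baseline['mean']} to {current['mean']}")
    return alerts

# Hypothetical column values from yesterday's run and today's run.
baseline = profile_column([10.0, 12.0, 11.0, None])
today = profile_column([10.0, None, None, 25.0])
print(detect_anomaly(baseline, today))  # both null rate and mean drifted
```

The point of tooling in this space is to generate and schedule checks like these across many tables automatically, instead of hand-writing them one column at a time.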