talk-data.com

Topic: MLOps
Tags: machine_learning, devops, ai
233 tagged activities
Activity trend: 26 peak/qtr (2020-Q1 to 2026-Q1)

Activities: 233 activities · Newest first

We talked about:

- Santona's background
- Focusing on data workflows
- Upsolver vs dbt
- ML pipelines vs data pipelines
- MLOps vs DataOps
- Tools used for data pipelines and ML pipelines
- The "modern data stack" and today's data ecosystem
- Staging the data and the concept of a "lakehouse"
- Transforming the data after staging
- What happens after the modeling phase
- Human-centric vs machine-centric pipelines
- Applying skills learned in academia to ML engineering
- Crafting user personas based on real stories
- A framework of curiosity
- Santona's book and resource recommendations

Links:

LinkedIn: https://www.linkedin.com/in/santona-tuli/
Upsolver website: upsolver.com
Why we built a SQL-based solution to unify batch and stream workflows: https://www.upsolver.com/blog/why-we-built-a-sql-based-solution-to-unify-batch-and-stream-workflows

Free MLOps course: https://github.com/DataTalksClub/mlops-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

We talked about:

- Hugo's background
- Why tools and the companies that run them have wildly different names
- Hugo's other projects besides Metaflow
- Transitioning from educator to DevRel
- What is DevRel?
- DevRel vs marketing
- How DevRel coordinates with developers
- How DevRel coordinates with marketers
- What skills a DevRel needs
- The challenges that come with being an educator
- Becoming a good writer: nature vs nurture
- Hugo's approach to writing and suggestions
- Establishing a goal for your content
- Choosing a form of media for your content
- Is DevRel intercompany or intracompany?
- The Vanishing Gradients podcast
- Finding Hugo online

Links:

Hugo Bowne-Anderson's GitHub: http://hugobowne.github.io/
Vanishing Gradients: https://vanishinggradients.fireside.fm/
MLOps and DevOps: Why Data Makes It Different: https://www.oreilly.com/radar/mlops-and-devops-why-data-makes-it-different/
Evaluate Metaflow for free, right from your browser: https://outerbounds.com/sandbox/


We talked about:

- Antonis' background
- The pros and cons of working for a startup
- Useful skills for working at a startup and the Lean way to work
- How Antonis joined the DataTalks.Club community
- Suggestions for students joining the MLOps course
- Antonis contributing to Evidently AI
- How Antonis started freelancing
- Getting your first clients on Upwork
- Pricing your work as a freelancer
- The process after getting approved by a client
- Wearing many hats as a freelancer and while working at a startup
- Other suggestions for getting clients as a freelancer
- Antonis' thoughts on the Data Engineering course
- Antonis' resource recommendations

Links:

The Lean Startup by Eric Ries: https://theleanstartup.com/
Lean Analytics: https://leananalyticsbook.com/
Designing Machine Learning Systems by Chip Huyen: https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/
Kafka streaming with Python tutorial video by Kris Jenkins: https://youtu.be/jItIQ-UvFI4


We talked about:

- Bart's background
- What is data governance?
- Data dictionaries and data lineage
- Data access management
- How to learn about data governance
- What skills are needed to do data governance effectively
- When an organization needs to start thinking about data governance
- Good data access management processes
- Data masking and the importance of automating data access
- DPO and CISO roles
- How data access management works with a data mesh approach
- Avoiding the role explosion problem
- The importance of data governance integration in DataOps
- Terraform as a stepping stone to data governance
- How Raito can help an organization with data governance
- Open-source data governance tools

Links:

LinkedIn: https://www.linkedin.com/in/bartvandekerckhove/
Twitter: https://twitter.com/Bart_H_VDK
GitHub: https://github.com/raito-io
Website: https://www.raito.io/
Data Mesh Learning Slack: https://data-mesh-learning.slack.com/join/shared_invite/zt-1qs976pm9-ci7lU8CTmc4QD5y4uKYtAA#/shared-invite/email
DataQG website: https://dataqg.com/
DataQG Slack: https://dataqgcommunitygroup.slack.com/join/shared_invite/zt-12n0333gg-iTZAjbOBeUyAwWr8I~2qfg#/shared-invite/email
DMBOK (Data Management Body of Knowledge): https://www.dama.org/cpages/body-of-knowledge
DMBOK wheel describing the data governance activities: https://www.dama.org/cpages/dmbok-2-wheel-images


We talked about:

- Boyan's background
- What is data strategy?
- Due diligence and establishing a common goal
- Designing a data strategy
- Impact assessment, portfolio management, and DataOps
- Data products
- DataOps, Lean, and Agile
- Data Strategist vs Data Science Strategist
- The skills one needs to be a data strategist
- How does one become a data strategist?
- Data strategist as a translator
- Transitioning from a Data Strategist role to a CTO
- Using ChatGPT as a writing co-pilot
- Using ChatGPT as a starting point
- How ChatGPT can help in data strategy
- Pitching a data strategy to a stakeholder
- Setting baselines in a data strategy
- Boyan's book recommendations

Links:

LinkedIn: https://www.linkedin.com/in/angelovboyan/
Twitter: https://twitter.com/thinking_code
GitHub: https://github.com/boyanangelov
Website: https://boyanangelov.com/


ML in Production: What Does Production Even Mean | DagsHub

ABOUT THE TALK: While giving a talk to a group of up-and-coming data scientists, a question that surprised Dean Pleban was: "When you say “production”, what exactly do you mean?"

In this talk, Dean defines what production actually means. He presents a first-principles, step-by-step approach to thinking about deploying a model to production, talks about challenges you might face in each step, and provides further reading if you want to dive deeper into each one.

ABOUT THE SPEAKER: Dean Pleban has a background combining physics and computer science. He’s worked on quantum optics and communication, computer vision, software development and design. He’s currently CEO at DagsHub, where he builds products that enable data scientists to work together and get their models to production, using popular open source tools. He’s also the host of the MLOps Podcast, where he speaks with industry experts about ML in production.

ABOUT DATA COUNCIL: Data Council (https://www.datacouncil.ai/) is a community and conference series that provides data professionals with the learning and networking opportunities they need to grow their careers.

Make sure to subscribe to our channel for the most up-to-date talks from technical professionals on data-related topics, including data infrastructure, data engineering, ML systems, analytics, and AI from top startups and tech companies.

FOLLOW DATA COUNCIL: Twitter: https://twitter.com/DataCouncilAI LinkedIn: https://www.linkedin.com/company/datacouncil-ai/

Why People Started Testing Their Models & Data in CI/CD Pipelines | Deepchecks

ABOUT THE TALK: As machine learning models become more common in production, organizations are recognizing the significance of continuous validation and are integrating automated testing into their CI/CD pipelines to ensure that their models remain relevant and trustworthy. However, with constantly changing data and black-box logic, testing these models can be a daunting task.

In this talk, we explore the common pitfalls of ML models and best practices for testing them. We demonstrate how to use the deepchecks open source package to validate models and data during the research and CI/CD phases.
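The deepchecks package provides ready-made validation suites; the standard-library sketch below only illustrates the kind of gate a CI job might run before promoting a model. The function names (`ci_gate`, `label_drift`) and thresholds are hypothetical, not deepchecks APIs.

```python
# Minimal sketch of a CI-style model/data check. Illustrative only --
# deepchecks offers far richer, ready-made suites for this purpose.

def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def label_drift(train_labels, prod_labels, tolerance=0.1):
    """Flag a class when its frequency shifts by more than `tolerance`."""
    def freqs(labels):
        return {c: labels.count(c) / len(labels) for c in set(labels)}
    f_train, f_prod = freqs(train_labels), freqs(prod_labels)
    classes = set(f_train) | set(f_prod)
    return {c: abs(f_train.get(c, 0) - f_prod.get(c, 0)) > tolerance
            for c in classes}

def ci_gate(y_true, y_pred, train_labels, prod_labels,
            min_accuracy=0.8, tolerance=0.1):
    """Return True if the model passes; a CI job would fail the build otherwise."""
    if accuracy(y_true, y_pred) < min_accuracy:
        return False
    drift = label_drift(train_labels, prod_labels, tolerance)
    return not any(drift.values())
```

A CI pipeline would call such a gate after training and block the merge or deployment when it returns False.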

ABOUT THE SPEAKER: Shir Chorev is the co-founder and CTO of Deepchecks, an MLOps startup for continuous validation of ML models and data. Previously, Shir worked at the Prime Minister’s Office and at Unit 8200, conducting and leading research in various Machine Learning and cybersecurity-related challenges.


Building an ML Experimentation Platform for Easy Reproducibility | Treeverse

ABOUT THE TALK: Quality ML at scale is only possible when we can reproduce a specific iteration of the ML experiment–and this is where data is key.

In this talk, you will learn how to use a data versioning engine to intuitively and easily version your ML experiments and reproduce any specific iteration of the experiment.

This talk will demo, through a live code example:
- Creating a basic ML experimentation framework with lakeFS (in a Jupyter notebook)
- Reproducing ML components from a specific iteration of an experiment
- Building an intuitive, zero-maintenance experiments infrastructure
- All with common data engineering stacks & open-source tooling
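The core idea, commit-style versioning of an experiment so any iteration can be reproduced, can be sketched with the standard library alone. lakeFS itself versions whole object stores with git-like semantics; the class and method names below are hypothetical stand-ins for that pattern.

```python
# Toy sketch of commit-style experiment versioning (illustrative only --
# lakeFS operates on data lakes; these names are hypothetical).
import hashlib
import json

class ExperimentStore:
    """Keep immutable snapshots of (params, data_ref, metrics), keyed by a hash."""
    def __init__(self):
        self._commits = {}

    def commit(self, params, data_ref, metrics):
        # Serialize deterministically so identical inputs yield the same id.
        payload = json.dumps(
            {"params": params, "data_ref": data_ref, "metrics": metrics},
            sort_keys=True,
        )
        commit_id = hashlib.sha256(payload.encode()).hexdigest()[:12]
        self._commits[commit_id] = json.loads(payload)
        return commit_id

    def reproduce(self, commit_id):
        """Return the exact inputs of a past iteration."""
        return self._commits[commit_id]
```

Because the commit id is derived from the content, re-running with the same parameters and data reference produces the same id, which is what makes a specific iteration addressable later.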

ABOUT THE SPEAKER: Vino Duraisamy is a developer advocate at lakeFS, an open-source platform that delivers a git-like experience to object-store-based data lakes. She previously worked at NetApp on data management applications for NetApp data centers, and on the data teams of Nike and Apple, where she worked mainly on batch processing workloads as a data engineer, built custom NLP models as an ML engineer, and even touched on MLOps for model deployments.


Extinguishing the Garbage Fire of ML Testing | Mailchimp

ABOUT THE TALK:
Our traditional testing and CI methods for Data Science are not working, but we can't just give up on providing guardrails.

As engineers, how do you solve ML testing?

In this talk, Emily Curtain discusses:
- Abstracting, decoupling, and separating concerns
- Keeping pytest only where it belongs
- Substituting observability for testing in appropriate places
- Applying data reliability practices, thereby solving some problems at the source
- Honoring Data Scientists' mental models and ways of working
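The "observability instead of testing" point can be sketched simply: rather than asserting model outputs in pytest, record lightweight metrics about each prediction at runtime and inspect them later. This is a hypothetical illustration of the pattern, not anything from the talk's codebase.

```python
# Sketch of observability over testing: wrap the predict function so every
# call records latency and output for later inspection.
import time

class PredictionMonitor:
    """Collect per-call latency and output stats from a wrapped predictor."""
    def __init__(self):
        self.records = []

    def watch(self, predict_fn):
        def wrapped(x):
            start = time.perf_counter()
            result = predict_fn(x)
            self.records.append({
                "latency_s": time.perf_counter() - start,
                "output": result,
            })
            return result
        return wrapped

monitor = PredictionMonitor()

@monitor.watch
def predict(x):
    # Stand-in for a real model: threshold a score.
    return 1 if x > 0.5 else 0
```

In production, the records would flow to a metrics backend instead of a list, but the decoupling is the same: the model code stays untouched while its behavior becomes observable.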

ABOUT THE SPEAKER: Emily Curtin is a Staff MLOps Engineer at Intuit Mailchimp. She leads a crazy good team focused on helping Data Scientists do higher quality work faster and more intuitively.


The Fun Sized MLOps Stack from Scratch | Featureform

ABOUT THE TALK: Learn about "fun-sized" companies (SMBs, small startups, etc.) and how to build a fully fledged MLOps platform from scratch using the best OSS tools out there in under a day.

This talk covers:
- The main problems MLOps tries to solve
- The most common tools being used and their drawbacks
- OSS projects and tools developed in the past 2-3 years, and how they solve some of the pain points of the prior tools
- A realistic roadmap for companies that are forever "not-Google" scale but want to continue improving their data and ML maturity

ABOUT THE SPEAKER: Mikiko Bazeley is Head of MLOps at Featureform, a virtual feature store. She has previously held engineering, data scientist, and data analyst roles at companies including Mailchimp (Intuit), Teladoc, Sunrun, and Autodesk, along with a handful of early-stage startups.


Streaming Data Mesh

Data lakes and warehouses have become increasingly fragile, costly, and difficult to maintain as data gets bigger and moves faster. Data meshes can help your organization decentralize data, giving ownership back to the engineers who produced it. This book provides a concise yet comprehensive overview of data mesh patterns for streaming and real-time data services. Authors Hubert Dulay and Stephen Mooney examine the vast differences between streaming and batch data meshes. Data engineers, architects, data product owners, and those in DevOps and MLOps roles will learn steps for implementing a streaming data mesh, from defining a data domain to building a good data product. Through the course of the book, you'll create a complete self-service data platform and devise a data governance system that enables your mesh to work seamlessly.

With this book, you will:
- Design a streaming data mesh using Kafka
- Learn how to identify a domain
- Build your first data product using self-service tools
- Apply data governance to the data products you create
- Learn the differences between synchronous and asynchronous data services
- Implement self-services that support decentralized data

Models in Natural Language Processing are fun to train but can be difficult to deploy. The size of their models, libraries, and necessary files can be challenging, especially in a microservice environment: when services should be as lightweight and slim as possible, large (language) models can cause a lot of problems. Using a recent real-world use case as an example, one that has run in production for over a year in 10 different languages, I will walk you through my experiences with deploying NLP models. What kinds of pitfalls, shortcuts, and tricks are possible while bringing an NLP model to production?

In this talk, you will learn about different ways to deploy NLP services. I will speak briefly about the path leading from data to model to a running service (without going into much detail) before focusing on the MLOps part at the end. I will take you along on my past journey of struggles and successes so that you don't need to take these detours yourselves.
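One common trick for keeping an NLP microservice slim, which talks like this often touch on, is to load the heavy model lazily and cache it, so the service starts fast and only pays the load cost on first use. The sketch below is hypothetical; a real loader would pull weights from disk, object storage, or a model registry.

```python
# Lazy-loading model wrapper: the expensive load is deferred until the
# first request actually needs it, then cached for subsequent calls.

class LazyModel:
    def __init__(self, loader):
        self._loader = loader
        self._model = None
        self.load_count = 0   # exposed so we can observe the lazy behavior

    def predict(self, text):
        if self._model is None:          # first call pays the load cost
            self._model = self._loader()
            self.load_count += 1
        return self._model(text)

def load_sentiment_model():
    # Stand-in for loading a large NLP model; returns a callable "model".
    positive = {"good", "great", "love"}
    return lambda text: "pos" if set(text.lower().split()) & positive else "neg"

model = LazyModel(load_sentiment_model)
```

The same wrapper shape works regardless of framework, and it keeps container startup (and health checks) independent of model size.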

This talk presents a novel approach to MLOps that combines the benefits of open-source technologies with the power and cost-effectiveness of cloud computing platforms. By using tools such as Terraform, MLflow, and Feast, we demonstrate how to build a scalable and maintainable ML system on the cloud that is accessible to ML Engineers and Data Scientists. Our approach leverages cloud managed services for the entire ML lifecycle, reducing the complexity and overhead of maintenance and eliminating the vendor lock-in and additional costs associated with managed MLOps SaaS services. This innovative approach to MLOps allows organizations to take full advantage of the potential of machine learning while minimizing cost and complexity.
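To make the tracking piece of that lifecycle concrete, here is a standard-library sketch in the spirit of MLflow-style experiment tracking. The real mlflow API uses calls like `mlflow.log_param` and `mlflow.log_metric` against a tracking server; the `Run` class below is a hypothetical stand-in that persists the same kind of record as JSON.

```python
# Minimal experiment-tracking sketch: record params and metrics for one
# training run and persist them for later comparison.
import json
import os

class Run:
    def __init__(self, out_dir, run_id):
        self.path = os.path.join(out_dir, f"{run_id}.json")
        self.data = {"params": {}, "metrics": {}}

    def log_param(self, key, value):
        # Params are logged once per run.
        self.data["params"][key] = value

    def log_metric(self, key, value):
        # Metrics accumulate over steps (e.g., loss per epoch).
        self.data["metrics"].setdefault(key, []).append(value)

    def end(self):
        with open(self.path, "w") as f:
            json.dump(self.data, f)
        return self.path
```

Swapping this file-backed sketch for a managed tracking backend is exactly the kind of choice the talk's Terraform-provisioned, cloud-managed setup is meant to make cheap.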

Everybody knows our yellow vans, trucks, and planes around the world. But do you know how data drives our business and how we leverage algorithms and technology in our core operations? We will share some "behind the scenes" insights on Deutsche Post DHL Group's journey towards a data-driven company:
- Large-scale use cases: challenging, high-impact use cases in all major areas of logistics, including computer vision and NLP
- Fancy algorithms: deep neural networks, TSP solvers, and the standard toolkit of a data scientist
- Modern tooling: cloud platforms, Kubernetes, Kubeflow, AutoML
- No rusty working mode: small, self-organized, agile project teams combining state-of-the-art machine learning with MLOps best practices
- A young, motivated, and international team; German skills are only "nice to have"

But we have more to offer than slides filled with buzzwords. We will demonstrate our passion for our work, deep dive into our largest use cases that impact your everyday life, and share our approach to a time-series forecasting library, combining data science, software engineering, and technology for efficient and easy-to-maintain machine learning projects.

At the boundary of model development and MLOps lies the balance between the speed of deploying new models and ensuring operational constraints. These include factors like low latency prediction, the absence of vulnerabilities in dependencies and the need for the model behavior to stay reproducible for years. The longer the list of constraints, the longer it usually takes to take a model from its development environment into production. In this talk, we present how we seemingly managed to square the circle and have both a rapid, highly dynamic model development and yet also a stable and high-performance deployment.

The nightmare before data science production: You found a working prototype for your problem using a Jupyter notebook and now it's time to build a production grade solution from that notebook. Unfortunately, your notebook looks anything but production grade. The good news is, there's finally a cure!

The open-source Python package LineaPy aims to automate data science workflow generation and expedite the process of going from data science development to production. It truly transforms messy notebooks into pipelines for frameworks like Apache Airflow, DVC, Argo, Kubeflow, and many more. And if you can't find your favorite orchestration framework, you are welcome to work with the creators of LineaPy to contribute a plugin for it!

In this talk, you will learn the basic concepts of LineaPy and how it supports your everyday tasks as a data practitioner. For this purpose, we will transform a notebook step by step together to create a DVC pipeline. Finally, we will discuss what place LineaPy will take in the MLOps universe. Will you only have to check in your notebook in the future?
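The core mechanism behind tools like LineaPy is program slicing: given the artifact you want, keep only the code it actually depends on and emit it in execution order. The toy version below works on an explicit dependency map rather than real notebook code; the step names and the `slice_pipeline` function are hypothetical, purely to illustrate the idea.

```python
# Toy "notebook slicing": from a map of step -> prerequisites, extract only
# the steps needed to produce a target artifact, in execution order.

def slice_pipeline(deps, target):
    ordered, seen = [], set()

    def visit(step):
        if step in seen:
            return
        seen.add(step)
        for prereq in deps.get(step, []):
            visit(prereq)          # prerequisites run before the step itself
        ordered.append(step)

    visit(target)
    return ordered

# A messy notebook often mixes exploration (plot_eda) with the real pipeline.
notebook_deps = {
    "load_data": [],
    "clean_data": ["load_data"],
    "plot_eda": ["clean_data"],      # exploratory; not needed for the model
    "train_model": ["clean_data"],
}
```

Slicing to `train_model` drops `plot_eda` entirely, which is exactly how a cluttered notebook becomes a lean DVC or Airflow pipeline.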

Data Fabric and Data Mesh Approaches with AI: A Guide to AI-based Data Cataloging, Governance, Integration, Orchestration, and Consumption

Understand modern data fabric and data mesh concepts using AI-based self-service data discovery and delivery capabilities, a range of intelligent data integration styles, and automated unified data governance, all designed to deliver "data as a product" within hybrid cloud landscapes. This book teaches you how to successfully deploy state-of-the-art data mesh solutions and gain a comprehensive overview of how a data fabric architecture uses artificial intelligence (AI) and machine learning (ML) for automated metadata management and self-service data discovery and consumption. You will learn how data fabric and data mesh relate to other concepts such as DataOps, MLOps, AIDevOps, and more. Many examples are included to demonstrate how to modernize the consumption of data to enable a shopping-for-data (data as a product) experience. By the end of this book, you will understand the data fabric concept and architecture as it relates to themes such as automated unified data governance and compliance, enterprise information architecture, AI and hybrid cloud landscapes, and intelligent cataloging and metadata management.

What You Will Learn
- Discover best practices and methods to successfully implement a data fabric architecture and data mesh solution
- Understand key data fabric capabilities, e.g., self-service data discovery, intelligent data integration techniques, intelligent cataloging and metadata management, and trustworthy AI
- Recognize the importance of data fabric in accelerating digital transformation and democratizing data access
- Dive into important data fabric topics, addressing current data fabric challenges
- Conceive data fabric and data mesh concepts holistically within an enterprise context
- Become acquainted with the business benefits of data fabric and data mesh

Who This Book Is For
Anyone who is interested in deploying modern data fabric architectures and data mesh solutions within an enterprise, including IT and business leaders, data governance and data office professionals, data stewards and engineers, data scientists, and information and data architects. Readers should have a basic understanding of enterprise information architecture.

Summary

This podcast started almost exactly six years ago, and the technology landscape was much different than it is now. In that time there have been a number of generational shifts in how data engineering is done. In this episode I reflect on some of the major themes and take a brief look forward at some of the upcoming changes.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Your host is Tobias Macey, and today I'm reflecting on the major trends in data engineering over the past 6 years.

Interview

Introduction
6 years of running the Data Engineering Podcast
Around the first time that data engineering was discussed as a role

Followed on from hype about "data science"

Hadoop era Streaming Lambda and Kappa architectures

Not really referenced anymore

"Big Data" era of capture everything has shifted to focusing on data that presents value

Regulatory environment increases risk, better tools introduce more capability to understand what data is useful

Data catalogs

Amundsen and Alation

Orchestration engine

Oozie, etc. -> Airflow and Luigi -> Dagster, Prefect, Flyte, etc.
Orchestration is now a part of most vertical tools

Cloud data warehouses
Data lakes
DataOps and MLOps
Data quality to data observability
Metadata for everything

Data catalog -> data discovery -> active metadata

Business intelligence

Read-only reports to metric/semantic layers
Embedded analytics and data APIs

Rise of ELT

dbt
Corresponding introduction of reverse ETL

What are the most interesting, unexpected, or challenging lessons that you have learned while running the podcast?
What do you have planned for the future of the podcast?

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and co-workers.

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Sponsored by: Materialize

Looking for the simplest way to get the freshest data possible to your teams? Because let's face it: if real-time were easy, everyone would be using it. Look no further than Materialize, the streaming database you already know how to use.

Materialize's PostgreSQL-compatible interface lets users leverage the tools they already use, with unsurpassed simplicity enabled by full ANSI SQL support. Delivered as a single platform with separation of storage and compute, strict serializability, active replication, horizontal scalability, and workload isolation, Materialize is now the fastest way to build products with streaming data, drastically reducing the time, expertise, cost, and maintenance traditionally associated with implementing real-time features.
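What a streaming database maintains under the hood is, in miniature, an incrementally updated view: the aggregate is kept current as events arrive, so reads never rescan the raw data. The sketch below illustrates that idea in plain Python; Materialize itself speaks ANSI SQL, and the class here is hypothetical.

```python
# Miniature "materialized view": an incrementally maintained
# COUNT(*) ... GROUP BY key, updated per event instead of per query.

class MaterializedCount:
    def __init__(self):
        self.counts = {}

    def on_insert(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1

    def on_delete(self, key):
        self.counts[key] -= 1
        if self.counts[key] == 0:
            del self.counts[key]

    def query(self, key):
        # Reads are cheap: the answer is already maintained.
        return self.counts.get(key, 0)
```

The equivalent in SQL would be a `CREATE MATERIALIZED VIEW ... GROUP BY key`, with the engine applying inserts and deletes to the maintained result the way `on_insert`/`on_delete` do here.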

Sign up now for early access to Materialize and get started with the power of streaming data with the same simplicity and low implementation cost as batch cloud data warehouses.

Go to materialize.com

Support Data Engineering Podcast

Operational AI for the Modern Data Stack

The opportunities for AI and machine learning are everywhere in modern businesses, but today's MLOps ecosystem is drowning in complexity. In this talk, we'll show how to use dbt and Continual to scale operational AI — from customer churn predictions to inventory forecasts — without complex engineering or operational burden.

Check the slides here: https://docs.google.com/presentation/d/1vNcQxCjAK4xZVZC1ZHzqBzPiJE7uwhDIVWGeT9Poi1U/edit#slide=id.g15b1f544dd5_0_1500

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.