Moving AI projects from pilot to production requires substantial effort for most enterprises. AI Engineering provides the foundation for delivering AI and generative AI solutions at enterprise scale, unifying DataOps, MLOps and DevOps practices. This session will highlight AI engineering best practices across these dimensions, covering people, processes and technology.
talk-data.com | Topic: MLOps (233 sessions tagged)
Imagine performing complex regulatory checks in minutes instead of days. We made this a reality using GenAI on the Databricks Data Intelligence Platform. Join us for a deep dive into our journey from POC to a production-ready AI audit tool. Discover how we automated thousands of legal requirement checks in annual reports with remarkable speed and accuracy. Learn our blueprint for:
- High-Performance AI: Building a scalable, >90% accurate AI system with an optimized RAG pipeline that auditors praise.
- Robust Productionization: Achieving secure, governed deployment using Unity Catalog, MLflow, LLM-based evaluation, and MLOps best practices.
This session provides actionable insights for deploying impactful, compliant GenAI in the enterprise.
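The retrieve-then-prompt core of such a RAG check can be sketched in a few lines. This is a toy illustration with invented passages and a simple word-overlap scorer; the session's actual pipeline (vector search, LLM calls, Unity Catalog governance) is not shown.

```python
# Illustrative core of a RAG compliance check: retrieve the report passages
# most relevant to a legal requirement, then build a grounded LLM prompt.

def retrieve(requirement: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the requirement (toy scorer;
    a production pipeline would use vector similarity search)."""
    req_words = set(requirement.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(req_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(requirement: str, passages: list[str]) -> str:
    """Assemble a prompt that grounds the LLM's answer in retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Context:\n{context}\n\n"
            f"Does the report satisfy this requirement? {requirement}\n"
            f"Answer yes/no and cite the context.")

passages = [
    "The company discloses executive remuneration in section 4.2.",
    "Revenue grew 12% year over year.",
    "Auditor independence is confirmed in the governance statement.",
]
top = retrieve("Is executive remuneration disclosed?", passages)
print(build_prompt("Is executive remuneration disclosed?", top))
```

Swapping the overlap scorer for embedding similarity and the print for an LLM call is what turns this skeleton into the kind of pipeline the session describes.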
Ready to streamline your ML lifecycle? Join us to explore MLflow 3.0 on Databricks, where we'll show you how to manage everything from experimentation to production with less effort and better results. See how this powerful platform provides comprehensive tracking, evaluation, and deployment capabilities for traditional ML models and cutting-edge generative AI applications. Key takeaways:
- Track experiments automatically to compare model performance
- Monitor models throughout their lifecycle across environments
- Manage deployments with robust versioning and governance
- Implement proven MLOps workflows across development stages
- Build and deploy generative AI applications at scale
Whether you're an MLOps novice or veteran, you'll walk away with practical techniques to accelerate your ML development and deployment.
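As a taste of what experiment tracking buys you, here is a library-free sketch of the comparison step: given runs with logged parameters and metrics, select the best one by a chosen metric. Run IDs, parameters and metric values are invented for illustration; MLflow automates the logging, storage and querying around exactly this kind of selection.

```python
# Toy experiment comparison: each "run" carries logged params and metrics,
# and we pick the winner by a chosen metric.

runs = [
    {"run_id": "a1", "params": {"max_depth": 3},  "metrics": {"rmse": 0.42}},
    {"run_id": "b2", "params": {"max_depth": 6},  "metrics": {"rmse": 0.35}},
    {"run_id": "c3", "params": {"max_depth": 12}, "metrics": {"rmse": 0.39}},
]

def best_run(runs, metric, lower_is_better=True):
    """Return the run with the best value for `metric`."""
    key = lambda r: r["metrics"][metric]
    return min(runs, key=key) if lower_is_better else max(runs, key=key)

winner = best_run(runs, "rmse")
print(winner["run_id"], winner["params"])  # b2 {'max_depth': 6}
```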
Botnet attacks mobilize digital armies of compromised devices that continuously evolve, challenging traditional security frameworks with their high-speed, high-volume nature. In this session, we will reveal our advanced system — developed on the Databricks platform — that leverages cutting-edge AI/ML capabilities to detect and mitigate bot attacks in near-real time. We will dive into the system’s robust architecture, including scalable data ingestion, feature engineering, MLOps strategies & production deployment of the system. We will address the unique challenges of processing bulk HTTP traffic data, time-series anomaly detection and attack signature identification. We will demonstrate key business values through downtime minimization and threat response automation. With sectors like healthcare facing heightened risks, ensuring data integrity and service continuity is vital. Join us to uncover lessons learned while building an enterprise-grade solution that stays ahead of adversaries.
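A rolling z-score detector is a minimal stand-in for the time-series anomaly detection described above: flag any point that deviates sharply from its recent baseline. The traffic numbers, window size and threshold here are illustrative, not the system's actual parameters.

```python
import statistics

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations -- a simple stand-in for production
    time-series anomaly detection on traffic volumes."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean, stdev = statistics.mean(hist), statistics.pstdev(hist)
        if stdev and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Requests per minute: steady baseline, then a bot-driven spike.
traffic = [100, 104, 98, 101, 99, 102, 100, 950, 101, 99]
print(zscore_anomalies(traffic))  # [7]
```

Real bot detection layers many such signals (plus attack-signature matching and learned models) over scalable ingestion, but the baseline-versus-deviation idea is the same.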
Don't miss this session where we demonstrate how the Texas Rangers baseball team is staying one step ahead of the competition by going back to the basics. After implementing a modern data strategy with Databricks and winning the 2023 World Series, the rest of the league quickly followed suit. Now more than ever, data and AI are a central pillar of every baseball team's strategy, driving profound insights into player performance and game dynamics. With a back-to-basics, 'fundamentals win games' focus, join us as we explain our commitment to world-class data quality, engineering, and MLOps by taking full advantage of the Databricks Data Intelligence Platform. From system tables to federated querying, find out how the Rangers use every tool at their disposal to stay one step ahead in the hypercompetitive world of baseball.
Deploying AI models efficiently and consistently is a challenge many organizations face. This session will explore how Vizient built a standardized MLOps stack using Databricks and Azure DevOps to streamline model development, deployment and monitoring. Attendees will gain insights into how Databricks Asset Bundles were leveraged to create reproducible, scalable pipelines and how Infrastructure-as-Code principles accelerated onboarding for new AI projects. The talk will cover:
- End-to-end MLOps stack setup, ensuring efficiency and governance
- CI/CD pipeline architecture, automating model versioning and deployment
- Standardizing AI model repositories, reducing development and deployment time
- Lessons learned, including challenges and best practices
By the end of this session, participants will have a roadmap for implementing a scalable, reusable MLOps framework that enhances operational efficiency across AI initiatives.
DSPy is a framework for authoring GenAI applications with automatic prompt optimization, while MLflow provides powerful MLOps tooling to track, monitor, and productize machine learning workflows. In this lightning talk, we demonstrate how to integrate MLflow with DSPy to bring full observability to your DSPy development. We’ll walk through how to track DSPy module calls, evaluations, and optimizers using MLflow’s tracing and autologging capabilities. By the end, you'll see how combining these two tools makes it easier to debug, iterate, and understand your DSPy workflows, then deploy your DSPy program — end to end.
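The observability idea behind tracing can be illustrated without either library: a decorator that records each call's name, inputs, output and latency into an in-memory trace. This is a toy stand-in, not MLflow's or DSPy's actual API; MLflow Tracing captures the same kind of span data for real DSPy module calls via autologging.

```python
import functools
import time

TRACE = []  # in-memory trace log; a tracing backend records spans similarly

def traced(fn):
    """Record each call's name, inputs, output and latency -- a toy version
    of the span capture that tracing/autologging performs for module calls."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def answer(question: str) -> str:
    # Stand-in for a DSPy module call (e.g. a ChainOfThought program).
    return f"stub answer to: {question}"

answer("What does autologging capture?")
print(TRACE[0]["name"], TRACE[0]["latency_s"])
```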
Join us as we dive into how Turnpoint Services, in collaboration with DataNimbus, built an Intelligence Platform on Databricks in just 30 days. We'll explore features like MLflow, LLMs, MLOps, Model Registry, Unity Catalog & Dashboard Alerts that powered AI applications such as Demand Forecasting, Customer 360 & Review Automation. Turnpoint’s transformation enabled data-driven decisions, ops efficiency & a better customer experience. Building a modern data foundation on Databricks optimizes resource allocation & drives engagement. We’ll also introduce innovations in DataNimbus Designer: AI Blocks: modular, prompt-driven smart transformers for text data, built visually & deployed directly within Databricks. These capabilities push the boundaries of what's possible on the Databricks platform. Attendees will gain practical insights, whether you're beginning your AI journey or looking to accelerate it.
This in-depth session explores advanced MLOps practices for implementing production-grade machine learning workflows on Databricks. We'll examine the complete MLOps journey from foundational principles to sophisticated implementation patterns, covering essential tools including MLflow, Unity Catalog, Feature Stores and version control with Git. Dive into Databricks' latest MLOps capabilities including MLflow 3.0, which enhances the entire ML lifecycle from development to deployment with particular focus on generative AI applications. Key session takeaways include:
- Advanced MLflow 3.0 features for LLM management and deployment
- Enterprise-grade governance with Unity Catalog integration
- Robust promotion patterns across development, staging and production
- CI/CD pipeline automation for continuous deployment
- GenAI application evaluation and streamlined deployment
Struggling to implement traditional machine learning models that deliver real business value? Join us for a hands-on exploration of classical ML techniques powered by Databricks' Mosaic AI platform. This session focuses on time-tested approaches like regression, classification and clustering — showing how these foundational methods can solve real business problems when combined with Databricks' scalable infrastructure and MLOps capabilities. Key takeaways:
- Building production-ready ML pipelines for common business use cases including customer segmentation, demand forecasting and anomaly detection
- Optimizing model performance using Databricks' distributed computing capabilities for large-scale datasets
- Implementing automated feature engineering and selection workflows
- Establishing robust MLOps practices for model monitoring, retraining and governance
- Integrating classical ML models with modern data processing techniques
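To make the clustering use case concrete, here is a minimal one-dimensional k-means on synthetic customer-spend data. It is a pedagogical sketch with invented numbers; at Databricks scale you would reach for scikit-learn or Spark MLlib rather than hand-rolled loops.

```python
import statistics

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: assign each value to its nearest centroid,
    then recompute centroids as cluster means, repeating for `iters` rounds."""
    # Seed centroids with evenly spaced sorted values.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [statistics.mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Monthly spend for ten customers: a low-spend and a high-spend segment.
spend = [20, 25, 22, 30, 24, 210, 220, 205, 215, 230]
centroids, clusters = kmeans_1d(spend, k=2)
print(sorted(round(c) for c in centroids))  # [24, 216]
```

The two recovered centroids are the segment profiles a marketing team would act on; the same assign-and-update loop underlies the distributed implementations.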
As machine learning (ML) models scale in complexity and impact, organizations must establish a robust MLOps foundation to ensure seamless model deployment, monitoring and retraining. In this session, we’ll share how we leverage Databricks as the backbone of our MLOps ecosystem — handling everything from workflow orchestration to large-scale inference. We’ll walk through our journey of transitioning from fragmented workflows to an integrated, scalable system powered by Databricks Workflows. You’ll learn how we built an automated pipeline that streamlines model development, inference and monitoring while ensuring reliability in production. We’ll also discuss key challenges we faced, lessons learned and best practices for organizations looking to operationalize ML with Databricks.
This course will guide participants through a comprehensive exploration of machine learning model operations, focusing on MLOps and model lifecycle management. The initial segment covers essential MLOps components and best practices, providing participants with a strong foundation for effectively operationalizing machine learning models. In the latter part of the course, we will delve into the basics of the model lifecycle, demonstrating how to navigate it seamlessly using the Model Registry in conjunction with the Unity Catalog for efficient model management. By the course's conclusion, participants will have gained practical insights and a well-rounded understanding of MLOps principles, equipped with the skills needed to navigate the intricate landscape of machine learning model operations.
Pre-requisites: Familiarity with Databricks workspace and notebooks, familiarity with Delta Lake and Lakehouse, intermediate level knowledge of Python (e.g. understanding of basic MLOps concepts and practices as well as infrastructure and importance of monitoring MLOps solutions)
Labs: Yes
Certification Path: Databricks Certified Machine Learning Associate
Adopting MLOps is getting increasingly important with the rise of AI, and doing MLOps in large organizations requires many distinct capabilities. In the past, you had to implement these capabilities yourself. Luckily, the MLOps space is getting more mature, and end-to-end platforms like Databricks provide most of them. In this talk, I will walk through the MLOps components and how you can simplify your processes using Databricks. Audio for this session is delivered in the conference mobile app; you must bring your own headphones to listen.
As a global energy leader, Petrobras relies on machine learning to optimize operations, but manual model deployment and validation processes once created bottlenecks that delayed critical insights. In this session, we’ll reveal how we revolutionized our MLOps framework using MLflow, Databricks Asset Bundles (DABs) and Unity Catalog to:
- Replace error-prone manual validation with automated metric-driven workflows
- Reduce model deployment timelines from days to hours
- Establish granular governance and reproducibility across production models
Discover how we enabled data scientists to focus on innovation—not infrastructure—through standardized pipelines while ensuring compliance and scalability in one of the world’s most complex energy ecosystems.
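The automated, metric-driven validation described above can be sketched as a simple promotion gate: a candidate model is promoted only if it clears absolute metric thresholds and does not regress against the production champion. Metric names, thresholds and the regression tolerance here are hypothetical, not Petrobras's actual criteria.

```python
def promotion_gate(candidate: dict, production: dict,
                   min_metrics: dict, max_regression: float = 0.02) -> bool:
    """Automated stand-in for manual validation: promote a candidate only if
    every metric clears its absolute floor AND does not fall more than
    `max_regression` below the production champion's value."""
    for metric, floor in min_metrics.items():
        if candidate.get(metric, 0.0) < floor:
            return False  # fails absolute threshold
        if candidate[metric] < production.get(metric, 0.0) - max_regression:
            return False  # regresses too far vs. the champion
    return True

prod = {"auc": 0.91, "recall": 0.80}
good = {"auc": 0.93, "recall": 0.82}
bad  = {"auc": 0.85, "recall": 0.81}

print(promotion_gate(good, prod, {"auc": 0.90, "recall": 0.75}))  # True
print(promotion_gate(bad,  prod, {"auc": 0.90, "recall": 0.75}))  # False
```

Wired into a CI/CD job, a gate like this is what lets deployment drop from days to hours: the pipeline promotes or rejects on metrics instead of waiting for a human sign-off.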
How to scale AI agents in your organization.
Dive into end-to-end Machine Learning Operations (MLOps) with Vertex AI. Discover how to integrate data, models, and workflows to build scalable, reliable ML pipelines. Learn how Vertex AI automates key ML lifecycle stages like training, hyperparameter tuning, deployment, and monitoring. Whether you're starting out or optimizing workflows, this session covers best practices and real-world use cases to transform your approach to AI development. Perfect for data scientists and ML engineers!
Faster AI innovation cycles often require MLOps – a critical, but complex, undertaking. Join us in this session to discover how a platform approach to MLOps on Vertex AI simplifies and accelerates the entire AI life cycle. Learn how to streamline development, deployment, and management of your AI models. We’ll also share real-world success stories from customers like GSK.
Discover how to break free from the cycle of endless AI Proofs of Concept (POC) and unlock scalable, enterprise-wide impact. In this session, we’ll explore proven strategies for operationalizing AI, including leveraging cloud-native solutions like Vertex AI, building robust MLOps pipelines, and defining measurable ROI tied to business goals. Through real-world examples and actionable insights, learn how to overcome common scaling challenges, drive cultural adoption, and future-proof your AI strategy for sustained innovation and success.
This Session is hosted by a Google Cloud Next Sponsor.