talk-data.com

Google Cloud Next session 2024-04-10 at 21:15

Cost-efficient serving of Stable Diffusion models using Cloud TPUs

Description

Text-to-image generative AI models such as the Stable Diffusion family are rapidly growing in popularity. In this session, we explain how to optimize every layer of your serving architecture – including TPU accelerators, orchestration, the model server, and the ML framework – to gain significant improvements in performance and cost effectiveness. We introduce new capabilities in Google Kubernetes Engine that improve the cost effectiveness of AI inference, and we provide a deep dive into MaxDiffusion, a new library for deploying scalable Stable Diffusion workloads on TPUs.