Accelerate AI inference workloads with Google Cloud TPUs and GPUs
Description
Deploying AI models at scale demands high-performance inference capabilities. Google Cloud offers a range of Cloud Tensor Processing Units (TPUs) and NVIDIA-powered graphics processing unit (GPU) VMs. This session will guide you through the key considerations for choosing between TPUs and GPUs for your inference needs. Explore the strengths of each accelerator for workloads such as large language models and generative AI. Discover how to deploy and optimize your inference pipeline on Google Cloud using TPUs or GPUs, and understand the cost implications and cost-optimization strategies.
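As a rough illustration of the portability theme (a minimal sketch, not material from the session itself): with JAX, the same jitted model code runs unmodified on either accelerator, because XLA compiles it for whatever device is attached to the VM. The model below is a toy two-layer MLP standing in for a real inference workload.

    import jax
    import jax.numpy as jnp

    # JAX reports whichever accelerator is attached to the VM, e.g.
    # [TpuDevice(...)] on a Cloud TPU VM or [CudaDevice(...)] on a GPU VM.
    print(jax.devices())

    @jax.jit
    def forward(params, x):
        # Toy two-layer MLP standing in for a real model's forward pass.
        h = jnp.tanh(x @ params["w1"] + params["b1"])
        return h @ params["w2"] + params["b2"]

    key1, key2 = jax.random.split(jax.random.PRNGKey(0))
    params = {
        "w1": jax.random.normal(key1, (128, 256)) * 0.02,
        "b1": jnp.zeros(256),
        "w2": jax.random.normal(key2, (256, 10)) * 0.02,
        "b2": jnp.zeros(10),
    }

    x = jnp.ones((8, 128))       # batch of 8 dummy inputs
    logits = forward(params, x)  # XLA compiles for the attached TPU or GPU
    print(logits.shape)          # (8, 10)

The same script can be run on a TPU VM or a GPU VM without code changes; only the installed JAX backend differs.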