Get the most out of your Google Cloud budget. This session covers cost-optimization strategies for Compute Engine and beyond, including Cloud Run, Vertex AI, and Autopilot in Google Kubernetes Engine. Learn how to manage your capacity reservations effectively and leverage consumption models such as Spot VMs, Dynamic Workload Scheduler, and committed use discounts (CUDs) to get the capacity availability your workloads need at the lowest cost.
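To make the trade-offs between these consumption models concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly baseline rate and the discount factors are hypothetical placeholders chosen for illustration, not published Google Cloud pricing; real Spot and committed use discounts vary by machine type, region, and commitment term.

```python
# Illustrative-only comparison of Compute Engine consumption models.
# All rates and discount factors below are hypothetical placeholders,
# not published pricing.

HOURS_PER_MONTH = 730      # average hours in a month

ON_DEMAND_HOURLY = 1.00    # hypothetical baseline rate, USD/hour
CUD_DISCOUNT = 0.55        # assumed discount for a multi-year commitment
SPOT_DISCOUNT = 0.75       # assumed Spot discount off the on-demand rate


def monthly_cost(hourly_rate: float, utilization: float = 1.0) -> float:
    """Cost of one VM running for `utilization` fraction of the month."""
    return hourly_rate * HOURS_PER_MONTH * utilization


on_demand = monthly_cost(ON_DEMAND_HOURLY)
committed = monthly_cost(ON_DEMAND_HOURLY * (1 - CUD_DISCOUNT))
spot = monthly_cost(ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT))

print(f"On-demand:  ${on_demand:8.2f}/month (pay as you go)")
print(f"CUD:        ${committed:8.2f}/month (billed for the full commitment)")
print(f"Spot:       ${spot:8.2f}/month (reclaimable; capacity not guaranteed)")
```

The point of the sketch is the shape of the decision, not the numbers: committed use trades flexibility for a predictable discount, while Spot trades availability guarantees for the deepest savings.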
Speaker
Ari Liberman
Group Product Manager, Compute, Google Cloud
2 talks
Talks & appearances
The growth in AI/ML training, fine-tuning, and inference workloads has created exponential demand for GPU capacity, making accelerators a scarce resource.
Join this session to learn:
- How Dynamic Workload Scheduler (DWS) works and how you can use it today
- About Compute Engine consumption models, including on-demand, spot, and future reservations (a minimal Spot provisioning sketch follows this list)
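As a companion to the consumption-model bullet above, the following is a minimal sketch of requesting Spot capacity programmatically, assuming the google-cloud-compute Python client library; the project, zone, machine type, and image values are placeholders.

```python
# Minimal sketch: request a Spot VM with the google-cloud-compute client.
# Project, zone, machine type, and image below are placeholder values.
from google.cloud import compute_v1

PROJECT = "my-project"    # placeholder project ID
ZONE = "us-central1-a"    # placeholder zone

instance = compute_v1.Instance(
    name="spot-worker-1",
    machine_type=f"zones/{ZONE}/machineTypes/e2-standard-4",
    # The Spot consumption model is selected through the scheduling policy.
    scheduling=compute_v1.Scheduling(
        provisioning_model="SPOT",
        instance_termination_action="DELETE",  # delete the VM when reclaimed
    ),
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
                disk_size_gb=10,
            ),
        )
    ],
    network_interfaces=[
        compute_v1.NetworkInterface(network="global/networks/default")
    ],
)

operation = compute_v1.InstancesClient().insert(
    project=PROJECT, zone=ZONE, instance_resource=instance
)
operation.result()  # block until the insert operation completes
```

Switching the provisioning model back to the default on-demand behavior is a one-field change in the scheduling block, which is one way to think of the consumption models as a single configuration knob on the same workload.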