This presentation covers large-scale AI training and inference, focusing on open models and their deployment on Google Cloud accelerators. Open models such as the Llama family of LLMs and Gemma are state-of-the-art language models that demand substantial computational resources and efficient strategies for training and inference at scale. This session provides a practical guide to harnessing PyTorch on Google Cloud accelerators to meet the high-performance requirements of such models.
Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.
talk-data.com

Speaker: Nisha Mariam Johnson, Product Manager, Google Cloud

Talks & appearances (2 activities, newest first)
with Nisha Mariam Johnson (Google Cloud), Sapir Weissbuch (Lightricks), Philipp Schmid (Google DeepMind), and Milad Mohammadi (Google Cloud)
More generative AI models are built on PyTorch than on any other framework. We partner with Lightricks to share how PyTorch/XLA offers a performant, automatic compiler experience with all the ease-of-use and ecosystem benefits of PyTorch. Learn from Hugging Face as they share more about the latest features that improve PyTorch/XLA performance and usability on GPUs and TPUs.