Leveraging the performance and ample memory capacity of Google Cloud's TPU v5p and AI Hypercomputer, Lightricks successfully trained our generative text-to-video model without splitting it across separate processes. This efficient hardware utilization significantly shortens each training cycle, allowing us to run a series of experiments on a TPU v5p-128 pod in quick succession. Training each experiment quickly enables rapid iteration, an invaluable advantage for our research team in the competitive field of generative AI.
Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.
Speaker
Sapir Weissbuch
2 talks
Researcher
Lightricks
Talks & appearances
2 activities
with Nisha Mariam Johnson (Google Cloud), Sapir Weissbuch (Lightricks), Philipp Schmid (Google DeepMind), Milad Mohammadi (Google Cloud)
More generative AI models are built on PyTorch than on any other framework. We partner with Lightricks to show how PyTorch/XLA offers a performant, automatic compiler experience with all the ease-of-use and ecosystem benefits of PyTorch. Learn from Hugging Face as they share the latest features that improve PyTorch/XLA performance and usability on GPUs and TPUs.