Join an insightful fireside chat with Jeff Dean, a pioneering force behind Google’s AI leadership. As Google's Chief Scientist at DeepMind & Research, Jeff will share his vision for AI and specialized AI hardware, including Ironwood, the seventh-generation Cloud TPU chip. What exciting things might we expect it to power? What drives Google’s innovation in specialized AI hardware? In this spotlight, we’ll also discuss how TPUs enable efficient large-scale training and optimized inference workloads, including exclusive, never-before-revealed details of Ironwood: the differentiated chip design, data center infrastructure, and software stack co-design that make Google Cloud TPUs the most compelling choice for AI workloads.
talk-data.com
Speaker
Sabastian Mugazambi
3 talks
Frequent Collaborators
Talks & appearances
3 activities · Newest first
Implementing generative AI applications with large language models (LLMs) and diffusion models requires large amounts of computation that can seamlessly scale to train, fine-tune, and serve the models; Google Cloud TPUs deliver that scale. Cohere is leveraging the compute-heavy Cloud TPU v4 and v5e to train sophisticated gen AI models that meet the heightened needs of their enterprise users. Check out how Cohere and Cloud TPUs are delivering enterprise-tailored LLMs that can help increase business productivity by automating time-consuming and monotonous workflows. Please note: seating is limited and on a first-come, first-served basis; standing areas are available.