talk-data.com


Speaker: Rob Martin
VP of Technology, Rehrig Pacific
2 talks
Filtered by event: Google Cloud Next '24


Talks & appearances

Showing 2 of 3 activities

Session with Vaibhav Singh (Google Cloud), Erik Nijkamp (Salesforce), Amanpreet Singh (contextual.ai), and Rob Martin (Rehrig Pacific)

Training large AI models at scale requires high-performance, purpose-built infrastructure. This session guides you through the key considerations for choosing between tensor processing units (TPUs) and graphics processing units (GPUs) for your training needs. Explore the strengths of each accelerator for workloads such as large language models and generative AI models. Discover best practices for training and for optimizing your training workflow on Google Cloud using TPUs and GPUs. Understand the performance and cost implications, along with cost-optimization strategies at scale.
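The abstract above centers on accelerator choice and portability. As a loose illustration of that theme (not material from the talk itself), here is a minimal JAX sketch that runs the same jitted computation on whichever accelerator the host exposes, TPU, GPU, or CPU; the layer shape and sizes are arbitrary placeholders.

```python
# Minimal sketch, not from the session: the same JAX code runs unchanged on
# TPU, GPU, or CPU, and jax.jit compiles it for whichever backend is attached.
import jax
import jax.numpy as jnp

print("Available devices:", jax.devices())  # e.g. TPU cores or CUDA GPUs

@jax.jit
def forward(w, x):
    # Toy "layer": a matrix multiply followed by a nonlinearity.
    return jnp.tanh(x @ w)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (1024, 1024))   # arbitrary sizes for illustration
x = jax.random.normal(key, (8, 1024))

y = forward(w, x)  # compiled for and executed on the default accelerator
print(y.shape)
```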


Learn what’s new with Memorystore, including a deep dive into its latest generative AI launches and integrations. Dig into the latest Google Cloud Next launch announcements and how top customers are leveraging Memorystore for Redis Cluster for its speed, reliability, and ease of use. Discover how zero-downtime scaling, both in and out, empowers developers to start small and scale as their applications grow, while always ensuring reliability and performance for their most critical workloads.
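On the application side, Memorystore for Redis Cluster connects like an open-source Redis Cluster, so the zero-downtime-scaling story described above is largely invisible to a cluster-aware client. A minimal redis-py sketch, with a placeholder host standing in for the instance's discovery endpoint (not from the session; TLS and auth settings will depend on how the instance is configured):

```python
# Minimal sketch, not from the session: connecting to a Memorystore for Redis
# Cluster instance with redis-py's cluster-aware client. The host below is a
# placeholder for the instance's discovery endpoint.
from redis.cluster import RedisCluster

rc = RedisCluster(host="10.0.0.3", port=6379)  # hypothetical endpoint

# The cluster client keeps its slot-to-node map up to date, so reads and
# writes continue to be routed correctly while the cluster scales in or out.
rc.set("greeting", "hello from Memorystore")
print(rc.get("greeting"))
```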
