talk-data.com

Speaker: Rob Martin
VP of Technology, Rehrig Pacific
3 talks

Talks & appearances

3 activities · Newest first

AWS re:Invent 2024 - Simplify business scenario analysis with Amazon Q in QuickSight (BSI104-NEW)

Amazon Q in QuickSight now includes an AI-assisted data analysis experience (in preview) that helps users find answers to complex problems quickly. Amazon Q simplifies in-depth analysis with step-by-step guidance, saving hours of manual data manipulation and unlocking data-driven decision-making across your organization. Hear how customers like GoDaddy and Rehrig Pacific Company are using Amazon Q to model solutions to complex problems without specialized skills or spreadsheets, all within the QuickSight environment. Discover how you can make better decisions, faster, with AI-assisted data analysis.

Session with Vaibhav Singh (Google Cloud), Erik Nijkamp (Salesforce), Amanpreet Singh (contextual.ai), and Rob Martin (Rehrig Pacific)

Training large AI models at scale requires high-performance, purpose-built infrastructure. This session will guide you through the key considerations for choosing between tensor processing units (TPUs) and graphics processing units (GPUs) for your training needs. Explore the strengths of each accelerator for various workloads, such as large language models and generative AI models. Discover best practices for training models and optimizing your training workflow on Google Cloud using TPUs and GPUs. Understand the performance and cost implications, along with cost-optimization strategies at scale.

Presented at Google Cloud Next 25.
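
As a rough, hedged illustration of the TPU-versus-GPU choice described above (this is not material from the session), the sketch below uses JAX, which runs on both Cloud TPUs and GPUs, to report which accelerator a job landed on and to run a small bfloat16 matrix multiply on it. The function names and the benchmark size are invented for this example.

# Minimal sketch: probe the accelerator a JAX job landed on, so one script
# can run on either Cloud TPU or GPU VMs. Names are illustrative only.
import jax
import jax.numpy as jnp

def describe_accelerators():
    backend = jax.default_backend()          # "tpu", "gpu", or "cpu"
    devices = jax.devices()
    print(f"backend={backend}, device_count={len(devices)}")
    for d in devices:
        print(f"  {d.platform}:{d.id} ({d.device_kind})")
    return backend

def tiny_matmul_benchmark(n=4096, dtype=jnp.bfloat16):
    # bfloat16 is native on TPUs and well supported on recent GPUs,
    # so it is a reasonable default precision for large-model training on either.
    key = jax.random.PRNGKey(0)
    a = jax.random.normal(key, (n, n), dtype=dtype)
    b = jax.random.normal(key, (n, n), dtype=dtype)
    matmul = jax.jit(lambda x, y: x @ y)
    out = matmul(a, b).block_until_ready()   # force execution on the accelerator
    return out.shape

if __name__ == "__main__":
    describe_accelerators()
    print("matmul output shape:", tiny_matmul_benchmark())

On a Cloud TPU VM the backend reports "tpu" and on a GPU VM it reports "gpu", so the same script can branch on that value to pick batch sizes or parallelism settings per accelerator.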

Learn what’s new with Memorystore, including a deep dive into its latest generative AI launches and integrations. Dig into the latest Google Cloud Next launch announcements and how top customers are leveraging Memorystore for Redis Cluster for its speed, reliability, and ease of use. Discover how zero-downtime scaling (both in and out) can empower developers to start small and scale out as their applications grow – always ensuring reliability and performance for their most critical workloads.

Presented at Google Cloud Next 25.
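
As a loose companion to the Memorystore for Redis Cluster description above, here is a minimal sketch (not from the talk) of connecting to a cluster instance from Python with redis-py's cluster-aware client. The endpoint address is a placeholder for your instance's discovery endpoint, and auth/TLS settings depend on how the instance is provisioned.

# Minimal sketch: basic reads and writes against a Memorystore for Redis Cluster
# instance using redis-py's cluster client. Host and keys are placeholders.
from redis.cluster import RedisCluster

def main():
    # The cluster client discovers shards, routes keys by hash slot, and
    # refreshes its slot map when the cluster topology changes, which is what
    # lets applications keep running while the cluster scales in or out.
    client = RedisCluster(host="10.0.0.3", port=6379)  # placeholder discovery endpoint

    client.set("inventory:pallet:42", "in-transit")
    print(client.get("inventory:pallet:42"))

    # Counters and TTLs work the same as on single-node Redis.
    client.incr("events:scans:today")
    client.expire("events:scans:today", 86400)

if __name__ == "__main__":
    main()

Because the cluster client re-reads the hash-slot map when shards are added or removed, application code like this does not need to change as the cluster scales in or out.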