Relying on third-party-hosted artificial intelligence (AI) models is not always an option for your application, and you are not guaranteed continued support for those model endpoints through your next software release. By hosting state-of-the-art AI models like Gemma in the same environment as AlloyDB Omni, you can run scalable generative AI apps with low latency on any cloud or on premises, meeting regulatory needs. Learn how Neuropace runs AlloyDB Omni for enterprise-grade vector search in its local environment, which contains sensitive customer workloads.
Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.
Speaker: Sharanya Desai, Director of AI, Technical Fellow, Neuropace