talk-data.com

Topic: vod-recorded-session · 410 tagged

Activity Trend: 351 peak/qtr (2020-Q1 to 2026-Q1)

Activities

410 activities · Newest first

session
by Moontae Lee (LG AI Research), Cesar Naranjo (Moloco), Chelsie Czop (Google Cloud), Kshetrajna Radhaven (Shopify), Newfel Harrat (Google Cloud), Kasper Piskorski, PhD (Technology Innovation Institute)

AI Hypercomputer is a revolutionary system designed to make implementing AI at scale easier and more efficient. In this session, we’ll explore the key benefits of AI Hypercomputer and how it simplifies complex AI infrastructure environments. Then, learn firsthand from industry leaders Shopify, Technology Innovation Institute, Moloco, and LG AI Research on how they leverage Google Cloud’s AI solutions to drive innovation and transform their businesses.

APIs dominate the web, accounting for the majority of all internet traffic. And more AI means more APIs, because they act as an important mechanism to move data into and out of AI applications, AI agents, and large language models (LLMs). So how can you make sure all of these APIs are secure? In this session, we’ll take you through OWASP’s top 10 API and LLM security risks, and show you how to mitigate these risks using Google Cloud’s security portfolio, including Apigee, Model Armor, Cloud Armor, Google Security Operations, and Security Command Center.

Bring your laptop and join us for an interactive demo on how to apply large language models (LLMs) from the Vertex AI Model Garden to a business use case, and learn about best practices for monitoring these models in production. We’ll go through an exercise using Colab Enterprise notebooks and learn how to use out-of-the-box tools to monitor RED (rate, error, duration) metrics, configure alerts, and monitor the rate of successful predictions in order to ensure successful use of a Vertex AI model in production.
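The RED metrics the session mentions can be sketched in a few lines. The following is an illustrative example, not the Vertex AI monitoring API: the record shape and field names are assumptions made up for this sketch.

```typescript
// Hypothetical sketch: computing RED (rate, error, duration) metrics
// from a batch of prediction-request records. Field names are illustrative,
// not part of any Vertex AI API.
interface PredictionRecord {
  timestampMs: number; // when the request completed
  durationMs: number;  // request latency
  ok: boolean;         // whether the prediction succeeded
}

interface RedMetrics {
  ratePerSec: number;    // requests per second over the window
  errorRate: number;     // fraction of failed requests
  p95DurationMs: number; // 95th-percentile latency
}

function computeRed(records: PredictionRecord[], windowSec: number): RedMetrics {
  const total = records.length;
  const errors = records.filter((r) => !r.ok).length;
  const sorted = records.map((r) => r.durationMs).sort((a, b) => a - b);
  const p95Index = Math.min(sorted.length - 1, Math.ceil(0.95 * sorted.length) - 1);
  return {
    ratePerSec: total / windowSec,
    errorRate: total === 0 ? 0 : errors / total,
    p95DurationMs: sorted.length === 0 ? 0 : sorted[p95Index],
  };
}
```

In production these numbers would come from Cloud Monitoring rather than a hand-rolled aggregator; the sketch only shows what each of the three signals measures.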

This panel explores the potential of cloud technologies for operational technology (OT) and the critical need for proactive cybersecurity measures. The convergence of OT and IT, driven by cloud adoption, presents both opportunities and challenges. Panelists will examine the benefits of cloud-based OT, such as increased efficiency, scalability, data-driven insights, and resilience, along with the opportunity to build with security in mind.

Firestore with MongoDB compatibility is a serverless database service designed to maximize scalability, high availability, and performance without the hidden costs of capacity planning. This session demonstrates the new Firestore with MongoDB compatibility capabilities and discusses how Dialpad has built an AI-powered customer communications platform, leveraging Firestore over the last 14 years to grow a successful, performant, reliable business.

Learn how to manage security controls and licenses for thousands of users, and tie it all together with APIs. We’ll show you ways to manage developer access more efficiently, build custom management integrations, and keep your CISO happy at the same time. We’ll also demo the new Gemini Code Assist integration with Apigee, which lets developers use Gemini Code Assist chat to generate context-aware OpenAPI specifications that reuse components from other APIs in their organization and reference organizational security standards.

Join us for an in-depth session on Firebase Genkit, an open source framework that simplifies the development of AI-powered applications. Discover how to use the Node.js and Go SDKs to build intelligent chatbots, multimodal content generators, streamlined automation workflows, and agentic experiences. We'll demonstrate how Genkit's unified interface seamlessly integrates Google's Gemini and Imagen models, self-hosted Ollama options, and a variety of popular models from Vertex AI Model Garden.

Migrating from AWS or Azure to Google Cloud runtimes can feel like navigating a maze of complex services and dependencies. In this session, we’ll explore key considerations for migrating legacy applications, emphasizing the “why not modernize?” approach with a practical guide. We’ll share real-world examples of successful transformations. And we’ll go beyond theory with a live product demo that showcases migration tools, and a code assessment demo powered by Gemini that demonstrates how you can understand and modernize legacy code.

JavaScript gets a lot of flak for not being strongly typed. But if you’re running JavaScript in production today, you don’t need to wait for runtime errors to catch problems. TypeScript has taken JavaScript from a loosely typed language, where a variable can change from a string to a number without warning, and made it strongly typed. Now Zod and Effect are here to tame even the wildest unknown parameters from your users. We’ll demonstrate using these tools in an application and we’ll deploy that application to Google Cloud.

Simplify real-time data analytics and build event-driven, AI-powered applications using BigQuery and Pub/Sub. Learn to ingest and process massive streaming data from users, devices, and microservices for immediate insights and rapid action. Explore BigQuery's continuous queries for real-time analytics and ML model training. Discover how Flipkart, India’s leading e-commerce platform, leverages Google Cloud to build scalable, efficient real-time data pipelines and AI/ML solutions, and gain insights on driving business value through real-time data.
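The streaming pattern described above, i.e. events arriving continuously and being aggregated for immediate insight, can be sketched with a tumbling-window count. This is an illustrative toy, not Pub/Sub or BigQuery code; the event shape is made up.

```typescript
// Illustrative sketch of windowed streaming aggregation: events arrive
// continuously (as from a Pub/Sub subscription) and are counted into
// fixed, tumbling windows. Event fields are hypothetical.
interface ClickEvent {
  userId: string;
  timestampMs: number;
}

// Count events per tumbling window of `windowMs` milliseconds,
// keyed by the window's start timestamp.
function countPerWindow(events: ClickEvent[], windowMs: number): Map<number, number> {
  const counts = new Map<number, number>();
  for (const e of events) {
    const windowStart = Math.floor(e.timestampMs / windowMs) * windowMs;
    counts.set(windowStart, (counts.get(windowStart) ?? 0) + 1);
  }
  return counts;
}
```

A BigQuery continuous query expresses the same aggregation declaratively in SQL and keeps it running as new rows stream in, which is what removes the need for hand-written windowing code like this.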

Gemini 2.0 was built for the agentic era – with native tool use, function calling, and robust support for multimodal understanding, the new frontier of applications is agentic. Join this session to explore the frontier of agents: where the best opportunities for developers to build lie, the open research areas in scaling to billions of agents, and how to best leverage Gemini.

Unlock the power of generative AI with retrieval augmented generation (RAG) on Google Cloud. In this session, we’ll navigate key architectural decisions to deploy and run RAG apps: from model and app hosting to data ingestion and vector store choice. We’ll cover reference architecture options – from an easy-to-deploy approach with Vertex AI RAG Engine, to a fully managed solution on Vertex AI, to a flexible DIY topology with Google Kubernetes Engine and open source tools – and compare trade-offs between operational simplicity and granular control.
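The retrieval step at the heart of RAG can be shown in miniature. The sketch below ranks pre-embedded documents by cosine similarity and assembles an augmented prompt; the documents, embeddings, and prompt format are all invented for illustration, and a real deployment would use a managed vector store and an embedding model.

```typescript
// Toy sketch of RAG retrieval: rank documents by cosine similarity to
// the query embedding, take the top k, and build an augmented prompt.
// All data here is made up for illustration.
interface Doc {
  text: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k documents most similar to the query embedding.
function retrieve(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

// Splice retrieved context into the prompt sent to the generator model.
function buildPrompt(question: string, context: Doc[]): string {
  return `Context:\n${context.map((d) => `- ${d.text}`).join("\n")}\n\nQuestion: ${question}`;
}
```

The architectural choices the session compares (RAG Engine, fully managed Vertex AI, or DIY on GKE) differ mainly in who operates these two steps, not in what the steps are.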

Explore how distributed cloud solutions solve for computing in sensitive on-premises, harsh, and remote environments with stringent regulatory and sovereignty requirements. Learn how these solutions enable secure access to advanced cloud capabilities like data analytics and AI within completely isolated environments, ensuring strict compliance and data residency. Key implementation and management considerations will be discussed.

This session explores the evolution of data management on Kubernetes for AI and machine learning (ML) workloads and modern databases, including Google’s leadership in this space. We’ll discuss key challenges and solutions, including persistent storage with solutions like checkpointing and Cloud Storage FUSE, and accelerating data access with caching. Customers Qdrant and Codeway will share how they’ve successfully leveraged these technologies to improve their AI, ML, and database performance on Google Kubernetes Engine (GKE).

The rise of AI-powered code generation tools presents a compelling alternative to traditional UI prototyping frameworks. This talk explores the question: Is it time to ditch the framework overhead and embrace core web technologies (such as HTML, CSS, JavaScript) for faster, more flexible prototyping? We’ll examine the trade-offs between structured frameworks and the granular control offered by a “bare metal” approach, augmented by AI assistance. Learn when leveraging AI with core tech becomes the smarter choice, enabling rapid iteration and bespoke UI designs, and when frameworks still reign supreme.

Experience the power of AlloyDB Omni, a cutting-edge PostgreSQL-compatible database designed for multicloud and hybrid cloud environments. This session explores how AlloyDB Omni accelerates the development of modern applications, enabling generative AI experiences, efficient vector search, real-time operational analytics, and scalable transactional performance. We’ll also showcase how to run your applications on multiple clouds using Aiven’s seamless managed service, and how to supercharge hybrid cloud deployments with cloud-ready partners.

Unleash the full potential of large language models (LLMs) on your edge devices, even when there’s spotty internet. This session explores a hybrid approach that combines the power of cloud-based LLMs with the efficiency of on-device models. Learn how to intelligently route queries, enabling laptops and mobile phones to perform complex tasks while maintaining snappy performance. View demos of efficient task routing that optimizes for quality and cost to ensure your apps run smoothly, even during network disruptions.
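The routing idea above can be sketched as a single decision function. This is a hedged illustration under invented assumptions: the word-count complexity proxy and the threshold are placeholders for the learned routing signals a real system would use.

```typescript
// Hedged sketch of hybrid LLM routing: simple queries go to a small
// on-device model, complex ones to a cloud LLM, with a fallback to the
// device model when the network is down. The complexity heuristic and
// threshold are illustrative, not a real router's logic.
type Route = "on-device" | "cloud";

function routeQuery(query: string, online: boolean, complexityThreshold = 20): Route {
  if (!online) return "on-device"; // no connectivity: stay local
  // Crude complexity proxy: word count. A production router would use
  // a learned classifier or a model-confidence signal instead.
  const words = query.trim().split(/\s+/).length;
  return words > complexityThreshold ? "cloud" : "on-device";
}
```

The payoff is exactly what the abstract describes: the app keeps responding during network disruptions because the local path is always available, and the cloud path is reserved for queries worth its latency and cost.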

The open-source AI agent landscape is thriving with innovation. This session is your definitive guide to building, deploying, and monitoring OSS agent frameworks. Learn patterns and best practices that combine the best of open-source frameworks with the AI platform built for production – Vertex AI. In this session, we'll cover techniques for multi-agent orchestration, working with diverse data sources, and building autonomous workflows. Join us on a journey from open-source agent frameworks to production-grade agent deployments and LLMOps on Vertex AI.