talk-data.com

Topic: Cloud Run (Google Cloud Run)

Tags: serverless, containers, google_cloud

51 activities tagged

Activity Trend: 2020-Q1 to 2026-Q1 (peak: 1 per quarter)

Activities (51 · Newest first)

Discover how Apache Airflow powers scalable ELT pipelines, enabling seamless data ingestion, transformation, and machine learning-driven insights. This session will walk through:

Automating Data Ingestion: Using Airflow to orchestrate raw data ingestion from third-party sources into your data lake (S3, GCS), ensuring a steady pipeline of high-quality training and prediction data.

Optimizing Transformations with Serverless Computing: Offloading intensive transformations to serverless functions (GCP Cloud Run, AWS Lambda) and machine learning models (BigQuery ML, SageMaker), integrating their outputs seamlessly into Airflow workflows.

Real-World Impact: A case study on how INTRVL leveraged Airflow, BigQuery ML, and Cloud Run to analyze early voting data in near real time, generating actionable insights on voter behavior across swing states.

This talk not only provides a deep dive into the Political Tech space but also serves as a reference architecture for building robust, repeatable ELT pipelines. Attendees will gain insights into modern serverless technologies from AWS and GCP that enhance Airflow’s capabilities, helping data engineers design scalable, cloud-agnostic workflows.
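
As a rough illustration of the pattern this abstract describes, here is a minimal Airflow DAG sketch that ingests a raw file into a data lake bucket and then calls a Cloud Run service to run the heavy transformation. The bucket name, service URL, and file paths are placeholders, not details from the talk.

```python
# Hedged sketch: ingest raw data to Cloud Storage, then offload the transform
# to a Cloud Run service, orchestrated by Airflow. All names are placeholders.
from datetime import datetime

import requests
from airflow.decorators import dag, task
from google.auth.transport.requests import Request
from google.cloud import storage
from google.oauth2 import id_token

RAW_BUCKET = "my-raw-data-lake"                      # placeholder bucket
SERVICE_URL = "https://transform-xyz-uc.a.run.app"   # placeholder Cloud Run URL


@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def elt_pipeline():
    @task
    def ingest(source_path: str) -> str:
        """Upload a raw file from the worker's filesystem into the data lake."""
        client = storage.Client()
        blob = client.bucket(RAW_BUCKET).blob("raw/latest.csv")
        blob.upload_from_filename(source_path)
        return f"gs://{RAW_BUCKET}/{blob.name}"

    @task
    def transform(gcs_uri: str) -> None:
        """Call the Cloud Run transformation service with an ID token."""
        token = id_token.fetch_id_token(Request(), SERVICE_URL)
        resp = requests.post(
            f"{SERVICE_URL}/run",
            json={"input": gcs_uri},
            headers={"Authorization": f"Bearer {token}"},
            timeout=300,
        )
        resp.raise_for_status()

    transform(ingest("/tmp/extract.csv"))


elt_pipeline()
```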

Move beyond theoretical AI concepts and dive into building practical, production-ready AI agents in this intensive, hands-on workshop. We'll harness the power of Gemini 2.5 using the Agent Development Kit (ADK), Google's latest open-source framework for building sophisticated single- and multi-agent systems.

What You Will Learn:

This workshop covers the full stack of agent development, from data integration to final deployment and runtime.

Develop with ADK: Learn to build modular, controllable agents using the ADK framework.

Enable Agent Collaboration: Implement sophisticated multi-agent workflows using the open Agent-to-Agent (A2A) protocol.

Deploy & Orchestrate: Transition your agents from development to production using flexible deployment strategies. We'll cover deploying and managing agents using Vertex AI Agent Engine and Google Cloud Run. Learn to scale, manage, and ensure the reliability of your agent systems in the cloud.

Production Focus: Understand the complete lifecycle, including packaging agents and deploying them reliably on Google Cloud.

Introduce Observability: Briefly cover essential concepts for monitoring your production agents.

By the end of this workshop, you'll have practical experience building and orchestrating multi-agent systems using Google's cutting-edge AI technologies, preparing you to implement real-world, production-grade AI solutions.
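
As a companion to the workshop description above, here is a minimal, hypothetical sketch of what an ADK agent with a tool and a sub-agent can look like, assuming the google-adk Python package; the model id, agent names, and tool are illustrative rather than taken from the workshop materials.

```python
# Hedged ADK sketch: one specialist agent with a plain-Python tool, plus a
# coordinator that can delegate to it. All names and the model id are placeholders.
from google.adk.agents import Agent


def get_order_status(order_id: str) -> dict:
    """Hypothetical tool: look up an order's status (stubbed here)."""
    return {"order_id": order_id, "status": "shipped"}


support_agent = Agent(
    name="support_agent",
    model="gemini-2.5-flash",  # placeholder model id
    description="Answers order-status questions.",
    instruction="Use get_order_status to answer questions about orders.",
    tools=[get_order_status],
)

# A coordinating agent delegates to sub-agents; this is the multi-agent pattern
# that A2A extends across separately deployed services.
root_agent = Agent(
    name="coordinator",
    model="gemini-2.5-flash",
    instruction="Route customer questions to the right specialist agent.",
    sub_agents=[support_agent],
)
```

An agent defined this way is typically explored locally with the ADK developer tooling (for example, `adk run` or the `adk web` UI) before being containerized for Cloud Run or deployed to Vertex AI Agent Engine.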

Connect with fellow developers building and running applications in production on Cloud Run. We'll discuss the latest announcements, new features, and best practices for scaling and managing deployments. This is a great opportunity to share your real-world use cases, discuss challenges you’ve faced, and connect with others who rely on Cloud Run.

Leverage the flexibility of Cloud Run and its ease of use for your Apache Kafka workloads. In this session, we’ll introduce Cloud Run worker pools, a new resource specifically designed for non-request-based workloads, like Kafka consumers. Learn how worker pools, along with a self-hosted Kafka autoscaler, can enable fast and flexible scaling of your Kafka consumers by using Kafka queue metrics.
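
To make the worker-pool idea concrete, here is a hedged sketch of the kind of pull-based consumer such a deployment would host: no HTTP server, just a long-running poll loop. It assumes the confluent-kafka client; the broker address, topic, and consumer group are placeholders.

```python
# Sketch of a non-request workload suited to a Cloud Run worker pool:
# a Kafka consumer that polls continuously instead of serving HTTP.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker.example.com:9092",  # placeholder broker
    "group.id": "orders-processor",                  # placeholder group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Process the record; consumer lag on this group is the kind of queue
        # metric an autoscaler can watch to add or remove worker instances.
        print(f"processing {msg.key()}: {msg.value()}")
finally:
    consumer.close()
```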

Time to make generative AI a reality for your application. This session is all about how to build high-performance gen AI applications fast with Cloud SQL for MySQL and PostgreSQL. Learn about Google Cloud’s innovative full-stack solutions that make gen AI app development, deployment, and operations simple and easy – even when deploying high-performance, production-grade applications. We’ll highlight best practices for getting started with Vertex AI, Cloud Run, Google Kubernetes Engine, and Cloud SQL, so that you can focus on gen AI application development from the get-go.

Get the most out of your Google Cloud budget. This session covers cost-optimization strategies for Compute Engine and beyond, including Cloud Run, Vertex AI, and Autopilot in Google Kubernetes Engine. Learn how to effectively manage your capacity reservations and leverage consumption models like Spot VMs, Dynamic Workload Scheduler, and committed use discounts (CUDs) to achieve the optimum levels of capacity availability for your workloads while optimizing your cost.

session
by Harjot Gill (CodeRabbit), Steren Giannini (Google Cloud), Harrison Chase (LangChain), Wietse Venema (Google Cloud)

Cloud Run is an ideal platform for hosting AI applications – for example, you can use Cloud Run with AI frameworks like LangChain or Firebase Genkit to orchestrate calls to AI models on Vertex AI, vector databases, and other APIs. In this session, we’ll dive deep into building AI agents on Cloud Run to solve complex tasks and explore several techniques, including tool calling, multi-agent systems, memory state management, and code execution. We’ll showcase interactive examples using popular frameworks.
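
As one hedged illustration of the tool-calling technique mentioned here, the sketch below wires a single Python tool to a Vertex AI model with LangChain; package names assume langchain-core and langchain-google-vertexai, and the model id and tool are placeholders.

```python
# Hedged sketch of tool calling with LangChain against a Vertex AI model,
# the kind of agent loop an app on Cloud Run could run.
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langchain_google_vertexai import ChatVertexAI


@tool
def get_inventory(sku: str) -> int:
    """Hypothetical tool: return how many units of a SKU are in stock."""
    return 42


llm = ChatVertexAI(model_name="gemini-1.5-pro")  # placeholder model id
llm_with_tools = llm.bind_tools([get_inventory])

messages = [HumanMessage("How many units of SKU-123 do we have in stock?")]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Run any tool the model asked for and feed the result back for a final answer.
for call in ai_msg.tool_calls:
    result = get_inventory.invoke(call["args"])
    messages.append(ToolMessage(content=str(result), tool_call_id=call["id"]))

final = llm_with_tools.invoke(messages)
print(final.content)
```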

Join us to discuss serverless computing and event-driven architectures with Cloud Run functions. Learn a quick and secure way to connect services and build event-driven architectures with multiple trigger types (HTTP, Pub/Sub, and Eventarc). And get introduced to Eventarc Advanced, which provides centralized access control for your events with support for cross-project delivery.
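
For orientation, a minimal Cloud Run function that consumes a Pub/Sub message delivered as a CloudEvent (for example, via an Eventarc trigger) can look like the sketch below; it assumes the Python Functions Framework and the standard Pub/Sub message envelope.

```python
# Minimal sketch of a CloudEvent-triggered function (e.g. Pub/Sub via Eventarc).
import base64

import functions_framework


@functions_framework.cloud_event
def on_message(cloud_event):
    # Pub/Sub delivers the payload base64-encoded inside the CloudEvent data.
    payload = base64.b64decode(cloud_event.data["message"]["data"]).decode("utf-8")
    print(f"Received event {cloud_event['id']}: {payload}")
```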

This session dives into the latest advancements in securing and managing your Cloud Run workloads at enterprise scale. Join us to learn about new features and techniques to meet the highest security standards, strategies for managing large-scale deployments, and solutions to common issues like IP exhaustion. Plus, one of our customers will share their firsthand experience managing a massive fleet of Cloud Run workloads.

Join us for an interactive session where we’ll build, deploy, and scale inference apps. Imagine creating and launching generative AI apps that deliver personalized recommendations and stunning images, all with the unparalleled efficiency and scalability of serverless computing. You’ll learn how to build gen AI apps effortlessly using Gemini Code Assist; deploy gen AI apps in minutes on Cloud Run, using Vertex AI or on-demand, scale-to-zero serverless GPUs; and optimize the performance and cost of AI workloads by implementing best practices.

This talk demonstrates a fashion app that leverages the power of AlloyDB, Google Cloud’s fully managed PostgreSQL-compatible database, to provide users with intelligent recommendations for matching outfits. When users upload images of their clothes, the app generates styling insights on how to pair each outfit, along with real-time fashion advice. This is enabled through an intuitive contextual search (vector search) powered by AlloyDB and Google’s ScaNN index, which delivers fast vector search results with low-latency querying and response times. While we’re at it, we’ll showcase the power of the AlloyDB columnar engine on the joins the application needs to generate style recommendations. To complete the experience, we’ll use the Vertex AI Gemini API through the Spring and LangChain4j integrations for generative recommendations and a visual representation of the personalized style. The entire application is built on the Java Spring Boot framework and deployed serverlessly on Cloud Run, ensuring scalability and cost efficiency. This talk explores how these technologies work together to create a dynamic and engaging fashion experience.
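
The session’s application is written in Java with Spring Boot; purely to show the shape of the vector search it describes, here is a hedged Python sketch of a similarity query against AlloyDB using pgvector-style syntax. Table, column, and connection details are placeholders, and the query embedding would normally come from a Vertex AI embedding model.

```python
# Hedged sketch of a similarity query against an AlloyDB (PostgreSQL-compatible)
# instance; AlloyDB's ScaNN index accelerates exactly this kind of lookup.
import psycopg

# Placeholder embedding; in the app it would come from an embedding model.
query_embedding = [0.01, -0.02, 0.03]
vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

# Connection details are placeholders.
with psycopg.connect("host=10.0.0.5 dbname=fashion user=app password=secret") as conn:
    rows = conn.execute(
        """
        SELECT item_id, description
        FROM outfits
        ORDER BY embedding <=> %s::vector   -- cosine distance (pgvector operator)
        LIMIT 5
        """,
        (vector_literal,),
    ).fetchall()

for item_id, description in rows:
    print(item_id, description)
```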

Discover how to transition from legacy, siloed systems to a unified, scalable, and insights-driven data platform on GCP. This session will cover best practices for data migration, overcoming common challenges, and integrating SaaS and third-party solutions using key Google Cloud services like BigQuery, Data Fusion, Cloud Storage, Application Integration, Cloud Run, Cloud Build, and Artifact Registry.

This Session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.

Learn how a team of developers built a serverless toolkit with Cloud Run to simplify application development and deployment. This session shares best practices from Shopify for creating a robust toolkit that empowers developers to seamlessly ship serverless applications while integrating with essential Google Cloud services and adhering to security best practices. Discover how to enhance scalability, reduce toil, and boost productivity.

Build modern applications with the power of Oracle Database 23ai and Google Cloud’s Vertex AI and Gemini foundation models. Learn key strategies to integrate Google Cloud’s native development tools and services, including Kubernetes, Cloud Run, and BigQuery, seamlessly with Oracle Database 23ai and Autonomous Database in modern application architectures. Cloud architects, developers, and DB administrators will gain actionable insights, best practices, and real-world examples to enhance performance and accelerate innovation with ODB@GC.

This Session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.

See how a small team can leverage Google Cloud to serve millions. As the former lead of Flutter and a mobile specialist, I started with little knowledge of Cloud. Using Google's Dart and Google Cloud, we've built a successful service across Cloud Run, Compute Engine, BigQuery, Storage, CDN, and more, and I'm here to share our learnings.

This hands-on lab guides you through building a captivating generative AI application using the Gemini API in Vertex AI. You'll leverage the Streamlit framework to create an interactive interface for generating stories, providing a seamless user experience. After testing your application locally in Cloud Shell, you'll deploy it to Cloud Run for scalable and reliable serving. This hands-on experience equips you with the skills to integrate Gemini with user interfaces and efficiently deploy your AI applications.
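
A minimal sketch of the kind of Streamlit + Gemini app such a lab builds might look like the following, assuming the Vertex AI Python SDK; the project, region, and model id are placeholders rather than lab specifics.

```python
# Hedged sketch: a tiny Streamlit story generator backed by Gemini on Vertex AI.
import streamlit as st
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # placeholders
model = GenerativeModel("gemini-1.5-flash")                  # placeholder model id

st.title("Story generator")
prompt = st.text_input("Give me a story premise")

if prompt:
    response = model.generate_content(f"Write a short story about: {prompt}")
    st.write(response.text)
```

From there, the Cloud Run step roughly amounts to containerizing the app and starting Streamlit on the port Cloud Run provides.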

If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!

Dive into the world of serverless GPUs with Cloud Run. This talk explores how Cloud Run delivers on-demand GPUs with unprecedented flexibility and cost efficiency. Learn how you can achieve optimal performance and resource utilization with rapid autoscaling and scaling to zero. And discover how Cloud Run GPUs can help you build AI inference apps with open models, high-performance computing, graphics rendering, and much more.
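
As a rough example of an inference app that benefits from on-demand GPUs, the sketch below serves an open model over HTTP with FastAPI and transformers, using the GPU when one is attached and falling back to CPU otherwise; the model id is a placeholder.

```python
# Hedged sketch of a GPU-friendly inference service for Cloud Run.
import torch
from fastapi import FastAPI
from transformers import pipeline

app = FastAPI()

# transformers device index: 0 = first GPU, -1 = CPU fallback.
device = 0 if torch.cuda.is_available() else -1
generator = pipeline("text-generation", model="google/gemma-2-2b-it", device=device)


@app.post("/generate")
def generate(payload: dict) -> dict:
    prompt = payload.get("prompt", "")
    out = generator(prompt, max_new_tokens=128)
    return {"completion": out[0]["generated_text"]}
```

Because the model loads at startup, a scale-to-zero deployment trades a cold-start delay for idle cost savings, which is part of the performance and cost balance described here.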

Developers love Cloud Run. In this demo-driven talk, you’ll discover why Cloud Run offers simplicity alongside flexibility for running your code. We’ll begin with a couple of basic getting-started concepts. Then we’ll go into “How do I” scenarios that cover every feature from Virtual Private Cloud (VPC) access to startup probes. Too much info? We’ll have codelabs for you to do at your own pace.

Transform your AI research into real-world applications with Google’s latest tools. This session explores the seamless integration of the Gemini API, Google AI Studio, Gemma, and Kaggle to accelerate your development workflow. Learn how to build and prototype models effortlessly, leverage lightweight open models, and collaborate with a thriving community. Discover how to deploy your experiments in production using Cloud Run and Vertex AI. Join us to bridge the gap between research and reality with Google AI.

This hands-on lab equips you with the practical skills to build and deploy a real-world AI-powered chat application leveraging the Gemini LLM APIs. You'll learn to containerize your application using Cloud Build, deploy it seamlessly to Cloud Run, and explore how to interact with the Gemini LLM to generate insightful responses. This hands-on experience will provide you with a solid foundation for developing engaging and interactive conversational applications.
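
For context, a chat backend of the kind this lab describes could be as small as the following Flask sketch, which calls the Gemini API and listens on the port Cloud Run injects; it assumes the google-generativeai package and a GEMINI_API_KEY environment variable, and the model id is a placeholder.

```python
# Hedged sketch: a minimal chat endpoint that calls Gemini and honors the
# PORT contract Cloud Run uses for containerized services.
import os

import google.generativeai as genai
from flask import Flask, jsonify, request

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model id

app = Flask(__name__)


@app.post("/chat")
def chat():
    user_message = request.get_json()["message"]
    reply = model.generate_content(user_message)
    return jsonify({"reply": reply.text})


if __name__ == "__main__":
    # Cloud Run provides the serving port in the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```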

If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!