talk-data.com

Topic: interest-app-dev (253 tagged activities)

Activity trend: peak of 170 activities/quarter, 2020-Q1 to 2026-Q1

Activities (253 · newest first)

Bring your laptop and join us for an interactive demo on how to apply large language models (LLMs) from the Vertex AI Model Garden to a business use case, and learn best practices for monitoring these models in production. We’ll work through an exercise in Colab Enterprise notebooks and learn how to use out-of-the-box tools to monitor RED (rate, error, duration) metrics, configure alerts, and track the rate of successful predictions to ensure a Vertex AI model keeps serving reliably in production.
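
The RED metrics named above are simple aggregations over prediction records. A minimal sketch of what each one measures (a hand-rolled illustration only — the session uses Cloud Monitoring's out-of-the-box tooling, and the type and function names here are hypothetical):

```typescript
// Hypothetical sketch of RED metrics over a window of prediction requests.
// In practice Cloud Monitoring derives these from Vertex AI endpoint
// metrics; this only illustrates what each signal means.

type PredictionRecord = { durationMs: number; succeeded: boolean };

function redMetrics(records: PredictionRecord[], windowSeconds: number) {
  // Rate: requests per second over the observation window.
  const rate = records.length / windowSeconds;
  // Errors: fraction of requests that did not succeed.
  const errors = records.filter(r => !r.succeeded).length;
  const errorRate = records.length === 0 ? 0 : errors / records.length;
  // Duration: median latency as a representative percentile.
  const durations = records.map(r => r.durationMs).sort((a, b) => a - b);
  const p50DurationMs = durations[Math.floor(durations.length / 2)] ?? 0;
  return { rate, errorRate, p50DurationMs };
}
```

An alert on `errorRate` or on a high duration percentile is then a threshold check on these aggregates.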

Firestore with MongoDB compatibility is a serverless database service designed to maximize scalability, high availability, and performance without the hidden costs of capacity planning. This session demonstrates the new Firestore with MongoDB compatibility capabilities and discusses how Dialpad has built an AI-powered customer communications platform on Firestore over the last 14 years to grow a successful, performant, reliable business.

Learn how to manage security controls and licenses for thousands of users, and tie it all together with APIs. We’ll show you ways to manage developer access more efficiently, build custom management integrations, and keep your CISO happy at the same time. We’ll also demo the new Gemini Code Assist integration with Apigee, which lets developers use Gemini Code Assist chat to generate context-aware OpenAPI specifications that reuse components from other APIs in their organization for efficiency and reference organizational security standards.

Join us for an in-depth session on Firebase Genkit, an open source framework that simplifies the development of AI-powered applications. Discover how to use the Node.js and Go SDKs to build intelligent chatbots, multimodal content generators, streamlined automation workflows, and agentive experiences. We'll demonstrate how Genkit's unified interface seamlessly integrates Google's Gemini and Imagen models, self-hosted Ollama options, and a variety of popular models from Vertex AI Model Garden. 

Migrating from AWS or Azure to Google Cloud runtimes can feel like navigating a maze of complex services and dependencies. In this session, we’ll explore key considerations for migrating legacy applications, emphasizing the “why not modernize?” approach with a practical guide. We’ll share real-world examples of successful transformations. And we’ll go beyond theory with a live product demo that showcases migration tools, and a code assessment demo powered by Gemini that demonstrates how you can understand and modernize legacy code.

JavaScript gets a lot of flak for not being strongly typed. But if you’re running JavaScript in production today, you don’t need to wait for runtime errors to catch problems. TypeScript layers static types onto JavaScript, a loosely typed language where a variable can change from a string to a number without warning. Now Zod and Effect are here to tame even the wildest unknown parameters from your users. We’ll demonstrate using these tools in an application and deploy that application to Google Cloud.
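
The core idea behind Zod can be shown with a tiny hand-rolled validator: check `unknown` input at runtime before it enters typed code. This is a simplified stand-in, not Zod's API (Zod itself provides composable schemas like `z.object({...})` with inferred static types):

```typescript
// Minimal runtime validation of unknown user input, illustrating the
// idea Zod generalizes. Types and function names here are hypothetical.

type User = { name: string; age: number };

type ParseResult =
  | { ok: true; value: User }
  | { ok: false; error: string };

function parseUser(input: unknown): ParseResult {
  if (typeof input !== "object" || input === null) {
    return { ok: false, error: "expected an object" };
  }
  const record = input as Record<string, unknown>;
  if (typeof record.name !== "string") {
    return { ok: false, error: "name must be a string" };
  }
  if (typeof record.age !== "number" || !Number.isFinite(record.age)) {
    return { ok: false, error: "age must be a finite number" };
  }
  // Past this point the value is safely narrowed to User.
  return { ok: true, value: { name: record.name, age: record.age } };
}
```

With Zod, the schema replaces the hand-written checks and the static type is inferred from it, so the compile-time and runtime views can't drift apart.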

Debug Google Kubernetes Engine (GKE) apps like a pro! This hands-on lab covers using Cloud Logging & Monitoring to detect, diagnose, and resolve issues in a microservices application deployed on GKE. Learn practical troubleshooting workflows.

If you register for a Learning Center lab, please ensure that you sign up for Google Cloud Skills Boost accounts with both your work and personal email addresses. You will also need to authenticate your account (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!

Tired of generic code suggestions? Learn how to customize Gemini Code Assist using your source code repositories. This session covers best practices for generating new code, and retrieving and reusing existing code, with Gemini Code Assist code-customization capabilities. Boost productivity, enforce consistency, and reduce cognitive load with a truly personalized AI coding assistant.

Unlock the power of generative AI with retrieval augmented generation (RAG) on Google Cloud. In this session, we’ll navigate key architectural decisions to deploy and run RAG apps: from model and app hosting to data ingestion and vector store choice. We’ll cover reference architecture options – from an easy-to-deploy approach with Vertex AI RAG Engine, to a fully managed solution on Vertex AI, to a flexible DIY topology with Google Kubernetes Engine and open source tools – and compare trade-offs between operational simplicity and granular control.
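
Whichever topology you choose, the retrieval core of a RAG app is the same: embed the query, fetch the nearest chunks, and assemble a grounded prompt. A minimal sketch of that core, with toy vectors standing in for a real embedding model and an in-memory array standing in for a managed vector store (all names hypothetical):

```typescript
// Hypothetical sketch of the retrieval step shared by all RAG topologies.
// Real deployments swap in an embedding model and a vector store.

type Chunk = { text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query embedding.
function retrieve(query: number[], store: Chunk[], k: number): Chunk[] {
  return [...store]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

// Assemble a prompt that grounds the model in the retrieved passages.
function buildPrompt(question: string, context: Chunk[]): string {
  const passages = context.map((c, i) => `[${i + 1}] ${c.text}`).join("\n");
  return `Answer using only these passages:\n${passages}\n\nQuestion: ${question}`;
}
```

The architectural choices the session compares mostly concern where each of these three steps runs and who operates it, not the shape of the loop itself.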

Discover how some of the world’s most innovative companies modernized and transformed their applications with the power of Firestore, Firebase, and cutting-edge generative AI. Learn how they leveraged the latest technologies, such as edge computing and AI, to enhance customer experiences at every stage of the customer journey. Explore their innovative architecture and gain insights into building modern, engaging applications that deliver exceptional customer experiences.

This session explores the evolution of data management on Kubernetes for AI and machine learning (ML) workloads and modern databases, including Google’s leadership in this space. We’ll discuss key challenges and solutions, including persistent storage with solutions like checkpointing and Cloud Storage FUSE, and accelerating data access with caching. Customers Qdrant and Codeway will share how they’ve successfully leveraged these technologies to improve their AI, ML, and database performance on Google Kubernetes Engine (GKE).

The rise of AI-powered code generation tools presents a compelling alternative to traditional UI prototyping frameworks. This talk explores the question: Is it time to ditch the framework overhead and embrace core web technologies (such as HTML, CSS, JavaScript) for faster, more flexible prototyping? We’ll examine the trade-offs between structured frameworks and the granular control offered by a “bare metal” approach, augmented by AI assistance. Learn when leveraging AI with core tech becomes the smarter choice, enabling rapid iteration and bespoke UI designs, and when frameworks still reign supreme.

Did you know that GitHub Copilot lets you use Google Gemini as an AI programming assistant? Learn tips and tricks of prompting, shaping the context space, injecting third-party knowledge sources, and other ways that GitHub developers maximize their (and their team's) use of Gemini in VS Code and other IDEs.

This Session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.

Experience the power of AlloyDB Omni, a cutting-edge PostgreSQL-compatible database designed for multicloud and hybrid cloud environments. This session explores how AlloyDB Omni accelerates the development of modern applications, enabling generative AI experiences, efficient vector search, real-time operational analytics, and scalable transactional performance. We’ll also showcase how to run your applications on multiple clouds using Aiven’s seamless managed service, and how to supercharge hybrid cloud deployments with cloud-ready partners.

Are you a site reliability engineer (SRE) for an organization running generative AI workloads? If gen AI is transforming your workloads, are your SRE skills keeping pace? This session is a must for SREs facing the unique challenges of gen AI. Learn to adapt the four golden signals – tackling latency in multistage pipelines, user satisfaction in nondeterministic systems, and new error types like hallucinations. Discover how Google Cloud Observability and Firebase Genkit AI monitoring can help you master gen AI SRE.
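
The adapted signals described above can be sketched as aggregations over per-request records, where latency is summed across pipeline stages and the error budget counts a new failure mode alongside transport errors. The `flaggedUngrounded` label below is a hypothetical output of your own grounding evaluator, not an out-of-the-box metric, and all names are illustrative:

```typescript
// Hypothetical sketch of golden signals adapted to a multistage
// gen-AI pipeline (e.g. retrieval stage + generation stage).

type GenAiRequest = {
  stageLatenciesMs: Record<string, number>; // latency per pipeline stage
  failed: boolean;                          // conventional errors
  flaggedUngrounded: boolean;               // gen-AI error type, e.g. hallucination
};

// End-to-end latency is the sum of the stage latencies.
function totalLatencyMs(r: GenAiRequest): number {
  return Object.values(r.stageLatenciesMs).reduce((a, b) => a + b, 0);
}

// Tail latency via a simple p95 over observed values.
function p95(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(0.95 * sorted.length))];
}

// Error rate counts both hard failures and flagged ungrounded answers.
function errorRate(requests: GenAiRequest[]): number {
  const bad = requests.filter(r => r.failed || r.flaggedUngrounded).length;
  return bad / requests.length;
}
```

The point of the adaptation is the widened definition of "error": a request that returns quickly and successfully can still count against the budget if its answer fails a grounding check.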

Many organizations are scrambling to adopt artificial intelligence tools across their teams, and like any new technology rollout, they are encountering challenges, both expected and unexpected. CME Group recently rolled out Gemini Code Assist to one of their large software development organizations and is excited to share takeaways around people, process, and tools. Topics include compliance and information security considerations; managing rollout and adoption by starting small and scaling; and how these tools can help reshape the workday of your teams. No matter where you are in your AI adoption journey, you're sure to learn something new!

Are you ready to get hands-on with Google Cloud’s AI tools? In this 2-hour gHack, you will work in teams of 4 to build a Formula E Race Analysis System from scratch using a variety of our AI and data tools. Teams will find the answers they need by searching, learning, and collaborating together. 3-2-1 lights out and away we go!


In an era where Agentic AI dominates headlines, business leaders need clarity on its transformative potential beyond the hype. This session cuts through the buzz to showcase how enterprises are leveraging AI agents to revolutionize operations, from customer service to IT operations. Through real-world examples and proven frameworks, learn how to identify immediate opportunities, implement strategic solutions, and build a roadmap for long-term success. Leave with actionable insights to transform your organization from AI-aware to AI-driven.

This Session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.

Leverage the flexibility of Cloud Run and its ease of use for your Apache Kafka workloads. In this session, we’ll introduce Cloud Run worker pools, a new resource specifically designed for non-request-based workloads, like Kafka consumers. Learn how worker pools, along with a self-hosted Kafka autoscaler, can enable fast and flexible scaling of your Kafka consumers by using Kafka queue metrics.
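
The scaling decision described above boils down to sizing the worker pool from Kafka queue metrics. A hypothetical sketch of the logic such an autoscaler might apply (function and parameter names are illustrative, not a Cloud Run or Kafka API), keeping in mind that consumers beyond the partition count sit idle in a consumer group:

```typescript
// Hypothetical lag-based sizing for a pool of Kafka consumers.
// Inputs would come from Kafka consumer-group lag metrics.

function desiredConsumers(
  partitionLags: number[],      // per-partition consumer lag in messages
  targetLagPerConsumer: number, // lag one consumer is expected to absorb
  maxConsumers: number,         // operator-set ceiling on pool size
): number {
  const totalLag = partitionLags.reduce((a, b) => a + b, 0);
  // Scale proportionally to backlog...
  const byLag = Math.ceil(totalLag / targetLagPerConsumer);
  // ...but never beyond the partition count, where extra consumers idle.
  const cap = Math.min(maxConsumers, partitionLags.length);
  return Math.max(1, Math.min(byLag, cap));
}
```

For example, 300 messages of total lag across 3 partitions with a 50-message target would ask for 6 consumers but be capped at 3, one per partition.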