How would you go about building Elasticsearch if you were starting the project in 2025? Decouple compute from storage; offload persistence and replication to a blob store such as S3, Google Cloud Storage, or Azure Blob Storage; add or remove instances dynamically; ship sensible defaults; and provide a crystal-clear, frictionless path for developers. That is exactly what we did with Elastic Serverless. In this session, you will discover how we redesigned Elasticsearch to do more, with a stateless architecture that can run queries directly against cold storage.
talk-data.com — Topic: serverless (57 tagged)
KAI is a data-cleaning solution for unstructured data, built in particular on Elasticsearch vector search and its recently launched BBQ mode. Stéphane NGO, CEO of k-ai, will explain why he moved from Elastic's Cloud Hosted offering to its Serverless version and what observations he has drawn from the switch.
What happens when you treat AI like a coding partner? In this talk, I’ll share how I used AI tools to build the London Improv Calendar - a fully serverless application on AWS (Lambda, DynamoDB, API Gateway, and more). This isn’t theory; it’s a practical, in-the-trenches account of working with AI as a solo engineer. We’ll cover what worked well, what fell flat, and where AI truly accelerated development. If you’re curious about pairing AI with serverless, or just want some real-world lessons you can apply to your own projects, this session is for you.
Most frameworks promise deploy anywhere, but usually only for HTTP routes. The moment you add WebSockets, queues, or message buses, things get messy fast. Join me for a talk on how to build backend platforms that are truly platform-agnostic: able to run anywhere, from a single process to multiple VMs to fully serverless in the cloud — and switch between them in minutes. We’ll dive into writing WebSocket code that can run serverless, building type-safe queues, and see live examples of this philosophy in action with Pikku.dev.
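The "type-safe queues" mentioned above can be sketched as a small TypeScript interface with swappable backends; the names here (`TypedQueue`, `InMemoryQueue`) are illustrative assumptions, not Pikku's actual API:

```typescript
// A minimal type-safe queue contract: handlers only ever see the
// message type they were registered for, checked at compile time.
interface TypedQueue<T> {
  publish(message: T): Promise<void>;
  subscribe(handler: (message: T) => Promise<void>): void;
}

// In-memory backend for a single process; a serverless backend could
// implement the same interface over SQS or Pub/Sub without touching
// any call sites.
class InMemoryQueue<T> implements TypedQueue<T> {
  private handlers: Array<(message: T) => Promise<void>> = [];

  async publish(message: T): Promise<void> {
    await Promise.all(this.handlers.map((h) => h(message)));
  }

  subscribe(handler: (message: T) => Promise<void>): void {
    this.handlers.push(handler);
  }
}
```

Switching from a single process to a cloud queue then means swapping the class behind the interface, not rewriting producers and consumers.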
Migrating from AWS or Azure to Google Cloud runtimes can feel like navigating a maze of complex services and dependencies. In this session, we’ll explore key considerations for migrating legacy applications, emphasizing the “why not modernize?” approach with a practical guide. We’ll share real-world examples of successful transformations. And we’ll go beyond theory with a live product demo that showcases migration tools, and a code assessment demo powered by Gemini that demonstrates how you can understand and modernize legacy code.
JavaScript gets a lot of flak for not being strongly typed. But if you’re running JavaScript in production today, you don’t need to wait for runtime errors to catch problems. TypeScript has turned JavaScript from a loosely typed language, where a variable can change from a string to a number without warning, into a strongly typed one. Now Zod and Effect are here to tame even the wildest unknown parameters from your users. We’ll demonstrate using these tools in an application and we’ll deploy that application to Google Cloud.
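The core pattern Zod generalizes is parsing an `unknown` value at the boundary and either proving its shape or failing loudly. A hand-rolled sketch of that idea in plain TypeScript (the `SignupRequest` shape is an illustrative assumption, and a real app would use Zod's schema API instead):

```typescript
// Take untrusted input from the outside world and narrow it to a
// concrete type; after parsing, the compiler knows the exact shape.
interface SignupRequest {
  email: string;
  age: number;
}

function parseSignup(input: unknown): SignupRequest {
  if (typeof input !== "object" || input === null) {
    throw new Error("expected an object");
  }
  const record = input as Record<string, unknown>;
  if (typeof record.email !== "string") {
    throw new Error("email must be a string");
  }
  if (typeof record.age !== "number") {
    throw new Error("age must be a number");
  }
  return { email: record.email, age: record.age };
}
```

Zod lets you declare this once as a schema and infer the static type from it, so the runtime check and the compile-time type can never drift apart.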
Leverage the flexibility of Cloud Run and its ease of use for your Apache Kafka workloads. In this session, we’ll introduce Cloud Run worker pools, a new resource specifically designed for non-request-based workloads, like Kafka consumers. Learn how worker pools, along with a self-hosted Kafka autoscaler, can enable fast and flexible scaling of your Kafka consumers by using Kafka queue metrics.
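The heart of a lag-based autoscaler like the one described is a small calculation over Kafka queue metrics. A sketch under stated assumptions (the parameter names are illustrative, not the actual autoscaler's configuration):

```typescript
// Decide how many consumer instances to run from consumer-group lag,
// clamped to the partition count: within a group, Kafka assigns at
// most one consumer per partition, so extra instances sit idle.
function desiredConsumers(
  totalLag: number,        // sum of lag across all partitions
  lagPerConsumer: number,  // lag one instance can drain per cycle
  partitionCount: number,  // upper bound on useful consumers
  minConsumers = 1
): number {
  const byLag = Math.ceil(totalLag / lagPerConsumer);
  return Math.max(minConsumers, Math.min(byLag, partitionCount));
}
```

An autoscaler polls lag on a schedule, runs a calculation like this, and resizes the worker pool accordingly; scale-to-zero is a policy choice on top of the same signal.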
There are cases when you can’t use Google Cloud services but still want all the benefits of AlloyDB integration with AI and to serve a local model directly to the database. In such cases, AlloyDB Omni deployed in a Kubernetes cluster can be a great solution, serving edge cases and keeping all communication between the database and the AI model local.
Cloud Run is an ideal platform for hosting AI applications – for example, you can use Cloud Run with AI frameworks like LangChain or Firebase Genkit to orchestrate calls to AI models on Vertex AI, vector databases, and other APIs. In this session, we’ll dive deep into building AI agents on Cloud Run to solve complex tasks and explore several techniques, including tool calling, multi-agent systems, memory state management, and code execution. We’ll showcase interactive examples using popular frameworks.
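Tool calling, one of the techniques listed above, reduces to a dispatch loop: the model names a tool and its arguments, the runtime executes it, and the result is fed back as the next observation. A framework-free sketch (the tool names and shapes here are illustrative, not any framework's API):

```typescript
// A tool registry mapping names to functions the agent may invoke.
type Tool = (args: Record<string, unknown>) => string;

const tools: Record<string, Tool> = {
  // Illustrative tools; a real agent might call Vertex AI models,
  // a vector database, or external APIs here.
  add: (args) => String(Number(args.a) + Number(args.b)),
  upper: (args) => String(args.text).toUpperCase(),
};

// What the model emits when it decides to use a tool.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// Execute a model-requested call and return the observation that
// would be appended to the conversation for the next model turn.
function runToolCall(call: ToolCall): string {
  const tool = tools[call.name];
  if (!tool) return `error: unknown tool ${call.name}`;
  return tool(call.args);
}
```

Frameworks like LangChain and Genkit wrap this loop with schema-described tools so the model knows what it may call and with which arguments.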
Madhive built their ad analytics and bidding infrastructure using databases and batch pipelines. When the pipeline lag got too long to bid effectively, they rebuilt from scratch with Google Cloud’s Managed Service for Apache Kafka. Join this session to learn about Madhive’s journey and dive deep into how the service works, how it can help you build streaming systems quickly and securely, and what migration looks like. This session is relevant for Kafka administrators and architects building event-sourcing platforms or event-driven systems.
Join us to discuss serverless computing and event-driven architectures with Cloud Run functions. Learn a quick and secure way to connect services and build event-driven architectures with multiple trigger types (HTTP, Pub/Sub, and Eventarc). And get introduced to Eventarc Advanced, which provides centralized access control over your events and supports cross-project delivery.
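Pub/Sub-triggered functions receive the message payload base64-encoded inside the event envelope, so the first step of any handler is decoding it. A minimal sketch, assuming a simplified envelope shape (a real event carries additional fields like `messageId` and publish time):

```typescript
// The Pub/Sub event envelope carries the payload base64-encoded
// under message.data; handlers decode it before doing real work.
interface PubSubEvent {
  message: { data: string; attributes?: Record<string, string> };
}

function decodePubSubData(event: PubSubEvent): string {
  return Buffer.from(event.message.data, "base64").toString("utf8");
}
```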
Join this session to discover how a phone plan selection app, built with Flutter and Firebase, leverages Gemini 2.0 to enhance and simplify the customer experience. Gain insights into the technical architecture, identify actionable strategies to implement similar AI-driven solutions in your own apps, and understand the key principles of using AI to enhance the customer experience.
Build more capable and reliable AI systems by combining context-aware retrieval-augmented generation (RAG) with agentic decision-making in an enterprise AI platform, all in Java! This session covers everything from architecture, context construction, and model routing to action planning, dynamic retrieval, and recursive reasoning, as well as the implementation of essential guardrails and monitoring systems for safe deployments. Learn about best practices, trade-offs, performance, and advanced techniques like evaluations and model context protocol.
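Context construction for RAG, as described above, starts by retrieving the chunks most similar to the query embedding. A minimal cosine-similarity retriever sketch (the session itself is in Java; the toy two-dimensional embeddings below stand in for vectors a real system would get from an embedding model):

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks by similarity to the query and keep the top-k
// as context to place in the prompt.
function topK(
  query: number[],
  chunks: Array<{ text: string; embedding: number[] }>,
  k: number
): string[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k)
    .map((c) => c.text);
}
```

Dynamic retrieval and model routing then layer policy on top of this primitive: which index to query, how many chunks to keep, and which model the assembled context is sent to.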
This session dives into the latest advancements in securing and managing your Cloud Run workloads at enterprise scale. Join us to learn about new features and techniques to meet the highest security standards, strategies for managing large-scale deployments, and solutions to common issues like IP exhaustion. Plus, one of our customers will share their firsthand experience managing a massive fleet of Cloud Run workloads.
Join us for an interactive session where we’ll build, deploy, and scale inference apps. Imagine creating and launching generative AI apps that deliver personalized recommendations and stunning images, all with the unparalleled efficiency and scalability of serverless computing. You’ll learn how to build gen AI apps effortlessly using Gemini Code Assist; deploy gen AI apps in minutes on Cloud Run, using Vertex AI or on-demand, scale-to-zero serverless GPUs; and optimize the performance and cost of AI workloads by implementing best practices.
This session dives into the world of on-demand Apache Spark on Google Cloud. We’ll explore its native integration with BigQuery, its new capabilities, and the benefits of using Spark for AI and machine learning (ML) workloads. We’ll discuss why Spark is a good choice for large-scale data processing, distributed training, and distributed inference. And we’ll learn from Trivago how they leveraged Spark and BigQuery together to simplify their AI and ML workflows.
Discover how Elastic Cloud Serverless and Google Vertex AI empower the creation of AI-driven search applications with effortless scalability. This session explores Elastic's intuitive serverless architecture and dynamic scaling, integrating with Google Vertex AI to create world-class search experiences. Learn how this powerful partnership simplifies deployments and accelerates innovation for modern search, observability, and security workloads.
This Session is hosted by a Google Cloud Next Sponsor.
Leveraging real-time data in AI and machine learning (ML) can give you a competitive edge. This session explores how Shopify and Palo Alto Networks leverage real-time data and AI with BigQuery and Dataflow ML to transform customer experiences and drive innovation. Discover how these companies collect, process, and analyze real-time data to achieve significant business outcomes, and learn how to apply similar strategies in your organization.
Learn how a team of developers built a serverless toolkit with Cloud Run to simplify application development and deployment. This session shares best practices from Shopify for creating a robust toolkit that empowers developers to seamlessly ship serverless applications while integrating with essential Google Cloud services and adhering to security best practices. Discover how to enhance scalability, reduce toil, and boost productivity.
Geographical redundancy is a key pillar of a resilient data architecture. With BigQuery cross-region dataset replication and managed disaster recovery, you can ensure your mission-critical apps remain available even in the unlikely event of a region-level infrastructure outage. Learn how this built-in capability protects your data and workloads against regional outages and ensures uninterrupted data access for your organization.