The rise of AI demands an easier and more efficient approach to data management. Discover how small IT teams are transforming their data foundations with BigQuery to support AI-powered use cases across all data types – from structured data to unstructured data like images and text (multimodal). Learn from peers across industries and geographies why they migrated to BigQuery and how it helped them accelerate time to insights, reduce data management complexity, and unlock the full potential of AI.
Build more capable and reliable AI systems by combining context-aware retrieval-augmented generation (RAG) with agentic decision-making in an enterprise AI platform, all in Java! This session covers everything from architecture, context construction, and model routing to action planning, dynamic retrieval, and recursive reasoning, as well as the implementation of essential guardrails and monitoring systems for safe deployments. Learn about best practices, trade-offs, performance, and advanced techniques like evaluations and model context protocol.
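As a rough illustration of the loop such a system runs, the sketch below combines retrieval with an agentic decide-or-retrieve-again step and a bounded-iteration guardrail. The Retriever and ChatModel interfaces are illustrative stand-ins, not any particular framework's API.

```java
import java.util.List;

/**
 * Minimal sketch of an agentic RAG loop: retrieve context, ask the model to
 * either answer or request another retrieval, and cap iterations as a guardrail.
 * Retriever and ChatModel are illustrative interfaces, not a specific SDK.
 */
public class AgenticRagSketch {

    interface Retriever { List<String> retrieve(String query); }
    interface ChatModel { String generate(String prompt); }

    static String run(String question, Retriever retriever, ChatModel model, int maxSteps) {
        String query = question;
        StringBuilder context = new StringBuilder();
        for (int step = 0; step < maxSteps; step++) {          // guardrail: bounded recursion
            retriever.retrieve(query).forEach(doc -> context.append(doc).append('\n'));
            String prompt = "Context:\n" + context
                    + "\nQuestion: " + question
                    + "\nIf the context is sufficient, reply 'ANSWER: <answer>'."
                    + "\nOtherwise reply 'SEARCH: <new query>'.";
            String reply = model.generate(prompt);
            if (reply.startsWith("ANSWER:")) return reply.substring("ANSWER:".length()).trim();
            if (reply.startsWith("SEARCH:")) { query = reply.substring("SEARCH:".length()).trim(); continue; }
            return reply;                                       // fall back to the raw reply
        }
        return "No confident answer within " + maxSteps + " steps.";
    }

    public static void main(String[] args) {
        Retriever retriever = q -> List.of("Doc about " + q);   // toy in-memory retriever
        ChatModel model = prompt -> "ANSWER: demo response";    // stub model for illustration
        System.out.println(run("What is context-aware RAG?", retriever, model, 3));
    }
}
```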
Organizations are facing sophisticated and growing cyberthreats from commercially driven ransomware groups and state-sponsored adversaries that legacy productivity solutions struggle to defend against. Learn how Google’s multilayered approach – AI threat defenses, a reduced attack surface, and built-in security controls in Google Workspace, Chrome Enterprise, and ChromeOS – can effectively combat the most prevalent cyberthreats and block malicious actors from gaining access to your data and disrupting your business.
Audiences around the world have almost limitless access to content that’s only a click, swipe, or voice command away. Companies are embracing cloud capabilities to evolve from traditional media companies into media-tech and media-AI companies. Join us to discover how the cloud is maximizing personalization and monetization to enable the next generation of AI-powered streaming experiences for audiences everywhere.
Get the inside story of Yahoo’s data lake transformation. As a Hadoop pioneer, Yahoo’s move to Google Cloud is a significant shift in data strategy. Explore the business drivers behind this transformation, technical hurdles encountered, and strategic partnership with Google Cloud that enabled a seamless migration. We’ll uncover key lessons, best practices for data lake modernization, and how Yahoo is using BigQuery, Dataproc, Pub/Sub, and other services to drive business value, enhance operational efficiency, and fuel their AI initiatives.
In this session, we’ll explore Google’s latest developments in Google Kubernetes Engine (GKE) that enable unprecedented scale and performance for AI workloads. We’ll dive into how Anthropic leverages these capabilities to manage mega-scale Kubernetes clusters, orchestrate diverse workloads, and achieve breakthrough efficiency optimizations.
Generative AI agents have emerged as the leading architecture for implementing complex application functionality. Tools are the way that agents access the data and systems they need. But building and deploying tools at scale brings new challenges. Learn how MCP Toolbox for Databases, an open source server for gen AI tool management, enables platforms like LangGraph and Vertex AI to easily connect to enterprise databases.
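To make the idea concrete, here is a plain-JDBC sketch of the kind of parameterized, read-only database tool an agent might invoke; MCP Toolbox defines and serves such tools declaratively for frameworks like LangGraph. The connection string, credentials, and orders table below are purely hypothetical, and the code shows the underlying idea rather than the Toolbox's actual interface.

```java
import java.sql.*;
import java.util.*;

/**
 * Illustration only: a parameterized, read-only database "tool" an agent
 * framework could call. Connection details and the orders table are
 * hypothetical placeholders.
 */
public class DatabaseToolSketch {

    // Hypothetical connection string; replace with your database's JDBC URL.
    private static final String JDBC_URL = "jdbc:postgresql://localhost:5432/appdb";

    /** A "tool": fixed SQL template plus a validated parameter the agent supplies. */
    static List<Map<String, Object>> searchOrdersByCustomer(String customerId) throws SQLException {
        String sql = "SELECT order_id, status, total FROM orders WHERE customer_id = ? LIMIT 20";
        try (Connection conn = DriverManager.getConnection(JDBC_URL, "app", "secret");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, customerId);                    // parameter binding, no string concatenation
            try (ResultSet rs = stmt.executeQuery()) {
                List<Map<String, Object>> rows = new ArrayList<>();
                while (rs.next()) {
                    Map<String, Object> row = new LinkedHashMap<>();
                    row.put("order_id", rs.getObject("order_id"));
                    row.put("status", rs.getString("status"));
                    row.put("total", rs.getBigDecimal("total"));
                    rows.add(row);
                }
                return rows;
            }
        }
    }
}
```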
Join us for an interactive session where we’ll build, deploy, and scale inference apps. Imagine creating and launching generative AI apps that deliver personalized recommendations and stunning images, all with the unparalleled efficiency and scalability of serverless computing. You’ll learn how to build gen AI apps effortlessly using Gemini Code Assist; deploy gen AI apps in minutes on Cloud Run, using Vertex AI or on-demand, scale-to-zero serverless GPUs; and optimize the performance and cost of AI workloads by implementing best practices.
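As a minimal sketch of such an inference app, the example below assumes the Vertex AI Java SDK and a placeholder project, region, and model name; Cloud Run supplies the PORT environment variable and scales instances to zero when idle.

```java
import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of a Cloud Run-style service that proxies prompts to a Gemini model
 * through the Vertex AI Java SDK. Project ID, region, and model name are
 * placeholder assumptions.
 */
public class InferenceService {

    public static void main(String[] args) throws IOException {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        String project = System.getenv().getOrDefault("GOOGLE_CLOUD_PROJECT", "my-project");

        // One client per instance; Cloud Run scales instances (down to zero) on demand.
        VertexAI vertexAi = new VertexAI(project, "us-central1");
        GenerativeModel model = new GenerativeModel("gemini-2.0-flash", vertexAi);

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/generate", exchange -> {
            String prompt = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            String reply;
            try {
                GenerateContentResponse response = model.generateContent(prompt);
                reply = ResponseHandler.getText(response);
            } catch (IOException e) {
                reply = "model error: " + e.getMessage();
            }
            byte[] body = reply.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) { out.write(body); }
        });
        server.start();
    }
}
```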
Learn how to evaluate and optimize the impact of AI-assisted software development with Gemini Code Assist. This session covers processes for measuring AI-assistance effectiveness, exploring quantitative and qualitative measures available with Gemini Code Assist, and integrating with Cloud Monitoring and Cloud Logging. Discover how to leverage DevOps Research and Assessment (DORA) metrics to track productivity gains. Whether you’re a developer, team lead, architect, or IT manager, you’ll gain insights into measuring the impact of AI assistance.
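As one possible starting point, the sketch below pulls a usage time series from Cloud Monitoring with the Java client so it can be correlated with DORA metrics. The metric type in the filter is a placeholder assumption, not an official Gemini Code Assist metric name.

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.util.Timestamps;

/**
 * Sketch of reading an assistance-usage time series from Cloud Monitoring.
 * The project ID and the metric type in the filter are placeholders.
 */
public class CodeAssistMetricsSketch {

    public static void main(String[] args) throws Exception {
        String projectId = "my-project"; // placeholder project ID
        try (MetricServiceClient client = MetricServiceClient.create()) {
            long now = System.currentTimeMillis();
            TimeInterval lastWeek = TimeInterval.newBuilder()
                    .setStartTime(Timestamps.fromMillis(now - 7L * 24 * 60 * 60 * 1000))
                    .setEndTime(Timestamps.fromMillis(now))
                    .build();
            ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
                    .setName(ProjectName.of(projectId).toString())
                    // Placeholder metric type; substitute the metric your project exposes.
                    .setFilter("metric.type=\"example.googleapis.com/code_assist/suggestions_accepted\"")
                    .setInterval(lastWeek)
                    .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
                    .build();
            for (TimeSeries series : client.listTimeSeries(request).iterateAll()) {
                System.out.println(series.getMetric().getType() + ": "
                        + series.getPointsCount() + " data points");
            }
        }
    }
}
```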
AI-enabled browser agents are in the news now, but it’s not always clear how they solve real-world problems. In this session, we’ll share our experience building a web browser agent by integrating Gemini into an end-to-end service that follows text instructions to take actions in a web application. We’ll take you through our journey of creating the agent, share the research that inspired us, and show how we’ve used the system to tackle practical problems like validating user flows in the UI and semantically checking web links.
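A simplified sketch of the observe-decide-act loop behind such an agent follows, using Selenium to drive the browser and a stand-in ChatModel interface where a Gemini call would go. The CLICK/TYPE/DONE action grammar is an assumption for illustration, not the protocol described in the session.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * Sketch of a browser agent loop: observe page state, ask the model for the
 * next action, execute it. ChatModel stands in for a Gemini call; the action
 * grammar is illustrative.
 */
public class BrowserAgentSketch {

    interface ChatModel { String generate(String prompt); }

    static void run(String instruction, String startUrl, ChatModel model, int maxSteps) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get(startUrl);
            for (int step = 0; step < maxSteps; step++) {
                String prompt = "Instruction: " + instruction
                        + "\nPage HTML:\n" + driver.getPageSource()
                        + "\nReply with one action: CLICK <css>, TYPE <css> <text>, or DONE.";
                String[] action = model.generate(prompt).trim().split("\\s+", 3);
                switch (action[0]) {
                    case "CLICK" -> driver.findElement(By.cssSelector(action[1])).click();
                    case "TYPE"  -> driver.findElement(By.cssSelector(action[1])).sendKeys(action[2]);
                    default      -> { return; }               // DONE or unrecognized: stop
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```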
Imagine starting your workday with AI agents at your fingertips, from right within your browser. Join us in this session to learn how Google Agentspace, seamlessly integrated with Chrome and NotebookLM, can change the way you approach work. Think of automating your personal workflows, accessing critical information instantly, and working in unison with AI – all within your familiar browser environment.
Is your platform ready for the scale of rapidly evolving models and agents? In this session, we’ll explore strategies for scaling your cloud-native AI platform – empowering teams to leverage an increasing variety of AI models and agent frameworks. We’ll dive into tools and practices for maintaining control and cost efficiency while enabling AI engineering teams to iterate quickly on Google Kubernetes Engine (GKE). We’ll also explore how NVIDIA NIM microservices deliver optimized inference with minimal tuning.
This Session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.
According to Andreessen Horowitz, 93% of Fortune 500 companies surveyed currently use three or more AI model providers, and AI agents are now poised to proliferate and become standard. With this massive shift in enterprise AI adoption and the pace of change, enterprises need to modernize their network and security to unlock the value that AI generates and keep pace with rapid changes. Learn how to make your network and security AI-ready, plus discover new innovations to reduce infrastructure costs, improve user experience, enable faster developer velocity, and protect your AI infrastructure from new security threats.
This talk demonstrates a fashion app that leverages AlloyDB, Google Cloud’s fully managed PostgreSQL-compatible database, to give users intelligent recommendations for matching outfits. When users upload photos of their clothes, the app generates styling insights on how to pair each item, along with real-time fashion advice. This is enabled by contextual (vector) search powered by AlloyDB and Google’s ScaNN index, which delivers fast vector search results and low-latency queries. We’ll also showcase the AlloyDB columnar engine on the joins the application needs to generate style recommendations. To complete the experience, we’ll call the Vertex AI Gemini API through its Spring and LangChain4j integrations for generative recommendations and a visual representation of the personalized style. The entire application is built on the Java Spring Boot framework and deployed serverlessly on Cloud Run for scalability and cost efficiency. This talk explores how these technologies work together to create a dynamic, engaging fashion experience.
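For a sense of what the contextual search step might look like, here is a hedged JDBC sketch that ranks apparel rows by cosine distance to a query embedding using pgvector's operator on AlloyDB. The table, columns, and connection details are illustrative, and the ScaNN index DDL is omitted.

```java
import java.sql.*;
import java.util.*;

/**
 * Sketch of the contextual (vector) search step: given an embedding of the
 * uploaded garment, find the closest items in an AlloyDB table using
 * pgvector's cosine-distance operator. Table and column names are
 * illustrative; a ScaNN index on the embedding column would accelerate the
 * ORDER BY, but the index DDL is not shown here.
 */
public class OutfitVectorSearch {

    private static final String JDBC_URL =
            "jdbc:postgresql://<alloydb-host>:5432/fashion"; // placeholder host and database

    static List<String> findMatchingItems(float[] queryEmbedding, int k) throws SQLException {
        // pgvector accepts a bracketed literal such as '[0.12,0.34,...]'.
        String vectorLiteral = "[" + joinFloats(queryEmbedding) + "]";
        String sql = "SELECT item_name FROM apparel "
                   + "ORDER BY embedding <=> ?::vector LIMIT ?";
        try (Connection conn = DriverManager.getConnection(JDBC_URL, "app", "secret");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, vectorLiteral);
            stmt.setInt(2, k);
            try (ResultSet rs = stmt.executeQuery()) {
                List<String> matches = new ArrayList<>();
                while (rs.next()) matches.add(rs.getString("item_name"));
                return matches;
            }
        }
    }

    private static String joinFloats(float[] values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(values[i]);
        }
        return sb.toString();
    }
}
```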
Ever wondered what your work interface with AI agents will look like? Join us to learn about the latest advancements in Google Agentspace – your launch point for enterprise-ready AI agents. In this session, we’ll cover the exciting new capabilities and ready-to-use agents in Agentspace, and the new use cases enabled by it.
World models represent a paradigm shift in artificial intelligence, moving beyond passive data consumption to active, predictive understanding of environments. These models enable AI agents to simulate potential futures, plan strategically, and learn more efficiently in complex, dynamic scenarios. In this session, Tim Brooks, Research Scientist at Google DeepMind, will explore the current state of world model research and illuminate the exciting frontiers that lie ahead.
Join us as we explore the intersection of computer use capabilities and the broader challenge of building effective AI agents. We'll examine how these developments map to the core requirements of AI agents: the ability to plan through strong reasoning and contextual understanding, act via direct computer interaction and tool use, and reflect through learning from experience. We'll share insights from recent implementations, and explore how emerging capabilities in areas like computer use are shaping the development of more capable AI systems.
Discover the power of Google Cloud instances running on the latest Intel Xeon processors. This course will introduce you to Intel’s optimization tools, designed to help you manage and optimize your infrastructure with unmatched efficiency and performance. Learn how to leverage these cutting-edge technologies to enhance your cloud computing capabilities and drive your business forward.
Move your generative AI projects from proof of concept to production. In this interactive session, you’ll learn how to automate key AI lifecycle processes—evaluation, serving, and RAG—to accelerate your real-world impact. Get hands-on advice from innovative startups and gain practical strategies for streamlining workflows and boosting performance.
Learn how LG AI Research uses Google Cloud AI Hypercomputer to build its EXAONE family of LLMs and innovative agentic AI experiences based on those models. EXAONE 3.5, a class of bilingual models that learn and understand both Korean and English, has recorded world-class performance in Korean. The collaboration between LG AI Research and Google Cloud enabled LG to significantly enhance model performance, reduce inference time, and improve resource efficiency through Google Cloud's easy-to-use, scalable infrastructure.