talk-data.com

Topic: LLM (Large Language Models)

Tags: nlp, ai, machine_learning

234 tagged activities

Activity Trend: peak of 158 activities per quarter (2020-Q1 to 2026-Q1)

Activities

Showing filtered results

Filtering by: Google Cloud Next '25

Simplify blockchain development with generative AI on Google Cloud. In this interactive session, you’ll learn how Gemini AI helps generate queries for BigQuery blockchain datasets and analyzes real-time blockchain data. See how Blockscope is using Gemini to conduct forensic analysis of blockchain data. Live demos will show you how to supercharge your Web3 projects, whether you're a blockchain veteran or just starting out.
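
For a rough sense of what that pattern looks like in code, here is a hedged sketch that asks Gemini (via the Vertex AI SDK for Python) to draft a query against the public bigquery-public-data.crypto_ethereum dataset and then runs it with the BigQuery client; the project ID, model name, and prompt are illustrative and not taken from the session.

```python
# Hypothetical sketch: ask Gemini for a BigQuery query over a public
# blockchain dataset, then run it. Project ID, model name, and prompt
# are illustrative.
import vertexai
from vertexai.generative_models import GenerativeModel
from google.cloud import bigquery

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

prompt = (
    "Write a BigQuery Standard SQL query against "
    "`bigquery-public-data.crypto_ethereum.transactions` that returns the "
    "10 largest transactions by value from the last 7 days. "
    "Return only the SQL, with no explanation or markdown."
)
sql = model.generate_content(prompt).text.strip()
sql = sql.removeprefix("```sql").removeprefix("```").removesuffix("```").strip()

# In practice you would review/validate the generated SQL before running it.
client = bigquery.Client(project="my-project")
for row in client.query(sql).result():
    print(dict(row))
```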

In this hands-on lab, you'll explore the power of BigQuery Machine Learning with remote models like Gemini Pro to analyze customer reviews. Learn to extract keywords, assess sentiment, and generate insightful reports using SQL queries. Discover how to integrate Gemini Pro Vision to summarize and extract keywords from review images. By the end, you’ll gain skills in setting up Cloud resources, creating datasets, and prompting Gemini models to drive actionable insights and automated responses to customer feedback.
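
A condensed sketch of the lab's core pattern, assuming a BigQuery ML remote model over a Vertex AI connection has already been created; the project, dataset, table, and column names below are hypothetical.

```python
# Sentiment over customer reviews with a BigQuery ML remote model.
# Assumes a remote model `my-project.reviews.gemini_pro` already exists;
# all names here are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
SELECT
  review_id,
  ml_generate_text_llm_result AS sentiment
FROM ML.GENERATE_TEXT(
  MODEL `my-project.reviews.gemini_pro`,
  (
    SELECT review_id,
           CONCAT('Classify the sentiment of this review as positive, ',
                  'negative, or neutral: ', review_text) AS prompt
    FROM `my-project.reviews.customer_reviews`
    LIMIT 20
  ),
  STRUCT(0.2 AS temperature, TRUE AS flatten_json_output)
)
"""
for row in client.query(query).result():
    print(row.review_id, row.sentiment)
```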

If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite.

Learn how Salesforce and Google are partnering to deliver new features to customers through Agentforce, powered by Google AI. Speakers will discuss the upcoming availability of Gemini within Agentforce, giving customers flexibility in choosing models to drive Agents and Actions. They will also cover new capabilities within Agentforce, powered by Google services, which provide improved natural conversational experiences and real-time integration with Google Maps, Travel, and Weather.

This Session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.

Gemini 2.0 represents a significant leap forward in image understanding. Its object detection capabilities are dramatically faster than anything before, enabling near-instantaneous identification of visual elements. Combined with Gemini's advanced reasoning and access to external tools, this speed unlocks a vast range of new applications and possibilities, from rapid image search to complex visual problem-solving. Critically, Gemini 2.0 also possesses an experimental capacity for 3D scene understanding, allowing it to interpret spatial relationships and depth, unlocking a wealth of new possibilities across diverse domains.

This talk will discuss how to efficiently set up and deploy Google Cloud TPUs for training AI and LLM workloads. Using scripts and existing tools to automate the process, you'll learn how to quickly build a fully functional TPU training environment. Additionally, the session offers insights on optimizing your cloud environment for maximum training efficiency, accelerating your AI and LLM initiatives.

Good customer experiences are a crucial factor in business growth and efficiency. Learn how to deliver exceptional customer outcomes with Google's Customer Engagement Suite (CES). Powered by Gemini, CES combines advanced conversational AI with multimodal, omnichannel capabilities to enable faster, more personalized digital experiences. This session explores CES value for customers and partners, go-to-market strategies, and real-world success stories.

This session showcases how Gemini Code Assist revolutionizes end-to-end Java application development. Join us to learn how to accelerate each development stage – from backend to frontend and testing. Discover how to leverage Gemini code generation, completion, and debugging features. Explore how to enhance productivity and build robust, high-quality applications faster. And take away practical methods and techniques for integrating Gemini Code Assist into your workflow.

Cloud Run is an ideal platform for hosting AI applications – for example, you can use Cloud Run with AI frameworks like LangChain or Firebase Genkit to orchestrate calls to AI models on Vertex AI, vector databases, and other APIs. In this session, we’ll dive deep into building AI agents on Cloud Run to solve complex tasks and explore several techniques, including tool calling, multi-agent systems, memory state management, and code execution. We’ll showcase interactive examples using popular frameworks.
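
To make the tool-calling technique concrete, here is a minimal, hedged sketch using LangChain's Vertex AI integration, the kind of handler you might wrap in a Cloud Run service; the tool, model name, and question are illustrative.

```python
# Minimal tool-calling sketch with LangChain + Vertex AI.
# The tool is a stub; model name and question are illustrative.
from langchain_google_vertexai import ChatVertexAI
from langchain_core.tools import tool

@tool
def get_order_status(order_id: str) -> str:
    """Look up the status of an order (stubbed for this example)."""
    return f"Order {order_id} is out for delivery."

llm = ChatVertexAI(model_name="gemini-1.5-pro")
llm_with_tools = llm.bind_tools([get_order_status])

# First pass: the model decides whether (and how) to call the tool.
ai_msg = llm_with_tools.invoke("Where is order 12345?")
for call in ai_msg.tool_calls:
    result = get_order_status.invoke(call["args"])
    print(call["name"], "->", result)
```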

session
by Christopher Cho (Google Cloud), Eric Dong (Google Cloud)

Want to build your own Gemini-powered applications? Bring your laptops. This technical session provides the interactive experience you need. We’ll guide you through practical examples and code snippets, and cover how to start creating with the Gemini SDK on Vertex AI.
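
As a minimal illustration of the kind of snippet the session covers, a "hello Gemini" call with the Vertex AI SDK for Python looks roughly like this; the project, region, and model name are placeholders.

```python
# Minimal Gemini call with the Vertex AI SDK; project, region, and model
# name are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Explain Cloud Run in one sentence.")
print(response.text)
```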

To achieve your business goals while staying within budget, it’s crucial to have complete visibility into your cloud spending, down to the specific applications and workloads. You need to know exactly where your money is going, how efficiently resources are being used, and where there are opportunities to optimize spending. In this session, we’ll show you how to gain these insights about your applications. We’ll explore how to use product dashboards and Gemini Cloud Assist to identify optimization opportunities. Leave with actionable strategies to maximize your cloud return on investment (ROI) and achieve your business goals.

Model Armor is designed to protect your organization’s AI applications from security and safety risks. In this session, we’ll explore how Model Armor acts as a crucial layer of defense, screening both prompts and responses to identify and mitigate threats such as prompt injections, sensitive data leakage, and offensive content. Whether you’re a developer looking to implement AI safety or a professional interested in better visibility into AI applications, Model Armor offers comprehensive yet flexible security across all of your large language model (LLM) applications.

This session offers a technical deep dive into the state-of-the-art AlloyDB AI capabilities for building highly accurate and relevant generative AI applications using real-time data. We’ll cover vector search using Google Research’s ScaNN index technology and show how you can call Gemini from AlloyDB operators to seamlessly integrate it into your application. Discover AlloyDB AI’s natural language feature, a new way to interact with databases, and how it accurately and securely answers your questions. Also learn about the latest research between Google and NVIDIA on GPU-accelerated vector index builds in databases.
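
To make the vector-search idea concrete, here is a hedged sketch of a similarity query against AlloyDB from Python; AlloyDB is PostgreSQL-compatible, so the example assumes a pgvector-style embedding column and distance operator, and the table, columns, and connection details are hypothetical.

```python
# Hypothetical vector similarity query against AlloyDB (PostgreSQL-compatible).
# Table, columns, and connection details are made up for the example.
import psycopg2

conn = psycopg2.connect(host="10.0.0.5", dbname="products", user="app", password="***")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT id, name
        FROM catalog_items
        ORDER BY embedding <=> %s::vector   -- pgvector cosine-distance operator
        LIMIT 5
        """,
        ("[0.12, -0.03, 0.54]",),  # query embedding, truncated for readability
    )
    for row in cur.fetchall():
        print(row)
```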

How can organizations create scalable, engaging communications that capture attention and break through the noise? In this session, explore how the newest Google Workspace app empowers teams of all sizes to build dynamic video content with Gemini and connect with coworkers and partners in new ways. Explore the latest customer insights, experience Vids in action, and discover what’s next.

"Talk-to-Manuals" leverages Google Cloud's Vertex AI Search and Conversation, along with a multi-turn GenAI agent, to provide customers with easy access to vehicle information. The solution integrates and indexes diverse data sources—including unstructured PDFs and structured data from FAQs—for efficient retrieval. Semantic search, powered by LLMs, understands user intent, delivering highly relevant results beyond keyword matching. The interactive multi-turn conversation guides users towards precise information, refining queries and increasing accuracy. Data filtering based on vehicle attributes (model, year, engine type) combined with sophisticated GenAI-powered post-processing ensures tailored results. The successful implementation of "Talk-to-Manuals" significantly improved customer service by providing a more accessible and intuitive way for customers to find information.

This meetup is a space for developers actively working with any open-source AI libraries, frameworks, or tools, to share their projects, challenges, and solutions. Whether you're building with LangChain, Haystack, Transformers, TensorFlow, PyTorch, or any other open-source AI tool, we want to hear from you. This meetup will provide an opportunity to connect with other developers, share practical tips, and get inspired to build even more with open-source AI on Google Cloud. Come ready to contribute, and let's learn from each other!

Unleash the power of Gemini with Vertex AI Studio. This hands-on lab guides you through using Gemini for image analysis, prompt engineering, and conversational AI, all within a user-friendly interface. Learn to design prompts and generate content directly from the Google Cloud console.

Concerned about AI hallucinations? While AI can be a valuable resource, it sometimes generates inaccurate, outdated, or overly general responses, a phenomenon known as "hallucination." This hands-on lab teaches you how to implement a Retrieval Augmented Generation (RAG) pipeline to address this issue. RAG improves large language models (LLMs) like Gemini by grounding their output in contextually relevant information from a specific dataset. Learn to generate embeddings, run vector similarity search, and augment answers for more reliable results.
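
A compact sketch of the three steps the lab describes (embed, search, augment), using the Vertex AI SDK for Python; the tiny in-memory corpus and model names are illustrative, and brute-force cosine similarity stands in for a real vector index.

```python
# Minimal RAG sketch: embed a small corpus, retrieve the closest chunk,
# and ground Gemini's answer in it. Corpus and model names are illustrative.
import numpy as np
import vertexai
from vertexai.language_models import TextEmbeddingModel
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")
embedder = TextEmbeddingModel.from_pretrained("text-embedding-004")
gemini = GenerativeModel("gemini-1.5-flash")

corpus = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat.",
    "Orders over $50 ship free.",
]
corpus_vecs = np.array([e.values for e in embedder.get_embeddings(corpus)])

question = "How long do refunds take?"
q_vec = np.array(embedder.get_embeddings([question])[0].values)

# Brute-force cosine similarity stands in for a real vector index here.
scores = corpus_vecs @ q_vec / (
    np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q_vec)
)
context = corpus[int(scores.argmax())]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(gemini.generate_content(prompt).text)
```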

Join this session to discover how a phone plan selection app, built with Flutter and Firebase, leverages Gemini 2.0 to enhance and simplify the customer experience. Gain insights into the technical architecture, identify actionable strategies to implement similar AI-driven solutions in your own apps, and understand the key principles of using AI to enhance the customer experience.

session
by Ed Olson-Morgan (Marsh McLennan), Swarup Pogalur (Wells Fargo), Geir Sjurseth (Google Cloud), Antony Arul (Google Cloud)

Organizations are racing to deploy generative AI solutions built on large language models (LLMs), but struggle with management, security, and scalability. Apigee is here to help. Join us to discover how the latest Apigee updates enable you to manage and scale gen AI at the enterprise level. Learn from Google’s own experience and our work with leading customers to address the challenges of productionizing gen AI.