talk-data.com

Topic: Large Language Models (LLM)

Tags: nlp, ai, machine_learning

1405 tagged activities

Activity Trend: 158 peak/qtr (2020-Q1 to 2026-Q2)

Activities

1405 activities · Newest first

Many organizations are scrambling to adopt Artificial Intelligence tools across their teams, and like any new technology rollout, they are encountering challenges, both expected and unexpected. CME Group recently rolled out Gemini Code Assist to one of its large software development organizations and is excited to share takeaways around people, process, and tools. The topics include: compliance and information security considerations; managing rollout and adoption by starting small and scaling; and how these tools can help reshape the workday of your teams. No matter where you are in your AI adoption journey, you're sure to learn something new!

This hands-on lab introduces Gemini 2.0 Flash, the powerful new multimodal AI model from Google DeepMind, available through the Gemini API in Vertex AI. You'll explore its significantly improved speed, performance, and quality while learning to leverage its capabilities for tasks like text and code generation, multimodal data processing, and function calling. The lab also covers advanced features such as asynchronous methods, system instructions, controlled generation, safety settings, grounding with Google Search, and token counting.

If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account with both your work and personal email addresses. You will also need to authenticate your account (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!

This session explores building sensitive data protection directly into Retrieval-Augmented Generation (RAG) architectures. We'll demonstrate how to leverage Cloud Data Loss Prevention (Cloud DLP) and the Faker Library to anonymize sensitive data within the RAG pipeline. The session will cover techniques for reversible transformations using Memorystore and Firestore for data mapping, and discuss integrating these methods with Large Language Models (LLMs) like Gemini via LangChain and Vertex AI Search. Learn how to create secure and compliant AI solutions that protect sensitive data and adhere to regulations like the EU AI Act.
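The reversible-transformation idea described above can be sketched in a few lines. In the session's architecture, Cloud DLP detects sensitive values, the Faker library generates realistic surrogates, and Memorystore/Firestore hold the mapping; in this stdlib-only sketch, a regex and an in-memory dict stand in for all three, purely to illustrate the pattern.

```python
import re


class ReversiblePseudonymizer:
    """Minimal sketch of reversible pseudonymization for a RAG pipeline.

    Illustrative stand-in only: a production pipeline would use Cloud DLP
    for detection, Faker for surrogate generation, and Memorystore or
    Firestore as the mapping store.
    """

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def __init__(self):
        self._forward = {}   # real value -> surrogate
        self._reverse = {}   # surrogate -> real value

    def anonymize(self, text: str) -> str:
        """Replace each detected email with a stable surrogate."""
        def replace(match):
            real = match.group(0)
            if real not in self._forward:
                surrogate = f"user{len(self._forward) + 1}@example.com"
                self._forward[real] = surrogate
                self._reverse[surrogate] = real
            return self._forward[real]
        return self.EMAIL_RE.sub(replace, text)

    def deanonymize(self, text: str) -> str:
        """Restore real values in, e.g., an LLM response before display."""
        for surrogate, real in self._reverse.items():
            text = text.replace(surrogate, real)
        return text
```

The key property is that the same real value always maps to the same surrogate, so the LLM can still reason over the anonymized documents consistently, and the mapping can be applied in reverse on the model's output.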

What if building an AI agent that thinks, reasons, and acts autonomously took less time than your coffee break? With Vertex AI, it’s not just possible. It’s easy. Join DoiT to see how to build, deploy, and scale a production-ready AI agent in 10 minutes using Google’s top services: Gemini 2 for language understanding, the RAG Engine for fetching information, and the Agent Engine for orchestration. To top it off, watch a live demo take an agent from concept to production-ready in real time.

This Session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.

This session shows how engineers can use Gemini Cloud Assist and Gemini Code Assist to speed up the software development life cycle (SDLC) and improve service quality. You’ll learn how to shorten release cycles; improve delivery quality with best practices and generated code, including tests and infrastructure as code (IaC); and gain end-to-end visibility into service setup, consumption, cost, and observability. In a live demo, we’ll showcase the integrated flow and highlight code generation with GitLab and Jira integration. And we’ll show how Gemini Cloud Assist provides deeper service-quality insights.

Gemini 2.0 Flash Thinking unlocks a critical new reasoning step in the model execution process that is needed to continue hill climbing on the most difficult problems. Join Jack Rae for a deep dive on our latest thinking models, how to use them to their full capability as a developer, interesting use cases to explore with reasoning, and where we are going next with reasoning.

session
by Kaushik Bhandankar (Google Cloud), Chee Kin Loh (Centre for Strategic Infocomm Technologies), Rohan Grover (Google Cloud)

Organizations with strict data residency requirements often struggle to leverage AI and the latest in cloud innovations on-premises. Learn how to architect gen AI optimized applications for success using LLMs, cloud infrastructure, and data on-premises without compromising on data sovereignty, security, or latency in this technical deep dive session.

Simplify blockchain development with generative AI on Google Cloud. In this interactive session, you’ll learn how Gemini AI helps generate queries for BigQuery blockchain datasets and analyzes real-time blockchain data. See how Blockscope is using Gemini to conduct forensic analysis of blockchain data. Live demos will show you how to supercharge your Web3 projects, whether you're a blockchain veteran or just starting out.

In this hands-on lab, you'll explore the power of BigQuery Machine Learning with remote models like Gemini Pro to analyze customer reviews. Learn to extract keywords, assess sentiment, and generate insightful reports using SQL queries. Discover how to integrate Gemini Pro Vision to summarize and extract keywords from review images. By the end, you’ll gain skills in setting up Cloud resources, creating datasets, and prompting Gemini models to drive actionable insights and automated responses to customer feedback.
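For a feel of the kind of output this lab automates, here is a deliberately naive, stdlib-only sketch of keyword extraction and sentiment scoring over a review; the lab itself does this with SQL and a remote Gemini model in BigQuery ML, and the tiny word lists below are illustrative assumptions, not part of the lab.

```python
from collections import Counter

# Toy lexicons for illustration only; Gemini needs no such hand-built lists.
POSITIVE = {"great", "love", "excellent", "fast", "friendly"}
NEGATIVE = {"slow", "broken", "terrible", "rude", "late"}
STOPWORDS = {"the", "a", "was", "and", "is", "to", "it", "very", "my"}


def analyze_review(text: str) -> dict:
    """Return a crude sentiment label and the top keywords of a review."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    sentiment = ("positive" if pos > neg
                 else "negative" if neg > pos
                 else "neutral")
    counts = Counter(w for w in words
                     if w not in STOPWORDS and w.isalpha())
    keywords = [w for w, _ in counts.most_common(3)]
    return {"sentiment": sentiment, "keywords": keywords}
```

A generative model replaces both the lexicon and the frequency heuristic with genuine language understanding, but the shape of the result (a label plus extracted terms per review, queryable in SQL) is the same.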


Learn how Salesforce and Google are partnering to deliver new features to customers through Agentforce, powered by Google AI. Speakers will discuss the upcoming availability of Gemini within Agentforce, giving customers flexibility in choosing models to drive Agents and Actions. They will also cover new capabilities within Agentforce, powered by Google services, which provide improved natural conversational experiences and real-time integration with Google Maps, Travel, and Weather.


Gemini 2.0 represents a significant leap forward in image understanding. Its object detection capabilities are dramatically faster than anything before, enabling near-instantaneous identification of visual elements. Combined with Gemini's advanced reasoning and access to external tools, this speed unlocks a vast range of new applications and possibilities, from rapid image search to complex visual problem-solving. Critically, Gemini 2.0 also possesses an experimental capacity for 3D scene understanding, allowing it to interpret spatial relationships and depth, unlocking a wealth of new possibilities across diverse domains.

This talk will discuss how to efficiently set up and deploy Google Cloud TPUs for training AI and LLM workloads. Using scripts and existing tools to automate the process, you'll learn how to quickly build a fully functional TPU training environment. Additionally, the session offers insights on optimizing your cloud environment for maximum training efficiency, accelerating your AI and LLM initiatives.

Good customer experiences are a crucial factor in business growth and efficiency. Learn how to deliver exceptional customer outcomes with Google's Customer Engagement Suite (CES). Powered by Gemini, CES combines advanced conversational AI with multimodal, omnichannel capabilities to enable faster, more personalized digital experiences. This session explores CES value for customers and partners, go-to-market strategies, and real-world success stories.

This session showcases how Gemini Code Assist revolutionizes end-to-end Java application development. Join us to learn how to accelerate each development stage – from backend to frontend and testing. Discover how to leverage Gemini's code generation, completion, and debugging features. Explore how to enhance productivity and build robust, high-quality applications faster. And take away practical methods and techniques for integrating Gemini Code Assist into your workflow.

Cloud Run is an ideal platform for hosting AI applications – for example, you can use Cloud Run with AI frameworks like LangChain or Firebase Genkit to orchestrate calls to AI models on Vertex AI, vector databases, and other APIs. In this session, we’ll dive deep into building AI agents on Cloud Run to solve complex tasks and explore several techniques, including tool calling, multi-agent systems, memory state management, and code execution. We’ll showcase interactive examples using popular frameworks.
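The tool-calling technique mentioned above boils down to a simple loop: the model emits a structured request naming a tool, the application executes it, and the result goes back to the model. This stdlib-only sketch shows just the dispatch step; the tool registry and the JSON reply format are illustrative assumptions, and in a real Cloud Run service the reply would come from Gemini via Vertex AI with LangChain or Genkit driving the loop.

```python
import json

# Hypothetical tool registry: tool names mapped to plain Python callables.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}


def dispatch_tool_call(model_reply: str) -> str:
    """Execute one tool call requested by the model.

    `model_reply` is assumed to be a JSON string of the form
    {"tool": "<name>", "args": {...}} -- a stand-in for the structured
    function-call output a real model framework would produce.
    """
    request = json.loads(model_reply)
    tool = TOOLS[request["tool"]]
    result = tool(**request["args"])
    # In a full agent loop, this result would be appended to the
    # conversation as a tool response and sent back to the model.
    return str(result)
```

Multi-agent systems and memory management layer on top of this same primitive: each agent is a model plus a tool registry, and state is whatever the application threads back into the next model call.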

session
by Christopher Cho (Google Cloud), Eric Dong (Google Cloud)

Want to build your own Gemini-powered applications? Bring your laptops. This technical session provides the interactive experience you need. We’ll guide you through practical examples and code snippets, and cover how to start creating with the Gemini SDK on Vertex AI.

To achieve your business goals while staying within the budget, it’s crucial to have complete visibility into your cloud spending, down to the specific applications and workloads. You need to know exactly where your money is going and how efficiently resources are being used, as well as opportunities to optimize spending. In this session, we’ll show you how to gain such knowledge and insights about your applications. We’ll explore how to use product dashboards and Gemini Cloud Assist to identify optimization opportunities. Leave with actionable strategies to maximize your cloud return on investment (ROI) and achieve your business goals.

Model Armor is designed to protect your organization’s AI applications from security and safety risks. In this session, we’ll explore how Model Armor acts as a crucial layer of defense, screening both prompts and responses to identify and mitigate threats such as prompt injections, sensitive data leakage, and offensive content. Whether you’re a developer looking to implement AI safety or a professional interested in better visibility into AI applications, Model Armor offers comprehensive yet flexible security across all of your large language model (LLM) applications.
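To make the screening idea concrete, here is a toy filter over prompts and responses. The patterns below are illustrative assumptions only; Model Armor's actual detectors are managed by Google Cloud and are far more sophisticated than keyword matching, but the request/response shape (a verdict plus a list of findings) is the general pattern such a defense layer exposes.

```python
import re

# Illustrative patterns only -- NOT Model Armor's real detection logic.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]
# Crude stand-in for sensitive-data detection (US SSN shape).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def screen(text: str) -> dict:
    """Screen a prompt or response; return a verdict and findings."""
    findings = []
    if any(p.search(text) for p in INJECTION_PATTERNS):
        findings.append("prompt_injection")
    if SSN_PATTERN.search(text):
        findings.append("sensitive_data")
    return {"allowed": not findings, "findings": findings}
```

An application would call a screen like this twice per turn: once on the user prompt before it reaches the model, and once on the model's response before it reaches the user.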

This session offers a technical deep dive into the state-of-the-art AlloyDB AI capabilities for building highly accurate and relevant generative AI applications using real-time data. We'll cover vector search using Google Research's ScaNN index technology and how you can call Gemini from AlloyDB operators to seamlessly integrate it into your application. Discover AlloyDB AI's natural language feature, a new way to interact with databases, and how it accurately and securely answers your questions. Also learn about the latest research between Google and NVIDIA on GPU-accelerated vector index builds in databases.
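What a vector index like ScaNN accelerates is, at its core, nearest-neighbor search over embeddings. This stdlib-only sketch computes the exact answer by brute force; ScaNN's contribution is approximating the same ranking orders of magnitude faster over millions of rows, inside the database.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def nearest(query, corpus, k=2):
    """Exhaustive k-nearest-neighbor search by cosine similarity.

    `corpus` maps document ids to embedding vectors. An index such as
    ScaNN approximates this exact scan at scale; the brute-force version
    here just shows what result the index is approximating.
    """
    scored = sorted(corpus.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

In a RAG application, `query` would be the embedding of the user's question and the top-k document ids would be fetched and passed to the model as context.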