talk-data.com

Topic: Large Language Models (LLM)

Tags: nlp, ai, machine_learning

1405 activities tagged

Activity Trend: 158 peak/qtr, 2020-Q1 to 2026-Q1

Activities: 1405 activities · Newest first

Ready to level up your infrastructure, Google Kubernetes Engine, and networking skills with the power of Gemini? Join this session to learn how large language models work and how they apply to roles in infrastructure, DevOps, and networking.

Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.

Discover how approachable and versatile building custom generative AI solutions for Google Workspace can be with Google Apps Script. In this session, we'll explore how developers are creating innovative integrations between Gemini's powerful large language models and Google Workspace. Get real-world examples of how AI can enhance Google Workspace functionality, and learn about the potential of custom solutions to address unique needs. Get ready to be inspired: the evolution of leveraging AI within Google Workspace is just beginning.


Elevate developer productivity in this SRE-focused session, which leverages Gemini and Google Cloud's native observability. Explore a GKE-deployed e-commerce demo that shows how Gemini assesses systems, troubleshoots issues, and suggests improvements through native APIs.

Enhance team efficiency and save time both in the Cloud Console and within IDEs. Join us for a transformative experience.


As generative AI applications mature, retrieval-augmented generation (RAG) has become popular for improving large language model-based apps. We expect teams to move beyond basic RAG to autonomous agents and generative loops. We'll set up a Weaviate vector database on Google Kubernetes Engine (GKE) and pair it with Gemini to showcase generative feedback loops.

After this session, a Google Cloud GKE user should be able to:
- Deploy Weaviate open source on GKE
- Set up a pipeline to ingest data from a Cloud Storage bucket
- Query, apply RAG, and enhance the responses
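The ingest, query, and enhance steps above can be sketched as a minimal retrieval-augmented loop. This toy version uses an in-memory store and cosine similarity as stand-ins for Weaviate and Gemini; all names, vectors, and documents here are illustrative, not part of any real API.

```python
import math

# Toy in-memory vector store standing in for Weaviate on GKE.
# Each entry: name -> (embedding, text chunk).
DOCS = {
    "gke": ([1.0, 0.0, 0.0], "GKE runs containerized workloads."),
    "gcs": ([0.0, 1.0, 0.0], "Cloud Storage buckets hold raw data."),
    "rag": ([0.0, 0.0, 1.0], "RAG grounds LLM answers in retrieved text."),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Rank stored chunks by similarity to the query embedding."""
    ranked = sorted(DOCS.values(), key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(query_vec, question):
    """Augment the prompt with retrieved context; a real system
    would send this prompt to an LLM such as Gemini."""
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = answer([0.1, 0.0, 0.9], "What does RAG do?")
```

In a generative feedback loop, the model's output would itself be embedded and written back into the store, so later queries retrieve generated content as well as source documents.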


You're in charge of this session: you choose the language, the database type, and the deployment platform. With Gemini's help, we'll deliver a working application from scratch on Google Cloud. We'll cover every development step (build, test, and deploy), and you'll see how much Gemini can help any developer or practitioner within a single breakout session. Come learn how Cloud Workstations and Gemini can make your team's development faster and safer.


Experience the transformation. Learn how Gemini for Google Cloud uses AI to effortlessly manage Google Cloud. Get help with design, deployment, troubleshooting, and optimization directly within Google Cloud Console. Accelerate complex tasks, boost productivity, and stay focused on core business value. Gemini for Google Cloud offers goal-driven design, guided operations, and tailored optimizations, all with enterprise-grade security and privacy.


Want to get your everyday jobs done faster with expert-level help from Google Cloud’s latest generative AI assistant? Check out three things that you can try today to improve your productivity and up-level your operational game with Gemini in the Cloud Console at this session. We’ll cover advanced IT topics that can be made easier with Gemini in the console, and go beyond simple Q&A to build advanced, multiturn conversations that help enterprise-level IT admins augment their own expertise with Gemini.


We’ll explore how you can leverage Gemini to improve productivity while using Datadog for monitoring your Cloud environment, and how you can simplify the Datadog Agent installation and maximize monitoring coverage for the entire fleet. This session will also show how your teams can use Gemini's assistance to simplify routine processes to set up observability. You’ll gain useful tips on how to use Gemini to address the most common observability challenges and also go behind the scenes to explore the large language models that power it.


Vertex AI offers a range of Gemini models, each with different capabilities. Learn how to pick the right Gemini model for your use case and how to get the best performance. First, we explain how to map the modalities and properties of each model to your use case and functional requirements. Then we provide an overview of prompt design principles and how to craft effective prompts based on business requirements and language engineering considerations. Finally, we present prompt engineering strategies to ensure robustness and scalability of your prompts. This session is ideal for developers and prompt engineers seeking to maximize the value of Gemini on Vertex AI.
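The prompt-design principles the session describes (map the use case to requirements, then craft a structured prompt) can be illustrated with a small helper. `build_prompt` is a made-up function for illustration, not a Vertex AI API; the role, task, and constraints shown are placeholder values.

```python
def build_prompt(role, task, constraints, examples=()):
    """Compose a structured prompt: persona first, then the task,
    explicit constraints, and optional few-shot examples."""
    parts = [f"You are {role}.", f"Task: {task}"]
    parts += [f"Constraint: {c}" for c in constraints]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    return "\n".join(parts)

prompt = build_prompt(
    role="a support assistant",
    task="classify each ticket as 'billing' or 'technical'",
    constraints=["answer with a single word"],
    examples=[("Card was charged twice", "billing")],
)
```

Keeping the pieces separate like this makes prompts easier to test for robustness: you can vary one component (say, the constraints) while holding the rest fixed.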


Explore the capabilities of RoostGPT, an innovative testing platform at the forefront of generative AI and large language models (LLMs), in our upcoming session. RoostGPT is engineered to redefine test case generation, enabling the creation of detailed and accurate test suites without the need for manual effort. It facilitates the automated generation of unit, API, integration, and functional tests, significantly increasing productivity and expanding test coverage. By attending this session, your contact information may be shared with the sponsor for relevant follow-up for this event only.


In this session, we’ll explore the art of writing effective, multi-turn AI prompts and how Gemini, our most capable AI model, can help you achieve more than you ever thought possible. We’ll cover how large language models (LLMs) work and how outputs are based on the quality of an input. We’ll share practical tips on what makes great prompts and best practices for drafting prompts in Gemini, Gmail, Docs, Sheets, and Slides. You’ll leave this session empowered to prompt like a pro.
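A multi-turn prompt is, structurally, just an ordered list of role-tagged turns in which later turns refine earlier ones. The sketch below shows that structure only; the dictionary shape and `render` helper are illustrative, not a specific Gemini or Workspace API.

```python
# A multi-turn conversation: the second user turn refines the model's draft.
conversation = [
    {"role": "user", "content": "Draft a launch email for our new dashboard."},
    {"role": "model", "content": "Subject: Meet the new dashboard ..."},
    {"role": "user", "content": "Make the tone more playful and cut it to 80 words."},
]

def render(turns):
    """Flatten the turns into the text a model would see as context."""
    return "\n".join(f"{t['role'].upper()}: {t['content']}" for t in turns)

history = render(conversation)
```

Because the whole history is resent on each turn, the quality of every earlier prompt shapes the next output, which is why iterating on drafts tends to beat one long upfront prompt.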


Have you ever wondered what it takes to build a unique multicustomer generative AI experience? How would you secure it? Scale it? Where would you start? This is the story of how our ambitious two-pizza team built an interactive custom LLM-powered chatbot in only two weeks. We'll cover what we learned, what we'd do differently, and what’s changed.


Looking to bring artificial intelligence (AI) assistance to your development teams to increase their productivity? We’ll take you through the ways you can use Gemini in your code editor to expedite your application delivery. You’ll also learn how you can now customize Gemini with your own private codebase, with AI assistance deeply tailored to your organization’s libraries and coding conventions, and how Gemini can provide insights to help you understand its effects for your teams. We are joined by Capgemini, who will share their own experience of using Gemini for coding.


Vertex AI offers capabilities to customize and tune foundation models on your data for improved performance, so you can create a unique offering for your customers. This includes techniques such as supervised fine-tuning (SFT), reinforcement learning from human feedback, and distillation. We will dive deep into SFT and learn how Vertex AI lets you customize Gemini models cost-effectively with hundreds of examples. We will also discuss distillation, a technique that uses a teacher model to train smaller student models to perform certain tasks better, at a lower cost and with lower latency. You will also get tips from Palo Alto Networks, based on their real-world experience.
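SFT starts with a labeled dataset of prompt/response pairs, typically serialized as one JSON record per line. The sketch below builds records in roughly the chat-style shape used for Gemini tuning on Vertex AI; field names should be verified against the current documentation, and the example texts are placeholders.

```python
import json

def to_sft_record(user_text, model_text):
    """One JSONL training example: a user turn paired with the desired
    model turn (approximate schema; check current Vertex AI docs)."""
    return json.dumps({
        "contents": [
            {"role": "user", "parts": [{"text": user_text}]},
            {"role": "model", "parts": [{"text": model_text}]},
        ]
    })

examples = [
    ("Summarize: GKE autoscaling basics",
     "GKE scales node pools based on pending pods."),
]
# One record per line; hundreds of such lines are often enough for SFT.
jsonl = "\n".join(to_sft_record(u, m) for u, m in examples)
```

For distillation, the same record shape applies, except the `model` turns would be generated by the larger teacher model rather than written by hand.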


In this mini course, you will explore the use of Gemini, PaLM, and other generative AI models using Vertex AI Studio. You will learn how to design and tune prompts to ensure the best outputs for your applications and discuss other services in Vertex AI Studio to improve output quality.


Google Cloud is building the next generation of observability solutions using Gemini and BigQuery. In this session, we’ll show you how we can remove fragmentation for your logs, metrics, traces, events, and billing data sources using Google BigQuery on Google Cloud Operations Suite to perform observability analytics. The target audience includes developers, DevOps engineers, SREs, and cloud architects.


In this mini course, you will learn about different prompt design and engineering techniques commonly used in LLM-powered applications, such as few-shot prompting and chain-of-thought reasoning. You will then apply these practices in a hands-on lab environment using the PaLM and Gemini Pro APIs.
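The two techniques named above combine naturally: a few-shot prompt supplies solved examples, and writing out the reasoning in each example nudges the model to show its steps before answering. The helper and examples below are illustrative placeholders, not part of the PaLM or Gemini APIs.

```python
# Few-shot examples with worked (chain-of-thought) reasoning,
# each ending in an explicit "Answer:" line.
SHOTS = [
    ("A pod needs 2 CPUs; the node has 8. How many such pods fit?",
     "Each pod uses 2 CPUs. 8 / 2 = 4. Answer: 4"),
    ("3 replicas each need 1 GiB; the node has 4 GiB. Do they fit?",
     "Total demand is 3 * 1 = 3 GiB, and 3 <= 4. Answer: yes"),
]

def few_shot_cot(question):
    """Prepend the solved examples before the new question,
    leaving the final answer slot open for the model."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in SHOTS]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

prompt = few_shot_cot("2 jobs each need 3 CPUs; the node has 8. Do they fit?")
```

The trailing open "A:" is deliberate: the model completes it by imitating the reasoning style of the preceding shots.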


Join this session to learn how Gemini in BigQuery can help you accelerate time to insights by enhancing productivity, and optimizing the cost and performance of data workloads.


With built-in multimodal capabilities, Gemini enables powerful solutions spanning text, images, and videos. In this session, you will learn about real-world use cases and best practices for building Gemini-powered applications that address compelling business needs. You’ll also learn how enterprises are driving transformation with Gemini on Vertex AI.


In the fast-paced world of data-driven decision making, organizations are grappling with the challenge of realizing insights as quickly and efficiently as possible. While data products have been a steady architectural pillar within data ecosystems, AI has recently taken the world by storm, helping to accelerate insights at a pace previously unimaginable. In this session, hear from Fahad Ahmad, data science leader at Halliburton, about their strategy to transform Halliburton’s previous data swamp into a decentralized data mesh architecture utilizing open AI and data products to deliver real-time insights. Fahad will discuss eliminating fragile data pipelines, fast data-driven decision making on curated datasets, and the innovative use of ChatGPT to expedite the creation of data products.