talk-data.com

Topic

API

Application Programming Interface (API)

integration software_development data_exchange

45

tagged

Activity Trend

65 peak/qtr
2020-Q1 2026-Q1

Activities

Showing filtered results

Filtering by: Google Cloud Next '24

A hands-on coding session and deep dive for connecting Gemini to real-world systems, data, and APIs with Function Calling and Reasoning Engine. Function calling helps developers build online generative AI applications that have access to the latest data and information. We'll dive into practical use cases like using natural language to interact with SQL databases, automating complex workflows, and enhancing your chatbots with real-time data. You'll be equipped to connect LLMs to any API or system and extend the capabilities of what LLMs can do.
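The core loop the session teaches is: declare a function to the model, let the model emit a structured call, run it locally, and return the result. A minimal dependency-free sketch of that loop, with a hypothetical weather function and a simulated model call (the declaration shape mirrors the JSON-schema style of Gemini function declarations, but this is an illustration, not an SDK call):

```python
# Minimal function-calling loop: the "model" (simulated below) picks a
# declared function and arguments; our code executes it and would return
# the result to the model. Function name and data are hypothetical.

get_weather_declaration = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_current_weather(city: str) -> dict:
    # Stand-in for a real weather API call.
    return {"city": city, "temp_c": 21}

REGISTRY = {"get_current_weather": get_current_weather}

def dispatch(function_call: dict) -> dict:
    """Route a model-issued function call to local code."""
    fn = REGISTRY[function_call["name"]]
    return fn(**function_call["args"])

# Simulated model output asking us to call the declared function:
model_call = {"name": "get_current_weather", "args": {"city": "Paris"}}
result = dispatch(model_call)
```

In a real application, `result` is sent back to the model as a function response so it can ground its final answer in the returned data.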

Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.

In a fireside chat, Google Workspace generative AI partners Salesforce, Outreach, and Typeface will share how customers are adopting generative AI tools to gain real-world business value. Whether it's extending the value of Gemini with Salesforce, leveraging Workspace APIs to power conversational intelligence, or inserting text or images into Workspace apps like Gmail, Docs, Sheets, or Slides, the customer benefits are countless.

Complexity and fragmentation of current generative AI models can hinder development and innovation. Vertex Model Garden offers over 130 models on one platform, facilitating innovation with features like one-click deployment for leading open models and integration with Hugging Face. We'll explore how Vertex AI Model Garden transforms AI lifecycle management, from experimentation to scalable deployment. Discover efficiency with an end-to-end example of how to find models, manage data, configure endpoints, and deploy models as scalable APIs.

In this mini course, you will explore automatic side-by-side evaluation on Vertex AI using AutoSxS, a tool for evaluating models relative to each other. You will understand the application of side-by-side evaluation and how you can use AutoSxS through the Vertex AI API or the Vertex AI SDK for Python.
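At its core, side-by-side evaluation aggregates many pairwise preferences into a win rate for one model over another. A toy version of that aggregation (in AutoSxS the judgments come from an autorater model, not a stub; the data here is made up):

```python
# Toy side-by-side aggregation: each judgment records which model's
# response won ("A", "B") or "TIE". AutoSxS reports a similar win-rate
# style metric; the judgment list below is illustrative data only.

def win_rate(judgments: list[str], model: str = "A") -> float:
    """Fraction of comparisons won by `model`, counting ties as half a win."""
    wins = sum(
        1.0 if j == model else 0.5 if j == "TIE" else 0.0
        for j in judgments
    )
    return wins / len(judgments)

judgments = ["A", "A", "B", "TIE", "A"]
rate = win_rate(judgments)  # (3 + 0.5) / 5 = 0.7
```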

Navigating OAuth, OIDC, SAML, or any of the myriad technologies and processes around using Google accounts and APIs can be daunting, and if you feel this way you're not alone. In this session, we'll help you understand and navigate the landscape of authentication and authorization related technologies Google Workspace developers most often encounter.
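As a concrete taste of the territory the session maps, the first step of the OAuth 2.0 authorization-code flow is sending the user to Google's authorization endpoint. A sketch with placeholder client ID and redirect URI (you would register real ones in the Google Cloud console):

```python
from urllib.parse import urlencode

# Step 1 of the OAuth 2.0 authorization-code flow: build the consent URL
# the user is redirected to. client_id and redirect_uri are placeholders.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def authorization_url(client_id: str, redirect_uri: str, scope: str) -> str:
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",   # authorization-code grant
        "scope": scope,
        "access_type": "offline",  # also request a refresh token
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = authorization_url(
    "example-client-id",
    "https://example.com/callback",
    "https://www.googleapis.com/auth/drive.readonly",
)
```

After the user consents, Google redirects back with a short-lived code that your server exchanges for tokens; that exchange, and where OIDC and SAML fit in, is the rest of the landscape the session covers.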

Wrangle your API ecosystem into a unified landscape with Apigee API Hub. In this session, we'll show how API Hub can help you:

- Uncover a comprehensive inventory of all your APIs, regardless of their location or provider, shattering siloed visibility.
- Foster collaboration and governance with a single, unified view of API usage across your organization, empowering teams to make informed decisions.
- Fuel innovation with better API discoverability and reusability.

In this mini course you will learn about different prompt design and engineering techniques commonly used in LLM-powered applications such as few-shot prompting and chain of thought reasoning. You will then apply these practices in a hands-on lab environment using the PaLM and Gemini Pro APIs.
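Few-shot prompting, in its simplest form, just prepends labeled worked examples to the new input so the model can infer the task from the pattern. A minimal prompt builder (the sentiment examples are made up for illustration):

```python
# Build a few-shot prompt: labeled examples, then the new input left
# open for the model to complete. Example data is illustrative only.

def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    blocks = [
        f"Review: {text}\nSentiment: {label}"
        for text, label in examples
    ]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("Loved every minute of it.", "positive"),
     ("Total waste of time.", "negative")],
    "Surprisingly good!",
)
```

The resulting string is what you would pass as the prompt to a model API; chain-of-thought prompting extends the same idea by including worked reasoning steps in each example.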

Learn how Firebase can be a catalyst for faster, more efficient app development. We’ll show you how Firebase's prebuilt backend services – including authentication, databases, and storage – eliminate boilerplate code and server headaches so you can focus on crafting superior user experiences. You’ll learn how Firebase supports rapid prototyping and iteration, and how Firebase's intuitive tools and APIs democratize mobile development, making it easy for you to get started even if you're not a mobile app development specialist.

Obscurity isn't security. Unmanaged and undocumented APIs increase your attack surface by 30%. In today's landscape where API attacks are prevalent, don't risk data breaches and security vulnerabilities. Join our experts from Google Cloud and BMW to:

- Discover how Apigee's Advanced API Security helps you shine a light on your shadow APIs.
- Learn how BMW uses Advanced API Security to protect its 250 APIs and five billion API calls.

Explore MongoDB Atlas, MongoDB's developer data platform, and learn how to integrate it with various Google Cloud services. During this lab lounge, you will create a fully managed database deployment, set up serverless Triggers that react to database events, and build Atlas Functions to communicate with Google Cloud APIs.

Additionally, you will explore Google Cloud’s NLP APIs, perform sentiment analysis on incoming data, learn how to replicate operational datasets from MongoDB Atlas to BigQuery and build an ML model for classification.

session
by Bali Mangat (Charles Schwab), Dimitris Meretakis (Google Cloud), Mark Szarka (Deloitte Consulting LLP), Guangsha Shi (Google), Kamal Khilnani (Charles Schwab)

Generative AI opens up new ways to build information processing workflows, search capabilities, and enterprise applications. This session introduces a suite of new Vertex APIs that are purpose-built to accelerate building your apps. We will cover common patterns like understanding complex documents, extracting and retrieving the most relevant information, and creating high-quality gen AI experiences grounded in your first and third-party data sources.

In this session, DoiT will explore the Google Kubernetes Engine (GKE) implementation of the Gateway API and how it differs from Ingress. The talk will cover the Gateway API's advantages and upcoming capabilities, as well as how to migrate from Ingress to Gateway with ease. By attending this session, your contact information may be shared with the sponsor for relevant follow-up for this event only.

We'll explore how integrating AI, serverless computing, data analytics, and APIs can revolutionize the retail landscape. Learn how Google Cloud Run, Apigee, BigQuery, and Vertex AI collaborate to create personalized shopping experiences, streamline operations, and drive sustainability. Key takeaways include implementing conversational AI for enhanced customer interaction, leveraging BigQuery for data-driven insights, and using Cloud Run for efficient, scalable retail solutions.

Generative AI can transform customer service, enhance employee productivity, automate business processes, and more. And by using Serverless, Google Cloud's “pay-as-you-go” compute platform to provide these experiences, you can focus on what's core to your business and leave the autoscaling to Google. In this talk, you'll learn how to run Google Cloud generative AI tools on Serverless, including how to use the Vertex AI Gemini API, how to use function calling to supplement a gen AI model with Serverless endpoints, and more.

Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the Gemini Pro Vision and Gemini Pro models. In this spotlight lab, you will learn how to use the Vertex AI Gemini API with the Vertex AI SDK for Python to interact with the Gemini Pro (gemini-pro) model and the Gemini Pro Vision (gemini-pro-vision) model.
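Under the SDK, a multimodal request to a Gemini model is one user turn whose content "parts" mix text with inline media. A sketch of that payload shape (the field names follow the snake_case form used by the Python SDK's underlying protos; the raw REST JSON uses camelCase equivalents, and the "image" bytes here are a placeholder, not a decodable PNG):

```python
import base64

# Assemble a multimodal request body in the parts-based shape Gemini's
# generateContent surface accepts: one user turn mixing a text part with
# base64-encoded inline image data. The image bytes are fake.
fake_image_bytes = b"\x89PNG..."  # placeholder, not a real image

request_body = {
    "contents": [{
        "role": "user",
        "parts": [
            {"text": "What is shown in this image?"},
            {"inline_data": {
                "mime_type": "image/png",
                "data": base64.b64encode(fake_image_bytes).decode("ascii"),
            }},
        ],
    }]
}
```

The Vertex AI SDK for Python builds this structure for you from `Part` objects; the lab walks through that higher-level interface.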

At Google I/O last year, we announced several new AI Firebase Extensions using the PaLM API. This year, we’ve added support for Google's latest Gemini models. Easily add a chatbot, text summarizer, content generator, vector database pipeline, and more to your app without learning new APIs. In this session, get an end-to-end view of how you can use Firebase and Gemini to create an enterprise-ready customer support app. Build many apps with the powerful combo of Gemini's multimodal features and Firebase's convenient suite of developer tools.

As machine learning (ML) systems continue to evolve, the ability to scale complex ML workloads becomes crucial. Scalability can be considered along two dimensions: expansive training of large language models (LLMs) and intricate distribution of reinforcement learning (RL) systems. Each has its own set of challenges, from computational demands of LLMs to complex synchronization in distributed RL.

This session explores the integration of Ray, Google Kubernetes Engine (GKE) and ML accelerators like tensor processing units (TPUs) as a powerful combination to develop advanced ML systems at scale. We discuss Ray and its scalable APIs, its mature integration with GKE and ML accelerators, and demonstrate how it has been used for LLMs and re-implementing the powerful RL algorithm, Muzero.
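Ray's core API expresses this kind of workload as remote tasks that are scattered to workers and gathered back. As a dependency-free illustration of that same scatter-gather pattern, here is the shape with the standard library's `concurrent.futures` standing in for `@ray.remote` / `ray.get` (a local stand-in, not Ray itself):

```python
from concurrent.futures import ThreadPoolExecutor

# Scatter-gather: farm independent shards of work out to workers, then
# collect the results in order. Ray's @ray.remote / ray.get express the
# same pattern across a whole cluster; a local thread pool stands in here.

def simulate_rollout(seed: int) -> int:
    """Stand-in for an independent RL rollout or training shard."""
    return seed * seed

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate_rollout, range(8)))
```

The appeal of Ray on GKE is that the same task-and-gather code scales from a laptop to a cluster of TPU/GPU nodes without restructuring.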

Simplify and automate Application Programming Interface (API) development and integration with Gemini. Design and build APIs faster, connect any application, and reduce errors with AI-powered insights.

Large language models offer capabilities that overlap with more traditional approaches to natural language processing tasks like translation. Multimodal large language models have an even broader overlap with traditional speech and image models. You can now choose which approach best suits your needs. Here, you will learn about their strengths and weaknesses compared to neural machine translation techniques and receive an overview of the latest advancements in our SOTA Cloud Translation API, combining ease of use with the contextual capabilities of generative AI models to enhance our customers' translations at scale. Experience how new models trained to perform both transcription and translation can go from speech to text in a target language using one large model.

Generative AI is rapidly growing in business and the popular imagination. Google Cloud was at the forefront of this revolution with the introduction of the Transformer architecture in 2017 and, more recently, with the release of Gemini models. This session introduces JAX, a powerful framework and ecosystem for large model development, which we use to develop our Gemini models, and Keras, an easy-to-use, higher-level API for deep learning and gen AI.
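JAX's appeal for model development comes from composable function transformations such as `jax.grad`, which turns a scalar function into its gradient function. A minimal taste (assumes `jax` is installed):

```python
import jax
import jax.numpy as jnp

# jax.grad transforms a scalar-valued function into a function that
# computes its gradient; this mechanic underpins training loops in
# JAX-based model development.

def loss(w):
    return jnp.sum(w ** 2)  # toy quadratic "loss"

grad_loss = jax.grad(loss)
g = grad_loss(jnp.array([1.0, -2.0, 3.0]))  # gradient of sum(w^2) is 2*w
```

Keras sits a level above this, offering layers, models, and training loops so you rarely write the gradient plumbing yourself.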
