How can organizations create scalable, engaging communications that capture attention and break through the noise? In this session, explore how the newest Google Workspace app, Vids, is empowering teams of all sizes to build dynamic video content with Gemini and connect with coworkers and partners in new ways. Explore the latest customer insights, experience Vids in action, and discover what’s next.
Topic: Large Language Models (LLM)
"Talk-to-Manuals" leverages Google Cloud's Vertex AI Search and Conversation, along with a multi-turn GenAI agent, to provide customers with easy access to vehicle information. The solution integrates and indexes diverse data sources—including unstructured PDFs and structured data from FAQs—for efficient retrieval. Semantic search, powered by LLMs, understands user intent, delivering highly relevant results beyond keyword matching. The interactive multi-turn conversation guides users towards precise information, refining queries and increasing accuracy. Data filtering based on vehicle attributes (model, year, engine type) combined with sophisticated GenAI-powered post-processing ensures tailored results. The successful implementation of "Talk-to-Manuals" significantly improved customer service by providing a more accessible and intuitive way for customers to find information.
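The filter-then-rank pattern the description outlines can be sketched in a few lines of plain Python. Everything below is an illustrative stand-in, not the Vertex AI Search implementation: the sample passages, the attribute names, and the bag-of-words similarity are placeholders for an indexed corpus and a real embedding model.

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for indexed manual passages; fields are illustrative.
PASSAGES = [
    {"model": "X1", "year": 2022,
     "text": "To reset the tire pressure sensor, hold the set button"},
    {"model": "X1", "year": 2020,
     "text": "Engine oil capacity is 4.5 liters for the 2.0L engine"},
    {"model": "Z3", "year": 2022,
     "text": "To reset the infotainment system, hold the power knob"},
]

def embed(text):
    """Bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, model=None, year=None, k=1):
    # 1) Filter by vehicle attributes first, as the session describes.
    pool = [p for p in PASSAGES
            if (model is None or p["model"] == model)
            and (year is None or p["year"] == year)]
    # 2) Rank the survivors by semantic-style similarity to the query.
    q = embed(query)
    return sorted(pool, key=lambda p: cosine(q, embed(p["text"])),
                  reverse=True)[:k]

top = search("how do I reset the tire pressure sensor", model="X1", year=2022)
print(top[0]["text"])
```

Filtering before ranking is what keeps results tailored: a Z3 owner never sees X1 instructions, no matter how similar the wording.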
This meetup is a space for developers actively working with any open-source AI libraries, frameworks, or tools, to share their projects, challenges, and solutions. Whether you're building with LangChain, Haystack, Transformers, TensorFlow, PyTorch, or any other open-source AI tool, we want to hear from you. This meetup will provide an opportunity to connect with other developers, share practical tips, and get inspired to build even more with open-source AI on Google Cloud. Come ready to contribute, and let's learn from each other!
Unleash the power of Gemini with Vertex AI Studio. This hands-on lab guides you through using Gemini for image analysis, prompt engineering, and conversational AI, all within a user-friendly interface. Learn to design prompts and generate content directly from the Google Cloud console.
If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!
Concerned about AI hallucinations? While AI can be a valuable resource, it sometimes generates inaccurate, outdated, or overly general responses - a phenomenon known as "hallucination." This hands-on lab teaches you how to implement a Retrieval Augmented Generation (RAG) pipeline to address this issue. RAG improves large language models (LLMs) like Gemini by grounding their output in contextually relevant information from a specific dataset. Learn to generate embeddings, search vector space, and augment answers for more reliable results.
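As a rough illustration of the lab's pipeline, here is a library-free RAG sketch. The toy bag-of-words "embedding", the sample documents, and the prompt wording are placeholders for a real embedding model, your own dataset, and whatever prompt template you settle on.

```python
from collections import Counter
from math import sqrt

# A tiny grounding dataset standing in for your domain documents.
DOCS = [
    "Cloud Run services scale to zero when they receive no traffic.",
    "Vertex AI provides managed endpoints for model deployment.",
    "BigQuery is a serverless data warehouse for analytics.",
]

def embed(text):
    """Toy bag-of-words vector; a real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Search the vector space for the documents closest to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_grounded_prompt(query):
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

prompt = build_grounded_prompt("When does Cloud Run scale to zero?")
# Pass this grounded prompt to Gemini (e.g. via the Vertex AI SDK)
# instead of the bare question.
print(prompt)
```

The three steps mirror the lab: generate embeddings, search the vector space, and augment the answer so the model is grounded in your data rather than its training set.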
Learn how "Project SEALD" – a collaboration between Google and AI Singapore – is building LLMs for the region. Discover why cultural context matters and how you can implement similar solutions.
Join this session to discover how a phone plan selection app, built with Flutter and Firebase, leverages Gemini 2.0 to enhance and simplify the customer experience. Gain insights into the technical architecture, identify actionable strategies to implement similar AI-driven solutions in your own apps, and understand the key principles of using AI to enhance the customer experience.
Organizations are racing to deploy generative AI solutions built on large language models (LLMs), but struggle with management, security, and scalability. Apigee is here to help. Join us to discover how the latest Apigee updates enable you to manage and scale gen AI at the enterprise level. Learn from Google’s own experience and our work with leading customers to address the challenges of productionizing gen AI.
Language models have already evolved to do much more than language tasks, expanding into image, audio, and soon video. Join Mostafa Dehghani to explore the emerging frontier of multimodal generation, what Gemini’s world knowledge unlocks that domain-specific models cannot create, and how developers should think about AI as a next-generation creative partner.
Join us for an insightful discussion with Understood.org, a leading nonprofit dedicated to supporting the 70 million Americans with learning and thinking differences, and discover how empowering neurodiverse employees with tools like Google Workspace and Gemini can foster a more productive workplace.
Learn how Database Migration Service can help you modernize your SQL Server databases and unleash the power of cloud databases and open-source PostgreSQL! Convert your SQL Server schema and T-SQL code to the PostgreSQL dialect with the click of a button in the DMS Conversion Workspace. Were some objects not fully converted? Gemini can suggest a fix. Not yet familiar with PostgreSQL features? Ask Gemini to teach you how to convert SQL Server features to their PostgreSQL equivalents. While you're at it, ask Gemini to optimize the converted code or add comments explaining the business logic. Once your database is fully converted and optimized, you can migrate the data with minimal downtime using a change-data-capture-powered migration job and complete your migration journey.
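To make the dialect gap concrete, here is a deliberately naive sketch of two mechanical T-SQL-to-PostgreSQL rewrites. The real Conversion Workspace (and Gemini's suggested fixes) handles far more than pattern substitution; the rules below are illustrative only.

```python
import re

# A couple of well-known T-SQL -> PostgreSQL rewrites, as naive regexes.
REWRITES = [
    (re.compile(r"\bGETDATE\(\)", re.IGNORECASE), "now()"),
    (re.compile(r"\bISNULL\(", re.IGNORECASE), "coalesce("),
    # Naive: moves the row limit to the end; a real converter must
    # respect full clause order (WHERE, ORDER BY, ...).
    (re.compile(r"\bSELECT\s+TOP\s+(\d+)\s+(.*)", re.IGNORECASE | re.DOTALL),
     r"SELECT \2 LIMIT \1"),
]

def tsql_to_postgres(sql):
    """Apply each rewrite in turn to a single-statement query string."""
    for pattern, repl in REWRITES:
        sql = pattern.sub(repl, sql)
    return sql

print(tsql_to_postgres("SELECT TOP 5 name, GETDATE() FROM users"))
# -> SELECT name, now() FROM users LIMIT 5
```

Even these trivial cases hint at why schema and code conversion benefits from tooling: each dialect difference multiplies across thousands of objects.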
Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!

In this episode, host Murilo is joined by returning guest Paolo, Data Management Team Lead at dataroots, for a deep dive into the often-overlooked but rapidly evolving domain of unstructured data quality. Tune in for a field guide to navigating documents, images, and embeddings without losing your sanity.

What we unpack:
- Data management basics: Metadata, ownership, and why Excel isn't everything.
- Structured vs unstructured data: How the wild west of PDFs, images, and audio is redefining quality.
- Data quality challenges for LLMs: From apples and pears to rogue chatbots with "legally binding" hallucinations.
- Practical checks for document hygiene: Versioning, ownership, embedding similarity, and tagging strategies.
- Retrieval-Augmented Generation (RAG): When ChatGPT meets your HR policies and things get weird.
- Monitoring and governance: Building systems that flag rot before your chatbot gives out 2017 vacation rules.
- Tooling and gaps: Where open source is doing well, and where we're still duct-taping workflows.
- Real-world inspirations: A look at how QuantumBlack (McKinsey) is tackling similar issues with their AI for DQ framework.
AI is revolutionizing observability. Learn about Cloud SQL AI-powered Database Insights and how it can help you optimize your queries and boost database performance. We’ll dive deep into the new Insights capabilities for MySQL, PostgreSQL, and SQL Server, including the Gemini-powered chat agent. Learn how to troubleshoot those tricky database performance issues and get practical tips to improve the performance of your applications.
Gemini was built from the ground up to support our breakthrough long context window, with up to 2 million tokens in our largest models. Join Nikolay Savinov to explore how to get the most out of long context and what a world of infinite context might look like.
Join us for an interactive session where we’ll build, deploy, and scale inference apps. Imagine creating and launching generative AI apps that deliver personalized recommendations and stunning images, all with the unparalleled efficiency and scalability of serverless computing. You’ll learn how to build gen AI apps effortlessly using Gemini Code Assist; deploy gen AI apps in minutes on Cloud Run, using Vertex AI or on-demand, scale-to-zero serverless GPUs; and optimize the performance and cost of AI workloads by implementing best practices.
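A minimal sketch of the serverless inference pattern described above, using only the standard library: an HTTP service whose model call is stubbed out. In a real deployment the stub would call Gemini via Vertex AI, and on Cloud Run the server must listen on the port supplied in the PORT environment variable; here we bind to a free local port for a smoke test.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt):
    """Stub for the model call; a real service would invoke Gemini."""
    return f"echo: {prompt}"

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = json.dumps({"output": generate(body.get("prompt", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# On Cloud Run you would bind to int(os.environ["PORT"]); port 0 just
# grabs a free local port so the sketch can exercise itself.
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/",
    data=json.dumps({"prompt": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urllib.request.urlopen(req).read())["output"])
server.shutdown()
```

Because the service is a plain container listening on a port, Cloud Run can scale instances with traffic, including down to zero when idle.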
Learn how to evaluate and optimize the impact of AI-assisted software development with Gemini Code Assist. This session covers processes for measuring AI-assistance effectiveness, exploring quantitative and qualitative measures available with Gemini Code Assist, and integrating with Cloud Monitoring and Cloud Logging. Discover how to leverage DevOps Research and Assessment (DORA) metrics to track productivity gains. Whether you’re a developer, team lead, architect, or IT manager, you’ll gain insights into measuring the impact of AI assistance.
AI-enabled browser agents are in the news now, but it’s not always clear how they solve real-world problems. In this session, we’ll share our experience building a web browser agent by integrating Gemini into an end-to-end service that follows text instructions to take actions in a web application. We’ll take you through our journey of creating the agent, share the research that inspired us, and show how we’ve used the system to tackle practical problems like validating user flows in the UI and semantically checking web links.
This talk demonstrates a fashion app that leverages the power of AlloyDB, Google Cloud’s fully managed PostgreSQL-compatible database, to provide users with intelligent recommendations for matching outfits. When users upload images of their clothes, the app responds with styling insights on how to pair each outfit, along with matching real-time fashion advice. This is enabled by intuitive contextual (vector) search powered by AlloyDB and Google’s ScaNN index, which delivers faster vector search results, low-latency querying, and quick response times. While we’re at it, we’ll showcase the power of the AlloyDB columnar engine on the joins the application needs to generate style recommendations. To complete the experience, we’ll use the Vertex AI Gemini API through the Spring and LangChain4j integrations for generative recommendations and a visual representation of the personalized style. The entire application is built on the Java Spring Boot framework and deployed serverlessly on Cloud Run, ensuring scalability and cost efficiency. This talk explores how these technologies work together to create a dynamic and engaging fashion experience.
Join Woosuk Kwon, Founder of vLLM, Robert Shaw, Director of Engineering at Red Hat, and Brittany Rockwell, Product Manager for vLLM on TPU, to learn about how vLLM is helping Google Cloud customers serve state-of-the-art models with high performance and ease of use across TPUs and GPUs.
Learn how LG AI Research uses Google Cloud AI Hypercomputer to build its EXAONE family of LLMs and innovative agentic AI experiences based on those models. EXAONE 3.5, a class of bilingual models that learn and understand both Korean and English, recorded world-class performance in Korean. The collaboration between LG AI Research and Google Cloud enabled LG to significantly enhance model performance, reduce inference time, and improve resource efficiency through Google Cloud’s easy-to-use, scalable infrastructure.