talk-data.com

Topic: GenAI (Generative AI)

Tags: ai, machine_learning, llm

1517 activities tagged

Activity Trend

Peak of 192 activities per quarter, 2020-Q1 to 2026-Q1

Activities

1517 activities · Newest first

session
by Bing Wan Tang (Government Technology Agency of Singapore), Luis Uguina (Macquarie Banking and Financial Services Group), Karan Bajwa (Google Cloud), Jason Liao (Oppo), Luis Carlos Cruz Huertas (DBS Bank)

This session explores real-world success stories from world-leading companies in the financial services, retail, healthcare, and gaming sectors, among others. The forum is ideal for business leaders, technical executives, product managers, and developers who want to see how gen AI can be used for automation, personalization, knowledge extraction, differentiated experiences, operational efficiency, and user protection within their own organizations.

This session offers a technical deep dive into the state-of-the-art AlloyDB AI capabilities for building highly accurate and relevant generative AI applications on real-time data. We’ll cover vector search using Google Research’s ScaNN index technology and show how you can call Gemini from AlloyDB operators to seamlessly integrate it into your application. Discover AlloyDB AI’s natural language feature, a new way to interact with databases, and how it answers your questions accurately and securely. Also learn about the latest research between Google and NVIDIA on GPU-accelerated vector index builds in databases.
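
For readers who want a concrete picture, here is a minimal sketch of the kind of query such an application runs: embed the user’s question with a Vertex AI embedding model, then let AlloyDB’s ScaNN-indexed vector column find the nearest rows. The connection string, table and column names, and embedding model ID are illustrative assumptions, not details from the session.

```python
# Hedged sketch, not code from the session: semantic search over an AlloyDB
# table that has a pgvector-style "embedding" column with a ScaNN index.
# The connection string, table/column names, and model ID are placeholders.
import psycopg                                   # AlloyDB speaks the PostgreSQL protocol
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")   # placeholder project/region

def semantic_search(conn_str: str, question: str, k: int = 5):
    # Embed the question client-side with a Vertex AI text-embedding model.
    model = TextEmbeddingModel.from_pretrained("text-embedding-005")
    vec = model.get_embeddings([question])[0].values
    vec_literal = "[" + ",".join(str(v) for v in vec) + "]"

    with psycopg.connect(conn_str) as conn:
        # "<=>" is the cosine-distance operator; the ScaNN index on
        # products.embedding turns this into a fast approximate-NN lookup.
        return conn.execute(
            "SELECT id, description FROM products "
            "ORDER BY embedding <=> %s::vector LIMIT %s",
            (vec_literal, k),
        ).fetchall()
```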

session
by Vithal Shirodkar (Google Cloud), Pan Yong (Home Team Science & Technology Agency)

Learn how Google Distributed Cloud is bringing the power of cloud infrastructure and AI models to on-premises computing to unlock new opportunities for decision makers and developers without compromising on data residency, connectivity, regulatory compliance, or local data processing. Explore the future of distributed cloud computing with experts from Google Cloud and their customers. Finally, we will dive deep into the latest generative AI and enterprise search capabilities, and the configurations available in Google Distributed Cloud.

"Talk-to-Manuals" leverages Google Cloud's Vertex AI Search and Conversation, along with a multi-turn GenAI agent, to provide customers with easy access to vehicle information. The solution integrates and indexes diverse data sources—including unstructured PDFs and structured data from FAQs—for efficient retrieval. Semantic search, powered by LLMs, understands user intent, delivering highly relevant results beyond keyword matching. The interactive multi-turn conversation guides users towards precise information, refining queries and increasing accuracy. Data filtering based on vehicle attributes (model, year, engine type) combined with sophisticated GenAI-powered post-processing ensures tailored results. The successful implementation of "Talk-to-Manuals" significantly improved customer service by providing a more accessible and intuitive way for customers to find information.

Join GXO's SVP of Data/AI/ML, Ramin Rastin, and Senior Director of Data, Americo, as they share their remarkable journey of implementing Generative AI and Vertex AI within their organization. Discover how GXO leveraged Google Cloud's cutting-edge technology to achieve significant scalability and efficiency gains, enhance business operations, and improve user experience. In this session, you'll learn how GXO:
- Scaled rapidly: deployed 15-20 AI-powered assistants within the first six months of 2025, each reducing workloads by 40-60%.
- Enhanced flexibility: developed a Generic Chat Assistant mid-project to adapt to evolving business needs.
- Improved accessibility: created an intuitive user interface for all users, regardless of technical expertise.
- Centralized control: implemented an Admin Page with role-based access control for enhanced security and governance.

session
by Alex Braylan (Revionics), Aakriti Bhargava (Revionics), Derek Egan (Google Cloud), Anand Iyer (Google Cloud), Bryan Goodman (Ford Motor Company)

Ready to build production-grade AI agents to solve real-world challenges? Join us for a deep dive on the Vertex AI Agent Builder. Learn how to create sophisticated, multi-agent applications that combine generative AI with deterministic workflows. We'll cover key features including the Google Agent Development Kit (ADK), the Vertex AI Agent Garden, and the Vertex AI Agent Engine. We’ll show you how to build, deploy and monitor agents in production. You will also learn directly from customers about their agents built on Vertex AI.
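
For orientation, a minimal Agent Development Kit (ADK) sketch of the pattern the session covers: one agent that mixes generative reasoning with a deterministic tool call. The agent name, tool, and model ID are illustrative, not from the session.

```python
# Minimal ADK sketch: one agent, one Python function exposed as a tool.
from google.adk.agents import Agent

def get_price(product_id: str) -> dict:
    """Stand-in for a deterministic backend lookup (pricing service, DB, ...)."""
    return {"product_id": product_id, "price_usd": 19.99}

root_agent = Agent(
    name="pricing_assistant",
    model="gemini-2.0-flash",          # any Gemini model enabled in your project
    instruction="Answer pricing questions. Call get_price for exact numbers.",
    tools=[get_price],                 # plain Python functions become tools
)
# Try it locally with `adk run` or `adk web`, then deploy to Agent Engine.
```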

Generative AI and machine learning (ML) are transforming industries, but many smaller organizations believe these technologies are out of reach due to limited resources and specialized skills. In this session, we’ll demonstrate how BigQuery is changing the game, making gen AI and ML accessible to teams of all sizes. Learn how BigQuery – with its serverless architecture, built-in ML capabilities, and integration with Vertex AI – empowers smaller teams to unlock the power of AI, drive innovation, and gain a competitive edge.
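
A hedged sketch of what "built-in ML" looks like in practice: a remote Gemini model defined once in BigQuery and invoked with ML.GENERATE_TEXT from the Python client, with no infrastructure to manage. The dataset, Cloud resource connection, table, and model endpoint names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# One-time setup: a remote model backed by a Vertex AI Gemini endpoint.
client.query("""
CREATE OR REPLACE MODEL `my_dataset.gemini_model`
  REMOTE WITH CONNECTION `us.my_vertex_connection`
  OPTIONS (endpoint = 'gemini-1.5-flash')
""").result()

# Serverless batch generation straight over a table.
rows = client.query("""
SELECT ml_generate_text_llm_result AS summary
FROM ML.GENERATE_TEXT(
  MODEL `my_dataset.gemini_model`,
  (SELECT CONCAT('Summarize this review: ', review_text) AS prompt
   FROM `my_dataset.reviews` LIMIT 10),
  STRUCT(0.2 AS temperature, TRUE AS flatten_json_output))
""").result()

for row in rows:
    print(row.summary)
```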

session
by Ed Olson-Morgan (Marsh McLennan), Swarup Pogalur (Wells Fargo), Geir Sjurseth (Google Cloud), Antony Arul (Google Cloud)

Organizations are racing to deploy generative AI solutions built on large language models (LLMs), but struggle with management, security, and scalability. Apigee is here to help. Join us to discover how the latest Apigee updates enable you to manage and scale gen AI at the enterprise level. Learn from Google’s own experience and our work with leading customers to address the challenges of productionizing gen AI.

Unlock the power of lightning-fast vector search for your generative AI applications. This session dives deep into Memorystore vector search, demonstrating how to achieve single-digit millisecond latency on over a billion vectors. Explore cutting-edge gen AI application architectures that leverage vector search and other Google Cloud services to deliver exceptional user experiences.
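
To make the architecture concrete, here is a hedged sketch of vector search against a Redis-compatible Memorystore instance using redis-py: build a vector index once, then run KNN queries against it. The endpoint, index name, field names, and dimensionality are assumptions.

```python
import numpy as np
import redis
from redis.commands.search.field import TagField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(host="10.0.0.3", port=6379)      # Memorystore endpoint (placeholder)

# One-time: index hashes under "doc:*" with a 768-dim float32 embedding field.
r.ft("doc_idx").create_index(
    fields=[
        TagField("category"),
        VectorField("embedding", "HNSW",
                    {"TYPE": "FLOAT32", "DIM": 768, "DISTANCE_METRIC": "COSINE"}),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

def knn(query_vec: np.ndarray, k: int = 10):
    # KNN query: nearest k documents to query_vec by cosine distance.
    q = (Query(f"*=>[KNN {k} @embedding $vec AS score]")
         .sort_by("score")
         .return_fields("category", "score")
         .dialect(2))
    return r.ft("doc_idx").search(
        q, query_params={"vec": query_vec.astype(np.float32).tobytes()}
    )
```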

Discover real-world examples of how generative AI is delivering tangible success across key industries – and where this technology is headed next. Hear from two experienced Google Cloud partners, EPAM and Persistent, as they share their experiences of harnessing the broad power of gen AI to solve niche challenges within traditional organizations.

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style! In this episode, host Murilo is joined by returning guest Paolo, Data Management Team Lead at dataroots, for a deep dive into the often-overlooked but rapidly evolving domain of unstructured data quality. Tune in for a field guide to navigating documents, images, and embeddings without losing your sanity. What we unpack:
- Data management basics: metadata, ownership, and why Excel isn’t everything.
- Structured vs. unstructured data: how the wild west of PDFs, images, and audio is redefining quality.
- Data quality challenges for LLMs: from apples and pears to rogue chatbots with "legally binding" hallucinations.
- Practical checks for document hygiene: versioning, ownership, embedding similarity, and tagging strategies (see the short sketch below).
- Retrieval-augmented generation (RAG): when ChatGPT meets your HR policies and things get weird.
- Monitoring and governance: building systems that flag rot before your chatbot gives out 2017 vacation rules.
- Tooling and gaps: where open source is doing well, and where we're still duct-taping workflows.
- Real-world inspirations: a look at how QuantumBlack (McKinsey) is tackling similar issues with their AI for DQ framework.
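
The embedding-similarity hygiene check mentioned above can be as simple as flagging document pairs whose embeddings are nearly identical, a common symptom of stale duplicates or unversioned copies. A toy sketch, assuming embeddings were computed elsewhere:

```python
import numpy as np

def near_duplicates(doc_ids: list[str], embeddings: np.ndarray, threshold: float = 0.97):
    """Return (doc_a, doc_b, cosine_similarity) for suspiciously similar pairs."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                     # pairwise cosine similarities
    pairs = []
    for i in range(len(doc_ids)):
        for j in range(i + 1, len(doc_ids)):
            if sims[i, j] >= threshold:
                pairs.append((doc_ids[i], doc_ids[j], float(sims[i, j])))
    return pairs
```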

Build more capable and reliable AI systems by combining context-aware retrieval-augmented generation (RAG) with agentic decision-making in an enterprise AI platform, all in Java! This session covers everything from architecture, context construction, and model routing to action planning, dynamic retrieval, and recursive reasoning, as well as the implementation of essential guardrails and monitoring systems for safe deployments. Learn about best practices, trade-offs, performance, and advanced techniques like evaluations and model context protocol.

Generative AI agents have emerged as the leading architecture for implementing complex application functionality. Tools are the way that agents access the data and systems they need. But building and deploying tools at scale brings new challenges. Learn how MCP Toolbox for Databases, an open source server for gen AI tool management, enables platforms like LangGraph and Vertex AI to easily connect to enterprise databases.
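
A heavily hedged sketch of that wiring, assuming the toolbox-langchain client package, a Toolbox server already running locally with a tools.yaml that defines a "my-toolset" toolset, and LangGraph's prebuilt ReAct agent; all names here are placeholders.

```python
from toolbox_langchain import ToolboxClient
from langgraph.prebuilt import create_react_agent
from langchain_google_vertexai import ChatVertexAI

client = ToolboxClient("http://127.0.0.1:5000")
tools = client.load_toolset("my-toolset")        # tool definitions live server-side

agent = create_react_agent(
    ChatVertexAI(model_name="gemini-1.5-pro"),   # any chat model LangGraph supports
    tools,
)
result = agent.invoke(
    {"messages": [("user", "Which hotels have availability next weekend?")]}
)
```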

Join us for an interactive session where we’ll build, deploy, and scale inference apps. Imagine creating and launching generative AI apps that deliver personalized recommendations and stunning images, all with the unparalleled efficiency and scalability of serverless computing. You’ll learn how to build gen AI apps effortlessly using Gemini Code Assist; deploy gen AI apps in minutes on Cloud Run, using Vertex AI or on-demand, scale-to-zero serverless GPUs; and optimize the performance and cost of AI workloads by implementing best practices.
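
As a minimal sketch of the serving side, here is a small Flask app that forwards prompts to Gemini on Vertex AI and can be containerized and deployed to Cloud Run; the project, region, and model ID are placeholders, and deployment details (Dockerfile, gcloud command) are omitted.

```python
import os

import vertexai
from flask import Flask, jsonify, request
from vertexai.generative_models import GenerativeModel

vertexai.init(project=os.environ.get("GOOGLE_CLOUD_PROJECT"), location="us-central1")
model = GenerativeModel("gemini-1.5-flash")
app = Flask(__name__)

@app.post("/generate")
def generate():
    # Forward the user's prompt to Gemini and return the generated text.
    prompt = request.get_json()["prompt"]
    response = model.generate_content(prompt)
    return jsonify({"text": response.text})

if __name__ == "__main__":
    # Cloud Run injects PORT; scale-to-zero means you only pay while serving.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```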

This talk demonstrates a fashion app that leverages the power of AlloyDB, Google Cloud’s fully managed PostgreSQL-compatible database, to provide users with intelligent recommendations for matching outfits. When users upload photos of their clothes, the app generates styling insights on how to pair each outfit, along with real-time fashion advice. This is enabled through an intuitive contextual (vector) search powered by AlloyDB and Google’s ScaNN index, which delivers fast vector search results with low-latency queries and response times. Along the way, we’ll showcase the power of the AlloyDB columnar engine on the joins the application needs to generate style recommendations. To complete the experience, we’ll use the Vertex AI Gemini API through the Spring and LangChain4j integrations for generative recommendations and a visual representation of the personalized style. The entire application is built on the Java Spring Boot framework and deployed serverlessly on Cloud Run, ensuring scalability and cost efficiency. This talk explores how these technologies work together to create a dynamic and engaging fashion experience.
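
The app in the talk is Java/Spring Boot, but the AlloyDB pieces it highlights are plain SQL; the short Python script below is used only to run that SQL for illustration. The extension, index options, and columnar engine call follow AlloyDB's documented ScaNN and columnar engine features, while the table, column, DSN, and tuning values are hypothetical.

```python
import psycopg

with psycopg.connect("host=10.0.0.5 dbname=fashion user=app") as conn:
    # ScaNN index for fast approximate nearest-neighbour search over embeddings.
    conn.execute("CREATE EXTENSION IF NOT EXISTS alloydb_scann")
    conn.execute(
        "CREATE INDEX IF NOT EXISTS outfits_embedding_idx ON outfits "
        "USING scann (embedding cosine) WITH (num_leaves = 100)"
    )
    # Columnar engine acceleration for the join/filter columns used by the
    # recommendation queries (assumes the columnar engine is enabled).
    conn.execute("SELECT google_columnar_engine_add('outfits', 'category,color')")
    # Changes are committed when the with-block exits cleanly.
```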

Move your generative AI projects from proof of concept to production. In this interactive session, you’ll learn how to automate key AI lifecycle processes—evaluation, serving, and RAG—to accelerate your real-world impact. Get hands-on advice from innovative startups and gain practical strategies for streamlining workflows and boosting performance.

Learn how LG AI Research uses Google Cloud AI Hypercomputer to build its EXAONE family of LLMs and innovative agentic AI experiences based on those models. EXAONE 3.5, a class of bilingual models that learn and understand both Korean and English, recorded world-class performance in Korean. The collaboration between LG AI Research and Google Cloud enabled LG to significantly enhance model performance, reduce inference time, and improve resource efficiency through Google Cloud's easy-to-use, scalable infrastructure.

Generative AI is transforming every role in your organization. In this panel, Google and leaders from AI platform partners Salesforce and Atlassian will share how Gemini for Google Workspace and partner integrations can transform how your organization works, and Wayfair will share how they’re using Gemini and Salesforce to drive their business. Discover how Workspace empowers the future of work, with Gemini integrated into how every role gets work done, boosting productivity and unlocking growth.