AI Hypercomputer is Google Cloud's integrated supercomputing system, designed to make implementing AI at scale easier and more efficient. In this session, we’ll explore its key benefits and how it simplifies complex AI infrastructure environments. Then, hear firsthand from industry leaders Shopify, Technology Innovation Institute, Moloco, and LG AI Research on how they leverage Google Cloud’s AI solutions to drive innovation and transform their businesses.
APIs dominate the web, accounting for the majority of all internet traffic. And more AI means more APIs, because they act as an important mechanism to move data into and out of AI applications, AI agents, and large language models (LLMs). So how can you make sure all of these APIs are secure? In this session, we’ll take you through OWASP’s top 10 API and LLM security risks, and show you how to mitigate these risks using Google Cloud’s security portfolio, including Apigee, Model Armor, Cloud Armor, Google Security Operations, and Security Command Center.
Build fully integrated streaming pipelines on Google Cloud and learn how to leverage AlloyDB, Datastream, BigQuery, Looker, and Vertex AI for real-time data analysis.
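To make the ingest side of such a pipeline concrete, here is a minimal sketch of streaming rows into BigQuery with the Python client library; the project, dataset, table, and row contents are hypothetical stand-ins, not part of the session itself.

```python
# Stream rows into BigQuery so they become queryable within seconds.
# Table and row contents below are hypothetical examples.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.analytics.clickstream_events"  # hypothetical table

rows = [
    {"user_id": "u123", "event": "page_view", "ts": "2025-04-09T12:00:00Z"},
    {"user_id": "u456", "event": "add_to_cart", "ts": "2025-04-09T12:00:01Z"},
]

# insert_rows_json uses the streaming API; it returns a list of errors,
# which is empty when every row was accepted.
errors = client.insert_rows_json(table_id, rows)
if errors:
    print("Row insert errors:", errors)
```

Downstream, those rows can be visualized in Looker or fed to Vertex AI for real-time predictions.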
Empower your organization to achieve greater efficiency and solve critical business challenges with Google AppSheet's innovative no-code platform. This exclusive panel features industry leaders who achieved remarkable results by using AppSheet to streamline workflows and empower their teams. Hear their inspiring journeys first-hand, gain practical tips, and unlock the full potential of AppSheet to drive growth and innovation within your own organization.
Experience the transformative power of real-time interactions with your dashboards through natural language. Say goodbye to waiting for analysts to provide graphics and conclusions. Witness the immediacy of getting your questions answered accurately and precisely, grounded in your data warehouse. Join us for an immersive exploration of data-driven decision-making, complete with a live demo showcasing a practical business scenario. By attending this session, your contact information may be shared with the sponsor for relevant follow up for this event only.
Simplify and accelerate your Azure Local deployments with Lenovo ThinkAgile MX, a fully integrated solution built for the platform. With Lenovo Open Cloud Automation (LOCA), you can automate the deployment of entire clusters in just a few clicks, making it easy for IT teams or even field technicians to set up systems at the edge. In this session, you’ll see a live demonstration of how ThinkAgile MX and LOCA reduce complexity and enable consistent, reliable outcomes every time.
As Satya Nadella unveils the next steps in integrating AI agents across business functions, Kathleen Mitford, CVP of Industry Marketing, and Satish Thomas, CVP of Business and Industry Solutions Engineering, explore how Microsoft Cloud for Industry enables customers and partners to build, adapt, deploy, and manage solutions tailored to each industry’s unique needs. Focusing on Copilot and generative AI, this broadcast will highlight AI’s transformative impact on productivity, efficiency, and innovation across industries. By combining the power of the Microsoft Cloud with industry-specific AI insights and capabilities and the expertise of a robust partner ecosystem, Microsoft’s approach unlocks AI’s full potential, driving significant outcomes for every industry.
Businesses need to predict what customers want and create personalized experiences to gain a competitive advantage and drive revenue. They need to deliver customized, tailored interactions that increase customer acquisition, improve loyalty, and boost satisfaction. Join Fullstory’s Head of Data Products to learn how data and engineering teams can supercharge tools like Dialogflow and BigQuery with unprecedented behavioral data to accurately forecast and create experiences that outpace the competition and keep customers coming back for more. By attending this session, your contact information may be shared with the sponsor for relevant follow up for this event only.
There are a lot of amazing AI features being announced at Google Cloud Next. To take full advantage of them, you need to make sure your data is managed in a secure, centralized way. In this talk, you’ll learn how to set up your lakehouse to get your data ready for downstream workloads. You’ll see a demo of an architecture built from Google Cloud products that covers managing permissions on your data, configuring metadata management, and performing transformations using open source frameworks.
Over the last decade, Big Data was everywhere. Let's set the record straight on what is and isn't Big Data. We have been consumed by a conversation about data volumes when we should focus more on the immediate task at hand: Simplifying our work.
Some of us may have Big Data, but our quest to derive insights from it is measured in small slices of work that fit on your laptop or in your hand. Easy data is here; let's make the most of it.
📓 Resources
Big Data is Dead: https://motherduck.com/blog/big-data-is-dead/
Small Data Manifesto: https://motherduck.com/blog/small-data-manifesto/
Small Data SF: https://www.smalldatasf.com/
➡️ Follow Us
LinkedIn: https://linkedin.com/company/motherduck
X/Twitter: https://twitter.com/motherduck
Blog: https://motherduck.com/blog/
Explore the "Small Data" movement, a counter-narrative to the prevailing big data conference hype. This talk challenges the assumption that data scale is the most important feature of every workload, defining big data as any dataset too large for a single machine. We'll unpack why this distinction is crucial for modern data engineering and analytics, setting the stage for a new perspective on data architecture.
Delve into the history of big data systems, starting with the non-linear hardware costs that plagued early data practitioners. Discover how Google's foundational papers on GFS, MapReduce, and Bigtable led to the creation of Hadoop, fundamentally changing how we scale data processing. We'll break down the "big data tax"—the inherent latency and system complexity overhead required for distributed systems to function, a critical concept for anyone evaluating data platforms.
Learn about the architectural cornerstone of the modern cloud data warehouse: the separation of storage and compute. This design, popularized by systems like Snowflake and Google BigQuery, allows storage to scale almost infinitely while compute resources are provisioned on-demand. Understand how this model paved the way for massive data lakes but also introduced new complexities and cost considerations that are often overlooked.
We examine the cracks appearing in the big data paradigm, especially for OLAP workloads. While systems like Snowflake are still dominant, the rise of powerful alternatives like DuckDB signals a shift. We reveal the hidden costs of big data analytics, exemplified by a petabyte-scale query costing nearly $6,000, and argue that for most use cases, it's too expensive to run computations over massive datasets.
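As a sanity check on that figure, the arithmetic is simple, assuming an on-demand rate of roughly $6 per TiB scanned (an assumption for illustration; list prices vary by provider, region, and pricing edition):

```python
# Back-of-the-envelope cost of a query that scans a full petabyte,
# assuming ~$6 per TiB scanned (illustrative; real rates vary).
price_per_tib_usd = 6.0   # assumed on-demand rate
tib_per_pib = 1024        # 1 PiB = 1024 TiB

cost = price_per_tib_usd * tib_per_pib
print(f"Full scan of 1 PiB: ${cost:,.0f}")  # prints: Full scan of 1 PiB: $6,144
```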
The key to efficient data processing isn't your total data size, but the size of your "hot data" or working set. This talk argues that the revenge of the single node is here, as modern hardware can often handle the actual data queried without the overhead of the big data tax. This is a crucial optimization technique for reducing cost and improving performance in any data warehouse.
Discover the core principles for designing systems in a post-big data world. We'll show that since only 1 in 500 users run true big data queries, prioritizing simplicity over premature scaling is key. For low latency, process data close to the user with tools like DuckDB and SQLite. This local-first approach offers a compelling alternative to cloud-centric models, enabling faster, more cost-effective, and innovative data architectures.
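As a taste of that local-first approach, a minimal DuckDB sketch looks like this; the file name is a hypothetical example:

```python
# Query a plain local file directly with DuckDB: no cluster, no
# warehouse, just an embedded engine. 'events.csv' is a placeholder.
import duckdb

duckdb.sql("""
    SELECT event, COUNT(*) AS n
    FROM 'events.csv'        -- DuckDB reads CSV/Parquet files in place
    GROUP BY event
    ORDER BY n DESC
""").show()
```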
This hands-on lab equips you with the practical skills to build and deploy a real-world AI-powered chat application leveraging the Gemini LLM APIs. You'll learn to containerize your application using Cloud Build, deploy it seamlessly to Cloud Run, and interact with the Gemini LLM to generate insightful responses. The experience provides a solid foundation for developing engaging, interactive conversational applications.
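For orientation, the core loop of such an application can be sketched as below; the framework choice, route, and model tag are assumptions, not the lab's actual code:

```python
# Minimal chat endpoint that forwards user messages to a Gemini model.
# Framework, route, and model name are assumptions for illustration.
import os

import google.generativeai as genai
from flask import Flask, jsonify, request

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model tag

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    prompt = request.get_json()["message"]
    response = model.generate_content(prompt)
    return jsonify({"reply": response.text})

if __name__ == "__main__":
    # Cloud Run injects PORT; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Containerizing this with Cloud Build and deploying to Cloud Run is then a matter of a Dockerfile plus `gcloud run deploy`.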
If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account with both your work and personal email addresses. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!
This hands-on lab empowers you to build a cutting-edge multimodal question answering system using Google's Vertex AI and the powerful Gemini family of models. By constructing the system from the ground up, you'll gain a deep understanding of its inner workings and the advantages of incorporating visual information into Retrieval Augmented Generation (RAG). You'll leave equipped to customize and optimize your own multimodal question answering systems, unlocking new possibilities for knowledge discovery and reasoning.
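A rough sketch of the retrieve-then-generate step follows, under heavy assumptions: the project, bucket URI, model tag, and similarity helper are all illustrative, not the lab's code.

```python
# Multimodal RAG sketch: pick the most relevant image by embedding
# similarity, then ask Gemini to answer grounded in that image.
import numpy as np
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")  # hypothetical

def top_match(query_emb: np.ndarray, chunk_embs: np.ndarray) -> int:
    # Cosine similarity over a small in-memory index of embeddings.
    sims = chunk_embs @ query_emb / (
        np.linalg.norm(chunk_embs, axis=1) * np.linalg.norm(query_emb))
    return int(np.argmax(sims))

# ...embed the question and candidate images, select one via top_match()...
best_image = Part.from_uri(
    "gs://my-bucket/report-page4.png", mime_type="image/png")  # hypothetical

model = GenerativeModel("gemini-1.5-pro")  # assumed model tag
response = model.generate_content(
    [best_image, "Using the image as context: what trend does the chart show?"])
print(response.text)
```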
Unlock the power of AI! Build your own intelligent agent with hands-on guidance from Google Cloud Consulting's leading AI experts. No experience needed, just a desire to innovate and build into the future with AI!
It's finally possible to bring the awesome power of Large Language Models (LLMs) to your laptop. This talk will explore how to run and leverage small, openly available LLMs to power common tasks involving data, including selecting the right models, practical use cases for running small models, and best practices for deploying small models effectively alongside databases.
Bio: Jeffrey Morgan is the founder of Ollama, an open-source tool for getting up and running with large language models. Prior to Ollama, Jeffrey founded Kitematic, which was acquired by Docker and evolved into Docker Desktop. He has previously worked at companies including Docker, Twitter, and Google.
➡️ Follow Us
LinkedIn: https://www.linkedin.com/company/small-data-sf/
X/Twitter: https://twitter.com/smalldatasf
Website: https://www.smalldatasf.com/
Discover how to run large language models (LLMs) locally using Ollama, the easiest way to get started with small AI models on your Mac, Windows, or Linux machine. Unlike massive cloud-based systems, small open source models are only a few gigabytes, allowing them to run incredibly fast on consumer hardware without network latency. This video explains why these local LLMs are not just scaled-down versions of larger models but powerful tools for developers, offering significant advantages in speed, data privacy, and cost-effectiveness by eliminating hidden cloud provider fees and risks.
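To see how little code that takes, a first call through the Ollama Python client can be this small (the model tag is an example; pull it first with `ollama pull llama3.2`):

```python
# Smallest possible local-LLM call via the Ollama Python client.
# Requires the Ollama server running locally and the model pulled.
import ollama

response = ollama.chat(
    model="llama3.2",  # example tag; any pulled model works
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```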
Learn the most common use case for small models: combining them with your existing factual data to prevent hallucinations. We dive into retrieval augmented generation (RAG), a powerful technique where you augment a model's prompt with information from a local data source. See a practical demo of how to build a vector store from simple text files and connect it to a model like Gemma 2B, enabling you to query your own data using natural language for fast, accurate, and context-aware responses.
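A stripped-down version of that demo might look like the following sketch; the embedding model, chat model tag, and file paths are assumptions rather than the video's exact code:

```python
# Minimal local RAG: embed text files into an in-memory store, retrieve
# the closest match, and answer with a small local model via Ollama.
import glob

import numpy as np
import ollama

def embed(text: str) -> np.ndarray:
    out = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(out["embedding"])

# Build the "vector store": one embedding per local text file.
docs = {path: open(path).read() for path in glob.glob("notes/*.txt")}
store = {path: embed(text) for path, text in docs.items()}

question = "What did we decide about the Q3 launch?"
q = embed(question)

# Retrieve the most similar document by cosine similarity.
best = max(store, key=lambda p: float(
    store[p] @ q / (np.linalg.norm(store[p]) * np.linalg.norm(q))))

answer = ollama.chat(model="gemma2:2b", messages=[{  # example model tag
    "role": "user",
    "content": f"Answer using this context:\n{docs[best]}\n\nQuestion: {question}",
}])
print(answer["message"]["content"])
```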
Explore the next frontier of local AI with small agents and tool calling, a new feature that empowers models to interact with external tools. This guide demonstrates how an LLM can autonomously decide to query a DuckDB database, write the correct SQL, and use the retrieved data to answer your questions. This advanced tutorial shows you how to connect small models directly to your data engineering workflows, moving beyond simple chat to create intelligent, data-driven applications.
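A hedged sketch of that loop, using the Ollama client's tool-calling support (the tool schema, model tag, and file name are illustrative):

```python
# Let a local model decide to run SQL against DuckDB, then answer
# using the query result. Schema and names are examples only.
import duckdb
import ollama

def run_sql(query: str) -> str:
    return str(duckdb.sql(query).fetchall())

tools = [{
    "type": "function",
    "function": {
        "name": "run_sql",
        "description": "Run a SQL query against local DuckDB",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "How many rows are in 'events.csv'?"}]
response = ollama.chat(model="llama3.1", messages=messages, tools=tools)

# If the model chose the tool, execute it and hand the results back.
tool_calls = response["message"].get("tool_calls") or []
if tool_calls:
    messages.append(response["message"])
    for call in tool_calls:
        result = run_sql(call["function"]["arguments"]["query"])
        messages.append({"role": "tool", "content": result})
    final = ollama.chat(model="llama3.1", messages=messages)
    print(final["message"]["content"])
```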
Get started with practical applications for small models today, from building internal help desks to streamlining engineering tasks like code review. This video highlights how small and large models can work together effectively and shows that open source models are rapidly catching up to their cloud-scale counterparts. There's never been a better time for developers and data analysts to harness the power of local AI.
Join us to discuss serverless computing and event-driven architectures with Cloud Run functions. Learn a quick and secure way to connect services and build event-driven architectures with multiple trigger types (HTTP, Pub/Sub, and Eventarc). You'll also get an introduction to Eventarc Advanced, which provides centralized access control over your events with support for cross-project delivery.
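For orientation, the two trigger styles look roughly like this with the open-source Functions Framework for Python; handler names and the event source are examples:

```python
# One HTTP-triggered function and one CloudEvent-triggered function
# (e.g. a Pub/Sub message routed through Eventarc).
import functions_framework

@functions_framework.http
def hello_http(request):
    # Invoked by an HTTP request to the function's URL.
    name = request.args.get("name", "world")
    return f"Hello, {name}!"

@functions_framework.cloud_event
def on_message(event):
    # Invoked with a CloudEvent envelope; payload lives in event.data.
    print("Received event:", event["type"], event.data)
```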
Building an assistant capable of answering complex, company-specific questions and executing workflows requires first building a powerful Retrieval Augmented Generation (RAG) system. Founding engineer Eddie Zhou explains how Glean built its RAG system on Google Cloud, combining a domain-adapted search engine with dynamic prompts to harness the full capabilities of Gemini's reasoning engine. By attending this session, your contact information may be shared with the sponsor for relevant follow up for this event only.
Cloud providers deliver powerful security capabilities, yet most organizations struggle to fully leverage them to ensure their environment is secure-by-design. In this session, we’ll explore how to harness the Azure native security controls to accelerate a prevention-first model. Learn how Native brings order to preventive policies across cloud providers, allowing you to safely achieve a secure-by-design environment in minutes.
Use Google Cloud products to create highly secure, robust architecture designs. Building secure infrastructure is imperative for protecting sensitive data, maintaining trust with stakeholders, and ensuring the overall security and stability of an organization's digital presence. Attend the session to gain insights into the elements of a secure architectural framework built with cloud-native tools. By attending this session, your contact information may be shared with the sponsor for relevant follow up for this event only.
Co-create your future with strategic, AI-powered business visioning! Join Google Cloud Consulting for interactive whiteboarding sessions designed to solve your biggest challenges and unlock new opportunities.
If you’re responsible for an enterprise cloud, you know how expensive and time-consuming it is to operate hundreds of individual services, at all hours of the day. That’s why many engineering leaders are now using AI to manage their cloud — from rightsizing resources to fixing availability issues. But is this safe? And what happens when something, inevitably, goes wrong? Sedai CEO Suresh Mathew explains the important benefits — and risks — of trusting your cloud to AI.