This session will explore HAVI’s transformative shift from a traditional security information and event management (SIEM) solution to a platform that provides threat detection at petabyte scale and millisecond search performance using the Chronicle SecOps Suite. We’ll see how HAVI navigated the complex regulatory environment to set up a system that allowed them to benefit from a fully managed service, and independently addressed scalability and platform flexibility challenges to create a more cost-effective and high-performing solution.
Imagine running your non-relational workloads at relational consistency and unlimited scale. Yahoo! dared to dream it, and with Google Spanner, it plans to make it a reality. Dive into its modernization plans for the Mail platform, consolidating diverse databases, and unlocking innovation with unprecedented scale. Be part of the conversation and see how Spanner can transform your database landscape.
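For a concrete sense of what Spanner's relational consistency looks like from application code, here is a minimal sketch using the google-cloud-spanner Python client; this is not Yahoo's code, and the instance, database, and table names are hypothetical placeholders.

```python
# Minimal Spanner read/write sketch; instance, database, table, and column
# names below are hypothetical placeholders, not Yahoo's schema.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("mail-instance")    # hypothetical instance ID
database = instance.database("mail-metadata")  # hypothetical database ID

# Transactional, strongly consistent write.
with database.batch() as batch:
    batch.insert(
        table="Messages",
        columns=("MailboxId", "MessageId", "Subject"),
        values=[("user-123", "msg-001", "Hello, Spanner")],
    )

# Strongly consistent read over the globally distributed database.
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT MessageId, Subject FROM Messages WHERE MailboxId = 'user-123'"
    )
    for message_id, subject in rows:
        print(message_id, subject)
```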
We cover the key principles of building a modern, cloud-first data platform:
- A reference architecture for a simple, flexible, modular lakehouse
- A process to select the right point of contact for the job
- How to build and organize your team to continue to grow the platform
In this session, we discuss not just what to build, but how to build it, and why the topics we cover are important for success.
Generative AI is driving the next industrial revolution, and businesses are racing to use it to improve customer experiences, healthcare, and operations. But to succeed, they need a plan. In this session, we'll highlight practical use cases and challenges of scaling AI in 2024. We'll also show you how NVIDIA DGX Cloud on Google Cloud supports the entire AI app lifecycle, from development to deployment. You'll learn how to accelerate the return on your AI investment.
By attending this session, your contact information may be shared with the sponsor for relevant follow-up for this event only.
Implementing generative AI applications requires large amounts of computation that can seamlessly scale to train, fine-tune, and serve the models. NVIDIA and Google Cloud have partnered to offer a range of GPU options to address this challenge. Using NVIDIA GPUs with Google Kubernetes Engine removes the heavy lifting needed to set up AI deployments, automate orchestration, manage large training clusters, and serve low-latency inference. Join us to see what ElevenLabs has built using NVIDIA GPUs with GKE. Please note: seating is limited and on a first-come, first-served basis; standing areas are available.
We all have a role to play in the fight against global warming.
You’ll discover the fundamental concepts of different infrastructure designs, from legacy hosting to modern serverless hosting. We’ll cover different ways to deploy the same application on these infrastructures and the key impact each has on three metrics: duration, cost, and CO2 emissions.
You’ll also learn the core concepts of serverless services and their advantages, and how to measure the carbon footprint of your projects with the Carbon Footprint dashboard.
Learn how to publish your Google Workspace apps and Add-ons in the Google Workspace Marketplace. We’ll share details on how to create a Marketplace listing and how to configure your app using the Marketplace SDK, and we’ll discuss our app review process. We’ll also share some tips and tricks on how to best manage your app listing.
Implementing generative AI applications with large language models (LLMs) and diffusion models requires large amounts of computation that can seamlessly scale to train, fine-tune, and serve the models. Google Cloud TPUs are built to meet that demand. Cohere is leveraging the compute-heavy Cloud TPU v4 and v5e to train sophisticated gen AI models that meet the heightened needs of its enterprise users. Check out how Cohere and Cloud TPUs are delivering enterprise-tailored LLMs that can help increase business productivity by automating time-consuming and monotonous workflows. Please note: seating is limited and on a first-come, first-served basis; standing areas are available.
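As a rough illustration of the programming model only (not Cohere's training code), the sketch below uses JAX to replicate a simple computation across the TPU cores visible on a Cloud TPU VM; the shapes and the computation itself are arbitrary placeholders.

```python
# Generic JAX-on-TPU sketch: replicate a toy computation across all local
# TPU cores. Shapes and the computation itself are placeholders.
import jax
import jax.numpy as jnp

print(jax.devices())  # on a Cloud TPU VM, this lists the TPU cores

@jax.pmap  # run one copy of the function per local device
def matmul_step(x):
    return jnp.dot(x, x.T)

n_devices = jax.local_device_count()
x = jnp.ones((n_devices, 1024, 1024))  # one shard per core
out = matmul_step(x)
print(out.shape)  # (n_devices, 1024, 1024)
```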
It is complex and costly to change your selected regions after you set up your production AI workloads. I’d like to share seven tips and GCP tools to help you select the right region, including Network Intelligence Center, the Cost Estimator, Assured Workloads, the Simple GCP Region Picker, and more. They will help you save cost, reduce latency, stay green, and be ready for growth.
This session features a new low/no-code editor that helps you take an idea that could leverage AI into production faster than ever before. Learn about Visual Blocks, how Google has already used it to successfully speed up production times, and how to make your very own custom nodes to integrate with anything. Learn how to get started productively with Visual Blocks ML. Enable anyone at your company to create or test working AI prototypes fast with an easy web interface by reusing the building blocks defined by Google and your engineering team.
GitLab and Google Cloud are partnering to deliver a comprehensive, integrated DevSecOps solution that provides best-in-class reliability, efficiency and end-to-end security. In this session you'll learn how you can quickly and securely deploy workloads to Google Cloud using GitLab’s new integrations, which include streamlined IAM configuration, Artifact Registry integration, and optimized GitLab templates and workflows for Google Cloud.
Unlock the power of retrieval-augmented generation (RAG) for generative AI apps with the Vertex AI app builder platform. Vertex AI Vector Search powers blazingly fast embeddings retrieval for your search, recommendations, ad serving, and other gen AI applications. Multimodal embeddings scale your search across text, images, and other modalities. Customize your RAG pipeline with document understanding capabilities. With these comprehensive offerings, Vertex AI makes building robust RAG systems easy.
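To make the retrieval step concrete, here is a hedged Python sketch that embeds a query with a Vertex AI embedding model and fetches nearest neighbors from a deployed Vector Search index; the project, model choice, endpoint resource name, and deployed-index ID are placeholders, and a full RAG app would then pass the matched chunks to a generation model.

```python
# Sketch of the retrieval half of a RAG pipeline on Vertex AI.
# Project, location, model, endpoint, and index IDs are placeholders.
import vertexai
from vertexai.language_models import TextEmbeddingModel
from google.cloud import aiplatform

vertexai.init(project="my-project", location="us-central1")
aiplatform.init(project="my-project", location="us-central1")

# 1. Embed the user query.
embedding_model = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")
query_vector = embedding_model.get_embeddings(["How do I rotate my API keys?"])[0].values

# 2. Retrieve nearest document chunks from a deployed Vector Search index.
index_endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name="projects/123/locations/us-central1/indexEndpoints/456"  # placeholder
)
neighbors = index_endpoint.find_neighbors(
    deployed_index_id="docs_index_deployed",  # placeholder
    queries=[query_vector],
    num_neighbors=5,
)

# 3. The matched chunk IDs and distances feed the generation step.
for match in neighbors[0]:
    print(match.id, match.distance)
```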
To extract the maximum performance, reliability, and security out of every instance, from enterprise applications to AI/ML, we built a cutting-edge system of custom silicon, security microcontrollers, and tiered scale-out offloads: Titanium. Titanium already underpins all of our latest compute instances. Join this session to see why Titanium matters, how it works, and how to maximize the benefit in your workloads.
Learn to design and deploy the Google Cloud global front end to protect, scale, and deliver web experiences from infrastructure running in the cloud or on-premises. Get an overview of the global front end with load balancing, CDN, and web protection including DDoS mitigation. Then move into programmability with Service Extensions callouts and designing for scenarios across clouds and on-premises. We will cover integration into continuous integration/continuous delivery (CI/CD) workflows, show a demo, and learn from a customer about their use case and lessons learned.
Do you want to know your options for running Java on Google Cloud? We’ll explore various options for running workloads written using the latest Java and Jakarta EE versions on serverless offerings like Google App Engine and Google Cloud Run. Furthermore, we'll look at optimizing your runtime performance using various frameworks.
Discover how MosquitoMax transformed an outdated consumer device into a cutting-edge smart system. Designed for professionals in manufacturing and consumer electronics, this session is for anyone looking to modernize decades-old technology. We’ll demo how a hardware prototype evolves into a custom printed circuit board, highlighting each integration point with Google Cloud, Firebase, and Flutter. By session end, you’ll have a completely new device with a reliable cloud backend and a smartphone app.
Obscurity isn't security. Unmanaged and undocumented APIs increase your attack surface by 30%. In today's landscape where API attacks are prevalent, don't risk data breaches and security vulnerabilities. Join our experts from Google Cloud and BMW to: 1) discover how Apigee's Advanced API Security helps you shine a light on your shadow APIs, and 2) learn how BMW uses Advanced API Security to protect its 250 APIs and 5 billion API calls.
Workflows is a versatile service for orchestrating and automating microservices, business processes, data, and ML pipelines, including generative AI calls. Explore the benefits of using Workflows to orchestrate generative AI calls to Vertex AI. Dive into a demo showcasing how you can effortlessly orchestrate and parallelize generative AI calls, creating a map-reduce style workflow for summarizing large amounts of text.
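The demo itself is written in Workflows' own YAML syntax; as a rough Python analogue of the same map-reduce pattern (not the demo code), the sketch below fans out per-chunk summarization calls to a Vertex AI model in parallel and then reduces the partial summaries into one. The model name, chunk size, and prompt are illustrative assumptions.

```python
# Map-reduce summarization sketch: parallel per-chunk summaries, then a
# final reduce pass. Model, prompt, and chunk size are illustrative.
from concurrent.futures import ThreadPoolExecutor

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # placeholder project
model = GenerativeModel("gemini-1.5-flash")                   # illustrative model choice

def summarize(text: str) -> str:
    return model.generate_content(f"Summarize in two sentences:\n\n{text}").text

def map_reduce_summary(document: str, chunk_size: int = 8000) -> str:
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    # Map: summarize each chunk concurrently.
    with ThreadPoolExecutor(max_workers=8) as pool:
        partial_summaries = list(pool.map(summarize, chunks))
    # Reduce: summarize the concatenated partial summaries.
    return summarize("\n".join(partial_summaries))
```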
This session will provide an overview of designing VPC connectivity for segmenting sensitive applications for security and compliance reasons in Google Cloud. In particular, this session covers how this can be achieved with distributed applications with a global user base and scenarios that leverage Cross-Cloud Network (CCN). The session will be co-presented with a major global financial technology customer, a leader in the industry when it comes to building such distributed applications. Please note: seating is limited and on a first-come, first-served basis; standing areas are available.
More generative AI models are built on PyTorch than on any other framework. We partner with Lightricks to share how PyTorch/XLA offers a performant, automatic compiler experience with all the ease-of-use and ecosystem benefits of PyTorch. Learn from Hugging Face as they share more about the latest features that improve PyTorch/XLA performance and usability on GPUs and TPUs.
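For orientation, here is a minimal, generic PyTorch/XLA training-step sketch (not Lightricks' or Hugging Face's code): the usual PyTorch module and optimizer run unchanged on an XLA device, and xm.optimizer_step marks the boundary at which XLA compiles and executes the traced graph on the TPU or GPU.

```python
# Generic PyTorch/XLA training loop sketch with a toy model and random data.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()                     # TPU (or GPU) XLA device
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(64, 512, device=device)          # stand-in for a real batch
    y = torch.randint(0, 10, (64,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    xm.optimizer_step(optimizer)  # steps the optimizer and triggers XLA execution
```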