talk-data.com

Topic: Kubernetes
Tags: container_orchestration, devops, microservices
560 tagged activities

Activity Trend: peak of 40 activities per quarter, 2020-Q1 to 2026-Q2

Activities (560 · Newest first)

Ditch legacy and embrace freedom with AlloyDB Omni, your hybrid and multicloud enterprise database. Run anywhere, from data centers to the public clouds of your choice, and unlock performance and ease of management. Elevate your apps with HTAP and built-in generative AI that builds vector embeddings for lightning-fast search, remotely or locally – no connectivity needed. Simplify operations with the Kubernetes operator: automate lifecycle management, HA/DR, and scaling effortlessly. Learn more about AlloyDB Omni and supercharge your data strategy, anywhere.

Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.

The number of clusters running data apps on Google Kubernetes Engine has grown exponentially, doubling every year since 2019. With the rise of AI/ML along with accelerated compute, data architectures are gaining importance. Join this session to learn about Kubernetes data architectures for AI/ML, storage best practices, data availability and customer use cases. This session is meant to educate you about retooling your skill set for the new paradigm of data on Kubernetes.

Learn how Google Cloud’s backup and storage services secure and protect your data from a variety of threats, such as ransomware, outages, and user errors. Our backup services protect VMs, databases (such as SAP HANA), and Google Kubernetes Engine environments. Expand threat detection capabilities by alerting on suspicious activities around backup through Security Command Center. We’ll also dive into Cloud Storage and our industry-leading turbo replication for dual-region deployments, soft delete, versioning, and more to protect your data.

In this session, you'll learn how the platform team can provide multi-tenant traffic-management infrastructure to optimize performance, route efficiently to reduce costs, and simplify operations. We'll demonstrate how multi-cluster Services and multi-cluster Gateways can be used to abstract the cloud infrastructure, and how Google Kubernetes Engine (GKE) Enterprise can empower the platform team managing fleets of GKE clusters and the teams consuming those clusters.

Text-to-image generative AI models such as the Stable Diffusion family are rapidly growing in popularity. In this session, we explain how to optimize every layer of your serving architecture – including TPU accelerators, orchestration, model server, and ML framework – to gain significant improvements in performance and cost effectiveness. We introduce many new innovations in Google Kubernetes Engine that improve the cost effectiveness of AI inference, and we provide a deep dive into MaxDiffusion, a brand-new library for deploying scalable Stable Diffusion workloads on TPUs.

In this session, you’ll learn how to deploy a fully functional Retrieval-Augmented Generation (RAG) application to Google Cloud using open-source tools and models from Ray, HuggingFace, and LangChain. You’ll learn how to augment it with your own data using Ray on Google Kubernetes Engine (GKE) and Cloud SQL’s pgvector extension, deploy any model from HuggingFace to GKE, and rapidly develop your LangChain application on Cloud Run. After the session, you’ll be able to deploy your own RAG application and customize it to your needs.
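The retrieve-then-augment flow the session describes can be sketched without Ray, pgvector, or LangChain. Below is a minimal, dependency-free stand-in (the corpus, query, and prompt template are all hypothetical): documents are ranked by bag-of-words cosine similarity, and the best match is stuffed into the prompt an LLM would receive. A real deployment would use learned embeddings and a vector store such as pgvector instead.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document most similar to the query.
    A real RAG app would use learned embeddings (e.g. from a
    HuggingFace model) and a vector store such as pgvector."""
    q = Counter(query.lower().split())
    return max(corpus, key=lambda d: cosine(q, Counter(d.lower().split())))

def augment(query: str, corpus: list[str]) -> str:
    """Build the augmented prompt an LLM would receive."""
    context = retrieve(query, corpus)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical mini-corpus standing in for "your own data".
docs = [
    "GKE Autopilot manages nodes for you.",
    "pgvector adds vector similarity search to PostgreSQL.",
    "Cloud Run runs stateless containers.",
]
prompt = augment("how do I do vector similarity search?", docs)
```

The generation half of RAG is just sending `prompt` to whichever model you deployed; only the retrieval and prompt-assembly shape is shown here.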

HSBC has a large number of legacy IBM WebSphere applications that are costly to maintain and pose a compliance risk. This session will discuss how HSBC built a “migration factory” to help developers re-platform existing WebSphere applications onto Google Kubernetes Engine (GKE). The benefits of migrating these applications to GKE include reduced operational costs, improved compliance, increased scalability, faster application development, and improved security. Come learn exactly how HSBC did it and how you can replicate their process and success.

Learn how platform engineering can provide multi-tenant traffic management to optimize performance, route efficiently to reduce costs, and simplify network operations. We’ll demonstrate how multi-cluster services and multi-cluster gateways can be used to abstract the infrastructure for developers.

Learn from Shopify how they built their large-scale Kubernetes network to support 61 million shoppers and $9.3B in sales during Black Friday.

For years, I had wanted to get my hands into Golang. The main issue was always time and the lack of a fitting project. Gemini enabled me to learn the concepts of Golang in one evening and helped me create a small tool that assists in producing the Last Week in Kubernetes Development newsletter. Explore with me what Gemini can do and how it can help you learn a new programming language by contributing to open-source projects. We’ll also take a look at the pitfalls and limits of the system and how you can work around them.

Learn how Google Kubernetes Engine (GKE) Autopilot helped redesign Ubie's microservice platform. Ubie offers AI-based health tech products in Japan and the U.S. Since initially adopting Google Cloud six years ago, Ubie has experienced growth-related challenges, particularly in reliability, security, and privacy. In this session, we delve into the strategic decision to employ GKE Autopilot in Ubie's re-architecture journey.

If you’re a data engineer, MLOps engineer, or procurement officer planning to purchase third-party AI models, you won’t want to miss this. Learn how you can speed up assessment, facilitate procurement, and simplify governance of AI models (including generative AI) on Google Cloud Marketplace. Explore how to easily procure and deploy third-party AI models and frameworks to both Vertex AI and Google Kubernetes Engine. Finally, you’ll hear from Anthropic, who will dive into how their solution deploys via Marketplace to Vertex AI.

The increased adoption of Kubernetes and containerized workloads has brought security challenges for enterprises. Malware and backdoors can pose significant risks to the underlying infrastructure of a Kubernetes cluster, potentially leading to cyber disasters. To address these challenges, there is a growing trend toward using a single tool to secure all applications running in the cloud: focusing on the shift-left approach while also securing the underlying infrastructure and assets at runtime. In this session, we’ll examine the security risks in Kubernetes and how Prisma Cloud can help solve them with a consolidated platform approach, with time for audience Q&A. By attending this session, your contact information may be shared with the sponsor for relevant follow-up for this event only.

In this talk, we delve into the complexities of building enterprise AI applications, including customization, evaluation, and inference of large language models (LLMs). We start by outlining the solution design space and presenting a comprehensive LLM evaluation methodology. Then, we review state-of-the-art LLM customization techniques and introduce NVIDIA Inference Microservices (NIM) along with a suite of cloud-native NVIDIA NeMo microservices that ease LLM deployment and operation on Google Kubernetes Engine (GKE). We conclude with a live demo, followed by practical recommendations for enterprises.
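The abstract doesn't spell out the evaluation methodology itself. As a generic stand-in, the sketch below computes token-overlap F1, a metric common in LLM evaluation harnesses (not necessarily the one this session uses), over hypothetical prediction/reference pairs.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model prediction and a reference,
    as used in SQuAD-style LLM evaluations."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def evaluate(pairs: list[tuple[str, str]]) -> float:
    """Mean F1 over a set of (prediction, reference) pairs."""
    return sum(token_f1(p, r) for p, r in pairs) / len(pairs)

# Hypothetical eval set; a real harness would sample model outputs.
score = evaluate([
    ("the capital of france is paris", "paris"),
    ("gke stands for google kubernetes engine", "google kubernetes engine"),
])
```

A production methodology would add many more metrics (semantic similarity, LLM-as-judge, task-specific checks); this only illustrates the scoring-loop shape.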

Learn how Citadel’s fixed income fund powers their daily financial activities. First, we’ll explore the challenges of calculating profit and loss across thousands of positions, back-testing models and running trading strategies. Then we’ll discuss developing a versatile platform that bursts to thousands of workers while also handling real-time calculations. Finally, we’ll present challenges encountered and give insight on practical solutions teams can apply to their own cloud compute infrastructures.

With Google’s new family of state-of-the-art, lightweight, and easy-to-use open models, Google Cloud is the best place to create great AI-powered experiences. In this talk, we will go over how to unlock the Gemma models’ full potential with Vertex AI and Google Kubernetes Engine, including optimized performance on Google Cloud TPUs and GPUs, and show how easily they can be used to empower your team to succeed.

Internal Developer Platforms (IDPs) are revolutionizing how engineering teams work by streamlining workflows and boosting developer productivity. But building an IDP requires a robust, scalable foundation. In this talk, we'll show you how Google Kubernetes Engine (GKE) Enterprise serves as the perfect launchpad for your IDP journey. Get ready for a hands-on demo and deep dive that will show you how GKE Enterprise simplifies IDP development with built-in security, compliance controls, and multi-cluster management.

Learn how to optimize cloud-based file storage for various workloads. We’ll cover Filestore and Google Cloud NetApp Volumes – fully managed Network File System and SMB solutions that balance performance, availability, and cost. We’ll explore new Filestore features for modern workloads (Zonal Google Kubernetes Engine integration via the CSI driver, protecting your data from regional failures) and how NetApp Volumes satisfies Windows workloads as well as PB-scale enterprise workloads.

Best practices for ETL with Apache NiFi on Kubernetes by Albert Lewandowski

Big Data Europe · Onsite and online, 22-25 November 2022. Learn more about the conference: https://bit.ly/3BlUk9q

Join our next Big Data Europe conference on 22-25 November 2022, where you will be able to learn from global experts giving technical talks and hands-on workshops in the fields of Big Data, High Load, Data Science, Machine Learning, and AI. This time, the conference will be held in a hybrid format, allowing you to attend workshops and listen to expert talks on-site or online.

Implementing generative AI applications requires large amounts of computation that can seamlessly scale to train, fine-tune, and serve the models. NVIDIA and Google Cloud have partnered to offer a range of GPU options to address this challenge. Using NVIDIA GPUs with Google Kubernetes Engine removes the heavy lifting needed to set up AI deployments, automate orchestration, manage large training clusters, and serve low-latency inference. Join us to see what ElevenLabs has built using NVIDIA GPUs with GKE. Please note: seating is limited and on a first-come, first-served basis; standing areas are available.

Learn how the patent search engine company IPRally created a custom compute platform to enable higher-scale data processing and deep learning. The solution relies on Ray Core and Google Kubernetes Engine, and harvests the cheapest resources from all around the world. Beyond efficiency, the goal was to build the best environment for machine learning R&D, achieved through integration with Weights & Biases as the experiment tracking system. In this session, we’ll walk through the solution at a high level. Please note: seating is limited and on a first-come, first-served basis; standing areas are available.
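Ray Core's remote tasks generalize the fan-out pattern at the heart of a platform like this. As a dependency-free sketch (the workload and worker count are made up), Python's stdlib worker pool shows the same shape: submit many independent tasks, then gather results in input order, roughly what `futures = [f.remote(d) for d in docs]; ray.get(futures)` does in Ray.

```python
from concurrent.futures import ThreadPoolExecutor

def featurize(doc: str) -> int:
    """Stand-in for an expensive per-document step (e.g. embedding
    a patent document); here it is just a token count."""
    return len(doc.split())

def run_batch(docs: list[str], workers: int = 4) -> list[int]:
    """Fan the work out across a worker pool and gather results
    in input order. Ray Core extends this pattern across a whole
    GKE cluster rather than one process."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(featurize, docs))

counts = run_batch(["a b c", "d e", "f"])
```

On a cluster, the scheduler (Ray on GKE in IPRally's case) also handles placement on the cheapest available nodes and retries on preemption, which a local pool does not.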