talk-data.com

Topic

Cloud Computing

infrastructure saas iaas

4055 tagged

Activity Trend

471 peak/qtr (2020-Q1 to 2026-Q1)

Activities

4055 activities · Newest first

Summary

Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human"

Interview

Introduction How did you get involved in machine learning? Can you start by unpacking the idea of "human-like" AI? How does that contrast with the conception of "AGI"? The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment? The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models? What are the opportunities and limitations of causal modeling techniques for generalized AI models? As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability? What are the practical/architectural methods necessary to build more cognitive AI systems? How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications? What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied? What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems? When is cognitive AI the wrong choice? What do you have planned for the future of cognitive AI applications at Aigo?

Contact Info

LinkedIn Website

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Aigo.ai · Artificial General Intelligence · Cognitive AI · Knowledge Graph · Causal Modeling · Bayesian Statistics · Thinking Fast & Slow by Daniel Kahneman (affiliate link) · Agent-Based Modeling · Reinforcement Learning · DARPA 3 Waves of AI presentation · Why Don't We Have AGI Yet? whitepaper · Concepts Is All You Need whitepaper · Helen Keller · Stephen Hawking

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

With seemingly every organization wanting to enhance their AI capabilities, questions arise about who should be in charge of these initiatives. At the moment, it's likely a CTO, CIO, or CDO, or a mixture of the three. The gold standard is to have someone in the C-suite whose sole focus is AI projects: the Chief AI Officer. This role is so new that it's not yet widely understood. In this episode, we explore what the CAIO job entails. Philipp Herzig is the Chief AI Officer at SAP. He's held a variety of roles within SAP, most recently SVP Head of Cross Product Engineering & Experience; his experience also covers intelligent enterprise & cross-architecture, head of engineering for cloud-native apps, software development manager, and product owner. In the full episode, Richie and Philipp explore his day-to-day responsibilities as a CAIO, the holistic approach to cross-team collaboration, non-technical interdepartmental work, AI strategy and implementation, challenges and success metrics, how to approach high-value AI use cases, insights into current AI developments and the importance of continuous learning, the exciting future of AI, and much more.

Links Mentioned in the Show: SAP's AI CoPilot Joule · SAP · [Course] Implementing AI Solutions in Business · Related Episode: How Walmart Leverages Data & AI with Swati Kirti, Sr Director of Data Science at Walmart · Rewatch sessions from RADAR: The Analytics Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

Data Engineering with Google Cloud Platform - Second Edition

Data Engineering with Google Cloud Platform is your ultimate guide to building scalable data platforms using Google Cloud technologies. In this book, you will learn how to leverage products such as BigQuery, Cloud Composer, and Dataplex for efficient data engineering. Expand your expertise and gain practical knowledge to excel in managing data pipelines within the Google Cloud ecosystem.

What this book will help me do: Understand foundational data engineering concepts using Google Cloud Platform. Learn to build and manage scalable data pipelines with tools such as Dataform and Dataflow. Explore advanced topics like data governance and secure data handling in Google Cloud. Boost readiness for Google Cloud data engineering certification with real-world exam guidance. Master cost-effective strategies and CI/CD practices for data engineering on Google Cloud.

Author(s): Adi Wijaya, the author of this book, is a Data Strategic Cloud Engineer at Google with extensive experience in data engineering and the Google Cloud ecosystem. With his hands-on expertise, he emphasizes practical solutions and in-depth knowledge sharing, guiding readers through the intricacies of Google Cloud for data engineering success.

Who is it for? This book is ideal for data analysts, IT practitioners, software engineers, and data enthusiasts aiming to excel in data engineering. Whether you're a beginner tackling fundamental concepts or an experienced professional exploring Google Cloud's advanced capabilities, this book is designed for you. It bridges your current skills with modern data engineering practices on Google Cloud, making it a valuable resource at any stage of your career.

IBM Storage FlashSystem 5200 Product Guide for IBM Storage Virtualize 8.6

This IBM® Redpaper® Product Guide publication describes the IBM Storage FlashSystem® 5200 solution, which is a next-generation IBM Storage FlashSystem control enclosure. It is an NVMe end-to-end platform that is targeted at the entry and midrange market and delivers the full capabilities of IBM FlashCore® technology. It also provides a rich set of software-defined storage (SDS) features that are delivered by IBM Storage Virtualize, including the following features:

Data reduction and deduplication
Dynamic tiering
Thin provisioning
Snapshots
Cloning
Replication
Data copy services
Transparent Cloud Tiering
IBM HyperSwap® including 3-site replication for high availability (HA)

Scale-out and scale-up configurations further enhance capacity and throughput for better availability. The IBM Storage FlashSystem 5200 is a high-performance storage solution that is based on a revolutionary 1U form factor. It consists of 12 NVMe Flash Devices in a 1U storage enclosure drawer with fully redundant canister components and no single point of failure. It is designed for businesses of all sizes, including small, remote, branch offices and regional clients. It is a smarter, self-optimizing solution that requires less management, which enables organizations to overcome their storage challenges. Flash has come of age, and price point reductions mean that lower parts of the storage market are seeing the value of moving over to flash and NVMe-based solutions. The IBM Storage FlashSystem 5200 advances this transition by providing incredibly dense tiers of flash in a more affordable package. With the benefit of IBM FlashCore Module compression and new QLC flash-based technology becoming available, a compelling argument exists to move away from Nearline SAS storage and on to NVMe. This Product Guide is aimed at pre-sales and post-sales technical support and marketing and storage administrators.

IBM Storage FlashSystem 9500 Product Guide for IBM Storage Virtualize 8.6

This IBM® Redpaper® Product Guide describes the IBM Storage FlashSystem® 9500 solution, which is a next-generation IBM Storage FlashSystem control enclosure. It combines the performance of flash and a Non-Volatile Memory Express (NVMe)-optimized architecture with the reliability and innovation of IBM FlashCore® technology and the rich feature set and high availability (HA) of IBM Storage Virtualize. Often, applications exist that are foundational to the operations and success of an enterprise. These applications might function as prime revenue generators, guide or control important tasks, or provide crucial business intelligence, among many other jobs. Whatever their purpose, they are mission critical to the organization. They demand the highest levels of performance, functionality, security, and availability. They also must be protected against the newer threat of cyberattacks. To support such mission-critical applications, enterprises of all types and sizes turn to the IBM Storage FlashSystem 9500. IBM Storage FlashSystem 9500 provides a rich set of software-defined storage (SDS) features that are delivered by IBM Storage Virtualize, including the following examples:

Data reduction and deduplication
Dynamic tiering
Thin-provisioning
Snapshots
Cloning
Replication and data copy services
Cyber resilience
Transparent Cloud Tiering
IBM HyperSwap® including 3-site replication for HA
Scale-out and scale-up configurations that further enhance capacity and throughput for better availability

This Redpaper applies to IBM Storage Virtualize V8.6.

Summary

Generative AI promises to accelerate the productivity of human collaborators. Currently the primary way of working with these tools is through a conversational prompt, which is often cumbersome and unwieldy. In order to simplify the integration of AI capabilities into developer workflows, Tsavo Knott helped create Pieces, a powerful collection of tools that complements the tools developers already use. In this episode he explains the data collection and preparation process, the collection of model types and sizes that work together to power the experience, and how to incorporate it into your workflow to act as a second brain.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Tsavo Knott about Pieces, a personal AI toolkit to improve the efficiency of developers

Interview

Introduction How did you get involved in machine learning? Can you describe what Pieces is and the story behind it? The past few months have seen an endless series of personalized AI tools launched. What are the features and focus of Pieces that might encourage someone to use it over the alternatives? Model selections; architecture of the Pieces application; local vs. hybrid vs. online models; model update/delivery process; data preparation/serving for models in the context of the Pieces app; application of AI to developer workflows; types of workflows that people are building with Pieces. What are the most interesting, innovative, or unexpected ways that you have seen Pieces used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pieces? When is Pieces the wrong choice? What do you have planned for the future of Pieces?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Pieces · NPU == Neural Processing Unit · Tensor Chip · LoRA == Low Rank Adaptation · Generative Adversarial Networks · Mistral · Emacs · Vim · NeoVim · Dart · Flutter

Live and in full color, we landed at The Developer's Conference (better known as TDC), which held its first edition focused on Artificial Intelligence in São Paulo.

And we set out to answer the question "Is it still worth learning to program, given the advances in AI?". The debate of recent months about the replacement of developers was reignited by episodes such as remarks by Nvidia CEO Jensen Huang, which some outlets interpreted as "stop teaching kids to code", as well as the emergence of Devin, an AI software developer.

In this special episode from Data Hackers, the largest AI and Data Science community in Brazil, meet: Andrea Longarini, AI professor at Mackenzie and Cloud Solutions Architect at Microsoft; and Danilo Vitoriano, content creator and Woovi ambassador.

Remember that you can find all the Data Hackers community podcasts on Spotify, iTunes, Google Podcasts, Castbox, and many other platforms. If you prefer, you can also listen to the episode right here in this post!

Covered in the episode

Meet the guests:

Andrea Longarini, AI professor at Mackenzie and Cloud Solutions Architect at Microsoft; Danilo Vitoriano, content creator and Woovi ambassador.

Our Data Hackers panel:

Paulo Vasconcellos, Co-founder; Monique Femme, Head of Community Management

References:

Download the full State of Data Brazil 2023 report: https://stateofdata.datahackers.com.br/ Subscribe to the Data Hackers Newsletter: https://www.datahackers.news/

Databases are ubiquitous, and you don't need to be a data practitioner to know that all data everywhere is stored in a database. Or is it? While the majority of data around the world lives in a database, the data that helps run the heart of our operating systems, the core functions of our computers, is not stored in the same place as everything else. This is because database storage sits 'above' the operating system, requiring the OS to run before the databases can be used. But what if the OS was built 'on top' of a database? What difference could this fundamental change make to how we use computers? Mike Stonebraker is a distinguished computer scientist known for his foundational work in database systems; he is also currently CTO and co-founder at DBOS. His extensive career includes significant contributions through academic prototypes and commercial startups, leading to the creation of several pivotal database companies such as Ingres Corporation, Illustra, Paradigm4, StreamBase Systems, Tamr, Vertica, and VoltDB. Stonebraker's role as chief technical officer at Informix and his influential research earned him the prestigious 2014 Turing Award. His professional journey spans two major phases: initially at the University of California, Berkeley, focusing on relational database management systems like Ingres and Postgres, and later, from 2001, at the Massachusetts Institute of Technology (MIT), where he pioneered advanced data management techniques including C-Store, H-Store, SciDB, and DBOS. He remains a professor emeritus at UC Berkeley and continues to influence as an adjunct professor at MIT's Computer Science and Artificial Intelligence Laboratory. Stonebraker is also recognized for his editorial work on the book "Readings in Database Systems."
In the episode, Richie and Mike explore the success of PostgreSQL, the evolution of SQL databases, the shift towards cloud computing and what that means in practice when migrating to the cloud, the impact of disaggregated storage, software and serverless trends, the role of databases in facilitating new data and AI trends, DBOS and its advantages for security, and much more.

Links Mentioned in the Show: DBOS · Paper: What Goes Around Comes Around · [Course] Understanding Cloud Computing · Related Episode: Scaling Enterprise Analytics with Libby Duane Adams, Chief Advocacy Officer and Co-Founder of Alteryx · Rewatch sessions from RADAR: The Analytics Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
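The episode's central premise, running application logic 'on top' of a database so that state survives crashes, can be illustrated with a toy sketch. This is an illustrative assumption of the pattern, not the DBOS API: each workflow step records its completion in a database table, so a restarted run skips steps that already finished.

```python
import sqlite3

def get_conn(path=":memory:"):
    # One table tracks which steps of which workflow have completed.
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS steps (workflow TEXT, step INTEGER, "
        "PRIMARY KEY (workflow, step))"
    )
    return conn

def run_workflow(conn, name, steps):
    """Run each step at most once, recording completion durably."""
    done = {row[0] for row in conn.execute(
        "SELECT step FROM steps WHERE workflow = ?", (name,))}
    executed = []
    for i, step in enumerate(steps):
        if i in done:
            continue  # already completed before a crash or restart
        step()
        conn.execute("INSERT INTO steps VALUES (?, ?)", (name, i))
        conn.commit()
        executed.append(i)
    return executed
```

A first invocation executes every step; invoking the same workflow again against the same database executes none, which is the crash-recovery property a database-backed OS layer would provide for free.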

Summary

Generative AI has rapidly transformed everything in the technology sector. When Andrew Lee started work on Shortwave he was focused on making email more productive. When AI started gaining adoption, he realized there was even more potential for a transformative experience. In this episode he shares the technical challenges that he and his team have overcome in integrating AI into their product, as well as the benefits and features that it provides to their customers.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Andrew Lee about his work on Shortwave, an AI powered email client

Interview

Introduction How did you get involved in the area of data management? Can you describe what Shortwave is and the story behind it?

What is the core problem that you are addressing with Shortwave?

Email has been a central part of communication and business productivity for decades now. What are the overall themes that continue to be problematic? What are the strengths that email maintains as a protocol and ecosystem? From a product perspective, what are the data challenges that are posed by email? Can you describe how you have architected the Shortwave platform?

How have the design and goals of the product changed since you started it? What are the ways that the advent and evolution of language models have influenced your product roadmap?

How do you manage the personalization of the AI functionality in your system for each user/team? For users and teams who are using Shortwave, how does it change their workflow and communication patterns? Can you describe how I would use Shortwave for managing the workflow of evaluating, planning, and promoting my podcast episodes? What are the most interesting, innovative, or unexpected ways that you have seen Shortwave used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Shortwave? When is Shortwave the wrong choice? What do you have planned for the future of Shortwave?

Contact Info

LinkedIn Blog

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.

I am excited to welcome Harry Carr, CEO of Vicinity, to the podcast on this week's episode. Vicinity is a software company that is accelerating the time it takes to access remote data sources from anywhere. Harry talks about why hybrid cloud exists, examples of accelerated data access from the edge, and what the future of data architectures looks like.

#data #podcast #ai #artificialintelligence #datascience #datasets #episode #dataarchitecture #edge #cloud #hybridcloud #edgecomputing

Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US Hosted on Acast. See acast.com/privacy for more information.

Kartik Derasari is a technical consultant with a passion for technology and innovation. As a 6X Google Cloud Certified Professional, he has extensive experience in application development and analytics projects as a full-stack engineer. In addition to his professional work, Kartik is an advocate for the use of technology to drive business growth and innovation. He is the leader of the Go…

Summary

Databases come in a variety of formats for different use cases. The default association with the term "database" is relational engines, but non-relational engines are also used quite widely. In this episode Oren Eini, CEO and creator of RavenDB, explores the nuances of relational vs. non-relational engines, and the strategies for designing a non-relational database.
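The relational vs. non-relational distinction the episode explores can be sketched in a few lines. This is an illustrative comparison in plain Python (not RavenDB's API): a relational store keeps an order and its lines in separate tables and joins them at read time, while a document store keeps the whole aggregate together.

```python
# Relational shape: the order and its lines live in separate "tables".
orders = {1: {"customer": "Acme"}}
order_lines = [
    {"order_id": 1, "sku": "widget", "qty": 2},
    {"order_id": 1, "sku": "gadget", "qty": 1},
]

def order_with_lines(order_id):
    """Reassemble the aggregate with an application-side join."""
    order = dict(orders[order_id])
    order["lines"] = [l for l in order_lines if l["order_id"] == order_id]
    return order

# Document shape: the aggregate is stored and retrieved as one unit,
# trading join flexibility for single-read access to the whole entity.
order_doc = {
    "customer": "Acme",
    "lines": [{"sku": "widget", "qty": 2}, {"sku": "gadget", "qty": 1}],
}
```

Both shapes answer "show me order 1 with its lines"; they differ in where the assembly work happens and which access patterns stay cheap, which is exactly the calculus the interview digs into.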

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and Monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold. Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. 
And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Oren Eini about the work of designing and building a NoSQL database engine

Interview

Introduction How did you get involved in the area of data management? Can you describe what constitutes a NoSQL database?

How have the requirements and applications of NoSQL engines changed since they first became popular ~15 years ago?

What are the factors that convince teams to use a NoSQL vs. SQL database?

NoSQL is a generalized term that encompasses a number of different data models. How does the underlying representation (e.g. document, K/V, graph) change that calculus?

How has the evolution of data formats (e.g. N-dimensional vectors, point clouds, etc.) changed the landscape for NoSQL engines? When designing and building a database, what is the initial set of questions that need to be answered?

How many "core capabilities" can you reasonably design around before they conflict with each other?

How have you approached the evolution of RavenDB as you add new capabilities and mature the project?

What are some of the early decisions that had to be unwound to enable new capabilities?

If you were to start from scratch today, what database would you build? What are the most interesting, innovative, or unexpected ways that you have seen RavenDB/NoSQL databases used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on RavenDB?

Empower your organization to achieve greater efficiency and solve critical business challenges with Google AppSheet's innovative no-code platform. This exclusive panel features industry leaders who achieved remarkable results by using AppSheet to streamline workflows and empower their teams. Learn first-hand their inspiring journeys, gain practical tips, and unlock the full potential of AppSheet to drive growth and innovation within your own organization.

Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.

Join the exclusive Korean session in Las Vegas for three days (April 9-11) packed with essential insights on technical trends, business use cases, and exciting showcases. Google Cloud Korea will provide a summary of key highlights, tailored specifically for Korean guests. We'll cover three captivating topics, followed by an exclusive networking dinner and Q&A session.


Retrieval Augmented Generation (RAG) is a powerful technique for providing real-time, domain-specific context to the LLM to improve the accuracy of responses. RAG doesn't require adding sensitive data to the model, but it still requires application developers to address the security and privacy of user and company data. In this session, you will learn about the security implications of RAG workloads and how to architect your applications to handle user identity and to control data access.
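The access-control pattern the session describes can be sketched minimally: filter the retrievable documents by the caller's identity before anything reaches the prompt. The scoring function, document store, and ACL scheme below are illustrative assumptions, not a specific Google Cloud API.

```python
# Toy corpus: each document carries an allow-list of user groups.
DOCS = [
    {"text": "Q3 revenue grew 12%", "allowed": {"finance"}},
    {"text": "VPN setup guide", "allowed": {"finance", "engineering"}},
]

def retrieve(query, user_groups, k=1):
    """Return the top-k documents the caller may see, ranked by naive
    keyword overlap. Permission filtering happens BEFORE ranking, so
    restricted text can never leak into the context."""
    q = set(query.lower().split())
    visible = [d for d in DOCS if d["allowed"] & user_groups]
    scored = sorted(
        visible,
        key=lambda d: len(q & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, user_groups):
    # Only permitted documents are stitched into the LLM prompt.
    context = "\n".join(d["text"] for d in retrieve(query, user_groups))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A user in the engineering group asking about revenue gets no finance documents in their context, which is the property the session's architecture guidance aims to guarantee at scale.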


Discover the transformative synergy between SAP Datasphere and Google BigQuery, driving data insights. We'll explore Datasphere's transformation, integration, and data governance capabilities alongside BigQuery's scalability and real-time analytics. Also learn how SAP GenAI Hub and Google Cloud accelerate AI initiatives and innovation. You will also hear real-world success stories of how businesses leverage this integration for tangible outcomes.

By attending this session, your contact information may be shared with the sponsor for relevant follow-up for this event only. Please note: seating is limited and on a first-come, first-served basis; standing areas are available.


Prompt management is like the invisible conductor of your artificial intelligence (AI) orchestra. It's the science of crafting, organizing, and optimizing the prompts that guide your AI models to perform their best. The session will cover the end-to-end prompt lifecycle in Vertex AI Studio to help you reach production sooner. This includes new rapid evaluation within Vertex and the ability to "critique" model responses for automatic prompt improvements.
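The craft-organize-optimize lifecycle described above can be sketched as versioned templates plus an evaluation step that picks the best candidate. The names and scoring below are illustrative assumptions, not the Vertex AI Studio API.

```python
# A tiny versioned prompt registry: (name, version) -> template.
PROMPTS = {
    ("summarize", 1): "Summarize: {text}",
    ("summarize", 2): "Summarize in one sentence, plainly: {text}",
}

def render(name, version, **kwargs):
    """Fill a specific template version with runtime values."""
    return PROMPTS[(name, version)].format(**kwargs)

def best_version(name, evaluate):
    """Rapid-evaluation step: score every version of a prompt with a
    caller-supplied metric and return the highest-scoring one."""
    versions = [v for (n, v) in PROMPTS if n == name]
    return max(versions, key=lambda v: evaluate(PROMPTS[(name, v)]))
```

In a real system `evaluate` would run each candidate against a test set and a quality metric; the point of the sketch is that prompts, like code, benefit from versioning plus an automated selection loop.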


In this session, we'll dive into deploying Java apps using Google Cloud's serverless platform. Designed for Java developers, it offers practical insights into considerations, challenges, and tips and tricks for deploying JVM applications on serverless platforms. We'll also cover other best practices across different parts of the application lifecycle, such as CI/CD pipelines, security, and observability. Through interactive demos, learn to build, secure, and monitor Java applications efficiently.


Want to use Dart on the server to share code and complement your Flutter app? Learn about Serverpod, Flutter's full-stack Dart solution that uses code generation to create matching client-server code and a feature-rich Postgres ORM based on your schema.
