Explore the convergence of AI-driven search, software development, and online education, and its implications for developers, educators, and tech enthusiasts.
Speakers from Google: 377 talks & appearances
Gen AI, LLMs, AI assistants and intelligent agents are powering next-generation customer experiences, transforming every business. But there is no AI without Data. And only the right data delivers accurate, relevant results, with the context, scale and security you need.
"I Love AI" will unlock the power of Generative AI for you, with unique insights into the data platform and AI solutions you need, delivered by experts with real-world experience making AI a reality. This virtual event will help application architects, software developers, practitioners and CTOs learn how to:
- Deliver AI outcomes with extreme accuracy and relevance
- Build Generative AI apps with scale, governance, and data security
- Overcome the biggest obstacles keeping Gen AI from being enterprise-ready
- Deploy powerful vector search capabilities at a fraction of the cost
- Use cutting-edge innovations in the biggest, most powerful vector database
Gen AI, LLMs, AI assistants and intelligent agents are powering next-generation customer experiences. But there is no AI without data. This session covers data platforms, governance, and cutting-edge vector search to enable enterprise AI.
Session 1 (Americas/EMEA): August 23, 2023, 10AM PDT / 1PM EDT.
Session 2 (APAC/EMEA): August 24, 2023, 10AM CEST / 1:30PM IST / 4PM SGT / 6PM AEST.
Both sessions cover data platforms, AI solutions, vector search capabilities, governance, and data security.
Part 1: Service Weaver Presentation
- What is Service Weaver?
- Service Weaver concepts.
- How to develop, deploy, debug, and monitor a Service Weaver application.
Part 2: Hands-on Activity
- We will develop, instrument, test, and debug a Service Weaver application.
The session will cover the data lineage capabilities in Apache Airflow, how to use them, and the motivation behind them. It will present the technical know-how of integrating data lineage solutions with Apache Airflow and of provisioning DAG metadata to fuel lineage functionality in a way that is transparent to the user and limits setup friction. It will include Google’s Cloud Composer lineage integration, implemented through Airflow’s current data lineage architecture, and our approach to the lineage evolution strategy.
Learn how to use Google's PaLM APIs for text generation, chat, and embeddings. Through this workshop, users will be able to go through an introduction to each of these APIs and understand what types of machine learning tasks they can be used for. This workshop will also cover an introductory use case for the embeddings and text generation APIs: Document Search with Q & A. Links to code will be provided.
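The document-search use case the workshop describes follows a simple pattern: embed the documents once, embed the query, and rank by similarity. The sketch below shows that pattern; `embed` here is a toy bag-of-words stand-in so the example runs offline — a real system would call an embeddings API (such as the PaLM embeddings endpoint the workshop covers) and get dense float vectors back.

```python
# Sketch of embeddings-based document search. embed() is a toy stand-in
# for an embeddings API call; the ranking logic is the same either way.
import math
from collections import Counter

def embed(text):
    # Toy stand-in: bag-of-words term counts instead of a dense vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Cloud Composer runs Apache Airflow on Google Cloud",
    "Vector search finds semantically similar documents",
    "Service Weaver is a framework for writing distributed applications",
]
# Embed every document once, up front.
index = [(d, embed(d)) for d in docs]

def search(query, k=1):
    # Embed the query and return the k most similar documents.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [d for d, _ in ranked[:k]]
```

Swapping the toy `embed` for a real embeddings call turns this into the Document Search with Q&A use case: retrieve the top-k documents, then pass them to a text generation API as context for answering the question.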
Deploying bad DAGs to your Airflow environment can wreak havoc. This talk provides an opinionated take on a monorepo structure for GCP data pipelines leveraging BigQuery and Dataflow, plus a series of CI tests for validating your Airflow DAGs before deploying them to Cloud Composer. Composer makes deploying Airflow infrastructure easy and reduces deploying DAGs to “just dropping files in a GCS bucket”. However, this makes it easy for organizations to shoot themselves in the foot by skipping a strong CI/CD process: pushing bad DAGs to Composer can leave you with a very sad Airflow webserver and many wasted DAG-parsing cycles in the scheduler, disrupting other teams using the same environment. This talk outlines a series of recommended continuous integration tests to validate PRs that update or deploy Airflow DAGs before pushing them to your GCP environment, along with a small “DAGs deployer” application that manages DAG deployment following best practices. The talk walks through automating these tests with Cloud Build, but the approach could easily be ported to your favorite CI/CD tool.
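The simplest gate in a CI suite like the one described is a pre-deploy check that every DAG file at least parses before it reaches Composer. A minimal sketch, assuming DAG files live under a `dags/` directory (the helper name is illustrative); a fuller suite would also load the files with Airflow’s `DagBag` and assert there are no import errors:

```python
# Sketch: pre-deploy CI check that every candidate DAG file parses.
# Catches syntax errors before they waste scheduler parsing cycles.
import pathlib

def check_dag_files(dag_dir):
    """Return a {file_path: error_message} dict of files that fail to parse."""
    errors = {}
    for path in pathlib.Path(dag_dir).glob("**/*.py"):
        try:
            # Compile without executing: a cheap syntax-only validation.
            compile(path.read_text(), str(path), "exec")
        except SyntaxError as exc:
            errors[str(path)] = str(exc)
    return errors
```

Run in CI, a non-empty result fails the build, so a broken file never gets dropped into the Composer GCS bucket.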
Go beyond the standard Google Workspace experience. Learn how to extend and integrate with the Workspace platform, and explore how Qodea and others have leveraged the Workspace platform to create custom solutions that have boosted efficiencies, driven productivity, and reduced user workloads.
The rise of AI-powered code generation tools presents a compelling alternative to traditional UI prototyping frameworks. This talk explores the question: Is it time to ditch the framework overhead and embrace core web technologies (such as HTML, CSS, JavaScript) for faster, more flexible prototyping? We’ll examine the trade-offs between structured frameworks and the granular control offered by a “bare metal” approach, augmented by AI assistance. Learn when leveraging AI with core tech becomes the smarter choice, enabling rapid iteration and bespoke UI designs, and when frameworks still reign supreme.
In this workshop, you will learn the fundamentals of infrastructure as code through guided exercises. You will be introduced to Pulumi and learn how to use programming languages to provision modern cloud infrastructure. This workshop is designed to help new users familiarize themselves with the core concepts needed to deploy resources effectively on Google Cloud. The basics of the Pulumi Programming Model; How to provision, update, and destroy GCP resources.
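A minimal sketch of the kind of Pulumi program the workshop builds, assuming the `pulumi` and `pulumi_gcp` packages and a configured stack; the resource name is illustrative. Note that a Pulumi program is run through the Pulumi CLI (`pulumi up` / `pulumi destroy`), not executed directly:

```python
# Sketch: declare a GCS bucket with Pulumi and export its URL.
# Assumes pulumi and pulumi_gcp are installed and a GCP stack is configured.
import pulumi
from pulumi_gcp import storage

# Pulumi appends a random suffix to the logical name to keep the
# physical bucket name globally unique.
bucket = storage.Bucket("workshop-bucket", location="US")

# Stack outputs are visible via `pulumi stack output bucket_url`.
pulumi.export("bucket_url", bucket.url)
```

Updating the program and re-running `pulumi up` shows the core workflow: Pulumi diffs the declared state against the deployed state and applies only the changes.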
This workshop examines the critical lessons learned in deploying generative AI applications and explores how these insights can shape the future of the "AI Agents era." We'll delve into the practical challenges and solutions encountered in moving from initial answers to actionable implementations, prescribing how these learnings can be applied to build more sophisticated, autonomous, and impactful AI systems.
Learn how to provision, update, and destroy Google Cloud Platform (GCP) resources using Pulumi.
Join an insightful fireside chat with Jeff Dean, a pioneering force behind Google’s AI leadership. As Google’s Chief Scientist at DeepMind & Research, Jeff will share his vision for AI and specialized AI hardware, including Ironwood, the seventh-generation Cloud TPU chip. What exciting things might we expect it to power? What drives Google’s innovation in specialized AI hardware? In this spotlight, we’ll also discuss how TPUs enable efficient large-scale training and optimal inference workloads, including exclusive, never-before-revealed details of Ironwood, differentiated chip designs, data center infrastructure, and software-stack co-designs that make Google Cloud TPUs the most compelling choice for AI workloads.
A hands-on workshop introducing infrastructure as code with Pulumi, using programming languages to provision modern cloud infrastructure on Google Cloud. Covers fundamentals of the Pulumi programming model, provisioning, updating, and destroying GCP resources, and serverless containers on Cloud Run.
Google Workspace customers are switching to Google Chat and Google Meet, AI-first collaboration tools that seamlessly integrate with the Workspace apps you use every day. Learn how they migrated from costly point solutions to enhance collaboration, improve data security, reduce costs, and unlock new levels of team productivity with Gemini.
Want to deploy generative AI across your organization but not sure how to keep your sensitive data secure and compliant? Join this session to hear from industry practitioners and Google experts about the best practices and lessons learned when embarking on this journey. We will demo how you can use built-in controls to identify sensitive data in your organization and restrict access to it, and we will share insights, admin control recommendations, and lived customer experiences.
Leveraging years of experience building internal platforms at Google, this session offers actionable insights for Google Cloud customers on creating effective development ecosystems. Learn how to prioritize safety, efficiency, and reliability through the interplay of platform engineering, feature development, and Site Reliability Engineering (SRE). Whether you’re a seasoned Google Cloud user or new to platform engineering, this talk can help you optimize software delivery, operations, and achieve your business goals.
Boost your productivity with Gemini Code Assist tools. This session demonstrates how to seamlessly integrate your daily tools – source code management, task management, Google Drive, and more – directly into your integrated development environment with Gemini Code Assist chat. Discover the latest Gemini Code Assist features and capabilities, learn best practices for integrating AI into your software development workflow, gain insights into modernizing legacy codebases, and learn how to improve code quality and accelerate development cycles.
Learn the basics of the Pulumi Programming Model.
Learn how to write powerful, multi-turn AI prompts, and how our most capable AI model, Gemini, can help you do your best work. We’ll cover the advanced capabilities of Gemini for Google Workspace and share examples of effective prompts, whether you’re using apps like Gemini, Gmail, Docs, Sheets, Drive, Vids, or NotebookLM. We’ll cover how large language models (LLMs) work and how the quality of the output depends on the quality of the input. You’ll leave this session empowered to prompt like a pro.