talk-data.com
People (79 results) · Companies (4 results)
Activities & events
| Title & Speakers | Event |
|---|---|
|
ML and Generative AI in the Data Lakehouse
2026-01-25
Bennie Haelen
– author
In today's race to harness generative AI, many teams struggle to integrate these advanced tools into their business systems. While platforms like GPT-4 and Google's Gemini are powerful, they aren't always tailored to specific business needs. This book offers a practical guide to building scalable, customized AI solutions using the full potential of data lakehouse architecture. Author Bennie Haelen covers everything from deploying ML and GenAI models in Databricks to optimizing performance with best practices. In this must-read for data professionals, you'll gain the tools to unlock the power of large language models (LLMs) by seamlessly combining data engineering and data science to create impactful solutions.
- Learn to build, deploy, and monitor ML and GenAI models on a data lakehouse architecture using Databricks
- Leverage LLMs to extract deeper, actionable insights from your business data residing in lakehouses
- Discover how to integrate traditional ML and GenAI models for customized, scalable solutions
- Utilize open source models to control costs while maintaining model performance and efficiency
- Implement best practices for optimizing ML and GenAI models within the Databricks platform |
O'Reilly AI & ML Books
|
|
(Online) From Raw to Refined: Building Production Data Pipelines That Scale
2026-01-21 · 18:30
This is an Online event; the Teams link will be published on the right of this page for those who have registered.
18:30 From Raw to Refined: Building Production Data Pipelines That Scale - Pradeep Kalluri
19:55 Prize Draw - Packt eBooks
Session details: Every organization needs to move data from source systems to analytics platforms, but most teams struggle with reliability at scale. In this talk, I'll share the three-zone architecture pattern I use to build production data pipelines that process terabytes daily while maintaining data quality and operational simplicity. You'll learn:
- Why the traditional "single pipeline" approach breaks at scale
- How to structure pipelines using Raw, Curated, and Refined zones
- Practical patterns for handling batch and streaming data with Kafka and Spark
- Real incidents and lessons learned from production systems
- Tools and technologies that work (PySpark, Airflow, Snowflake)
This isn't theory; it's battle-tested patterns from years of building data platforms. Whether you're designing your first data pipeline or scaling an existing platform, you'll walk away with actionable techniques you can apply immediately.
Speaker: Pradeep Kalluri, Data Engineer | NatWest | Building Scalable Data Platforms. A Data Engineer with 3+ years of experience building production data platforms at NatWest, Accenture, and Capgemini. Specialized in cloud-native architectures, real-time processing with Kafka and Spark, and data quality frameworks. Published technical writer on Medium, sharing practical lessons from production systems. Passionate about making data platforms reliable and trustworthy. |
(Online) From Raw to Refined: Building Production Data Pipelines That Scale
|
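The Raw/Curated/Refined zoning described in the session abstract can be sketched without Kafka or Spark as three small stages over plain records. This is a minimal illustration of the pattern only; the record fields and cleaning rules are invented, not the speaker's actual implementation.

```python
# Minimal sketch of the three-zone (Raw -> Curated -> Refined) pipeline
# pattern. Zone names follow the talk; fields and rules are invented.

def to_curated(raw_records):
    """Curated zone: validate and normalize raw records, dropping bad rows."""
    curated = []
    for rec in raw_records:
        if rec.get("amount") is None:          # data-quality gate
            continue
        curated.append({
            "customer": rec["customer"].strip().lower(),
            "amount": float(rec["amount"]),
        })
    return curated

def to_refined(curated_records):
    """Refined zone: aggregate curated data into an analytics-ready view."""
    totals = {}
    for rec in curated_records:
        totals[rec["customer"]] = totals.get(rec["customer"], 0.0) + rec["amount"]
    return totals

raw = [  # Raw zone: data exactly as ingested, warts and all
    {"customer": "  Acme ", "amount": "10.5"},
    {"customer": "acme", "amount": "4.5"},
    {"customer": "globex", "amount": None},    # bad row, filtered out
]
refined = to_refined(to_curated(raw))
print(refined)  # {'acme': 15.0}
```

The point of the zoning is that each stage has one job: Raw preserves the source verbatim, Curated enforces quality, Refined serves analytics, so a failure in one zone never corrupts the others.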
|
Google Cloud AI Governance: Practical Policies, Guardrails, and Audits
2026-01-21 · 17:00
Free Live Webinar - Google Cloud AI Governance: Practical Policies, Guardrails, and Audits. Build trustable AI on Google Cloud. Learn practical policies, guardrails, and evidence patterns (IAM, data protection, VPC Service Controls, CMEK, DLP, evaluations, and audit logs) for governed, production-ready AI (Vertex AI + Gemini) that passes reviews and scales. Click on the link to complete your registration: LINK |
Google Cloud AI Governance: Practical Policies, Guardrails, and Audits
|
|
Making Humanitarian Data AI‑Ready: Inside UN OCHA’s New Guidance Project
2025-12-22 · 14:00
This two-part discussion series will explore how to make humanitarian spreadsheets more “AI-ready,” bringing together UN OCHA’s new guidance project with real-world lessons from recent AI spreadsheet extraction experiments. UN OCHA is developing a short, practical guide to help humanitarian teams publish “AI-ready” public datasets that work better with tools like ChatGPT, Copilot, Gemini and open source models like Kimi K2 and GPT OSS running on providers like Groq when users simply upload a CSV or Excel file and start asking questions. The focus is on non-technical users who will not configure agents, write code, or reverse-engineer cryptic column names, but instead expect the AI to correctly interpret the file structure and labels out of the box. By recommending clear naming, consistent tabular layouts, and lightweight documentation, the guidance aims to reduce misinterpretation, hallucinations, and broken analyses when consumer AI tools encounter real-world humanitarian data. Jan Zheng, a Developer Relations Engineer at Groq who helps people design and build AI prototypes, is exploring exactly these challenges from the model and tooling side. His recent experiments with spreadsheet extraction show that messy, multi-table spreadsheets routinely confuse even advanced models and agent frameworks, leading to unreliable extraction, off‑by‑one errors, looping agents, and high costs. These problems are amplified when complex datasets or vast amounts of data are processed by non-technical users of commercial AI tools and open models. Lessons learned through research and usage can inform UN OCHA's guidance by clarifying which spreadsheet patterns break current AI tools, which structures make extraction more robust, and how to balance “ideal” AI-ready formats with the messy realities of operational humanitarian spreadsheets.
Over two separate meetup discussions, staff from UN OCHA will introduce the AI‑ready data project in more detail, walk through the specific use case they are targeting, and answer questions from participants about scope, constraints, and potential applications in humanitarian settings. These sessions are designed to surface real-world experiences from practitioners who publish, manage, or use open humanitarian data, and to gather concrete examples of what works and what breaks when datasets are run through consumer AI tools and open source tools running through providers like Groq. On a following date, Jan will join a dedicated session to react to the project, share his experimental findings on spreadsheet extraction, and discuss how infrastructure choices such as model selection, speed, and prompting strategies interact with the way humanitarian data is structured and published. His perspective will help bridge the gap between guidance aimed at data publishers and the realities of building and tuning AI systems that can reliably interpret messy, real-world spreadsheets used across the humanitarian sector. |
Making Humanitarian Data AI‑Ready: Inside UN OCHA’s New Guidance Project
|
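The "clear naming, consistent tabular layout" advice above can be made concrete with a tiny before/after sketch: a header-over-header layout with cryptic codes versus a single self-describing header row that a consumer AI tool (or any CSV parser) can read directly. Column names and values here are invented for illustration, not OCHA's actual guidance format.

```python
import csv
import io

# Cryptic headers plus a merged title row: the kind of layout that
# confuses consumer AI tools when a user uploads the file directly.
messy = "Situation Report Q3\nadm1,p_cd,val\nKabul,AF01,1200\n"

# AI-ready: one table, one header row, self-describing column names.
tidy = "province,district_code,people_in_need\nKabul,AF01,1200\n"

rows = list(csv.DictReader(io.StringIO(tidy)))
print(rows[0]["people_in_need"])  # 1200
```

The same parse attempted on `messy` would treat "Situation Report Q3" as the header, which is exactly the misinterpretation the guidance aims to prevent.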
|
Outclassing Frontier LLMs at Extracting Information
2025-12-22 · 11:55
Etienne Bernard
– Co-founder & CEO
@ NuMind
Accurately extracting information from documents has been a decades-old dream. Important workflows — from automated back-office processing to enterprise RAG — depend on it. LLMs promise to fulfill this dream but currently fall short: they hallucinate information, struggle with long documents, and break down on complex layouts. The solution: LLMs specialized in information extraction. In this talk, I will present: NuExtract — the first LLM specialized in extracting structured information (JSON output); NuMarkdown — the first reasoning OCR LLM (RAG-ready Markdown output). These low-hallucination open-source models outclass frontier LLMs like GPT-5 and Gemini 2.5 while being orders of magnitude smaller, enabling private usage. I will demonstrate the abilities of these LLMs, show how to use them at scale, and discuss what’s coming next in information extraction. |
Outclassing Frontier LLMs at Extracting Information
|
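The template-driven, JSON-output extraction style the talk describes can be sketched generically: the caller supplies a JSON template of the fields to fill, and the extractor must return JSON matching it. The template format and field names below are invented for illustration (not NuExtract's actual syntax), and a trivial regex stands in for the specialized LLM.

```python
import json
import re

# Generic sketch of schema-guided extraction: fill a JSON template from
# free text. A regex stands in here for the extraction model itself.

template = {"company": "", "ceo": ""}
text = "NuMind is led by co-founder and CEO Etienne Bernard."

def extract(template: dict, text: str) -> dict:
    """Toy stand-in for an extraction model: fill known fields via regex."""
    out = dict(template)
    if m := re.search(r"CEO ([A-Z]\w+ [A-Z]\w+)", text):
        out["ceo"] = m.group(1)
    if m := re.search(r"^(\w+)", text):
        out["company"] = m.group(1)
    return out

result = extract(template, text)
print(json.dumps(result))  # {"company": "NuMind", "ceo": "Etienne Bernard"}
```

The key property, which the talk attributes to a specialized low-hallucination model, is that output is constrained to the template's shape rather than free-form prose.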
|
[VIRTUAL] WomenTechmakers: AI, Gemini, Coffee and Code
2025-12-20 · 10:00
Let's grab a coffee and code together! In today's session, we will
This is a hybrid event. Join the event virtually at https://gdg.community.dev/events/details/google-gdg-cloud-london-presents-virtual-womentechmakers-ai-gemini-coffee-and-code/ or in person at ibis London Excel Docklands, 9 Western Gateway, Greater London, E16 1AB.
Hosts:
Amanda Cavallaro - Vonage (Google Developers Expert, Firebase, AI and Cloud). Amanda Cavallaro is an Italo-Brazilian developer advocate @ Vonage. She is a Google Developers Expert for the Firebase and ML Cloud Conversational AI categories, passionate about cloud technologies, JavaScript, human-computer interaction, and ambient computing, with a love of learning. You can speak to her in Portuguese, English, Italian, and a little Japanese.
Saverio Terracciano - GDG Cloud London (AI/ML/Cloud GDE | GDG Organiser)
Partner: Vonage (https://developer.vonage.com/) - Integrate SMS, voice, video, and two-factor authentication into your apps with Vonage communication APIs.
Complete your event RSVP here: https://gdg.community.dev/events/details/google-gdg-cloud-london-presents-virtual-womentechmakers-ai-gemini-coffee-and-code/. |
[VIRTUAL] WomenTechmakers: AI, Gemini, Coffee and Code
|
|
Hands-On LLM Engineering with Python (Part 1)
2025-12-18 · 18:00
REGISTER BELOW FOR MORE AVAILABLE DATES! ↓ https://luma.com/stelios
Who is this for? Students, developers, and anyone interested in using Large Language Models (LLMs) to build real software solutions with Python. Tired of vibe coding with AI tools? Want to actually understand and own your code instead of relying on black-box magic? This session shows you how to build LLM systems properly, with full control and clear engineering principles.
Who is leading the session? The session is led by Dr. Stelios Sotiriadis, CEO of Warestack, Associate Professor and MSc Programme Director at Birkbeck, University of London, specialising in cloud computing, distributed systems, and AI engineering. Stelios holds a PhD from the University of Derby, completed a postdoctoral fellowship at the University of Toronto, and has worked on industry and research projects with Huawei, IBM, Autodesk, and multiple startups. Since moving to London in 2018, he has been teaching at Birkbeck. In 2021, he founded Warestack, building software for startups around the world.
What we'll cover: A hands-on introduction to building software with LLMs using Python, Ollama, and LiteLLM, including:
This session focuses on theory, fundamentals, and real code you can reuse. Why LiteLLM? LiteLLM gives you low-level control to build custom LLM solutions your own way, without a heavy framework like LangChain, so you understand how everything works and design your own architecture. A dedicated LangChain session will follow for those who want to go further. What are the requirements? Bring a laptop with Python installed (Windows, macOS, or Linux), along with Visual Studio Code or a similar IDE, with at least 10GB of free disk space and 8GB of RAM.
What is the format? A 3-hour live session with:
This is a highly practical, hands-on class focused on code and building working LLM systems. What are the prerequisites? A good understanding of programming with Python is required (basic to intermediate level). I assume you are already comfortable writing Python scripts. What comes after? Participants will receive an optional mini capstone project with one-to-one personalised feedback. Is it just one session? This is the first session in a new sequence on applied AI, covering agents, RAG systems, vector databases, and production-ready LLM workflows. Later sessions will dive deeper into topics such as embeddings with deep neural networks, LangChain, advanced retrieval, and multi-agent architectures.
How many participants? To keep this interactive, only 15 spots are available. Please register as soon as possible. |
Hands-On LLM Engineering with Python (Part 1)
|
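The "low-level control, no heavy framework" approach this session advertises amounts to owning a thin completion interface yourself: your application depends on one small function type, and any backend (Ollama via LiteLLM, a hosted API, or a test stub) plugs in behind it. A pure-Python sketch of that architecture, with an echo stub standing in for the real model call; all names below are invented for illustration.

```python
from typing import Callable

# Owning the completion interface yourself: the app depends on one
# function type, so backends are swappable without touching app code.
CompletionFn = Callable[[list], str]

def make_chat(complete: CompletionFn):
    """Build a stateful chat loop around any completion backend."""
    history = [{"role": "system", "content": "Be concise."}]
    def send(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        reply = complete(history)
        history.append({"role": "assistant", "content": reply})
        return reply
    return send

# Test stub backend; a real backend would call the model here instead.
def echo_backend(messages: list) -> str:
    return f"echo: {messages[-1]['content']}"

chat = make_chat(echo_backend)
print(chat("hello"))  # echo: hello
```

Because the chat loop never imports a specific provider, swapping the stub for a real local-model call changes one function, which is the design freedom the session contrasts with framework lock-in.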
|
🚀 Mastering LLM Evaluation and Workflows in n8n
2025-12-16 · 18:00
Beyond the prompt: properly evaluating your AI workflows and LLM outputs
🎯 Meetup goal
Are your AI systems producing reliable results... or merely plausible ones? In this session, we will take a concrete look at:
Whether you work on agents, chatbots, RAG systems, or automation flows, you will leave with key tools to measure, track, and improve the quality of your AI outputs. 👥 Who is it for?
🔧 What you'll take away
📍 See you next Tuesday for a two-hour hands-on workshop. We hope to see many of you there! |
🚀 Mastering LLM Evaluation and Workflows in n8n
|
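As a taste of the kind of output evaluation this workshop targets, here is a minimal scorer that checks LLM answers for expected key facts. The metric (keyword coverage) and the test cases are deliberately simple stand-ins invented for illustration; real evaluation would combine several richer criteria.

```python
# Minimal sketch of scoring LLM outputs against expected key facts,
# the kind of check applied to workflow outputs. The scoring rule
# (keyword coverage) is a deliberately simple stand-in.

def keyword_coverage(answer: str, expected_keywords: list) -> float:
    """Fraction of expected keywords present in the answer (case-insensitive)."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer_lower)
    return hits / len(expected_keywords)

cases = [
    ("Paris is the capital of France.", ["paris", "france"]),
    ("I am not sure.", ["paris", "france"]),
]
for answer, keywords in cases:
    print(f"{keyword_coverage(answer, keywords):.2f}")  # 1.00 then 0.00
```

Even a scorer this crude separates "reliable" from "merely plausible" answers automatically, which is the prerequisite for tracking output quality over time.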
|
AWS User Group Berlin - re:Invent re:cap
2025-12-16 · 17:30
Dear Community, it's our favorite time of the year. The year is coming to an end, snowflakes are falling, Glühwein and hot chocolate in hand. Yet the best of it all: AWS re:Invent is coming closer, with tons of exciting announcements and inspiring sessions taking place in Las Vegas! Following tradition, we will provide you with a great re:cap of the event, with all the important updates and things you need to know to keep up with the latest AWS developments. We are very lucky to have two great sponsors for this event, Delivery Hero and Cast AI, and we have organized a great lineup of speakers for you:
➡ Aaron Walker - Technology Director @ base2Services
➡ Mauro Cherchi - Freelance Software Engineer @ MC
➡ Ramy Chamseddin - Cloud Solution Architect @ Capgemini
We are very excited to be receiving a summary of the major re:Invent announcements and updates from these three expert speakers.
18:30 - Warming up and networking chat
19:00-19:45 - re:Invent re:cap part 1
19:45-20:00 - Break with food & drinks
20:00-20:45 - re:Invent re:cap part 2
20:45-21:00 - Open Q&A, discussion round
For those who are interested in learning and discussing more, we may extend the event for deeper discussions on important announcements. The event is taking place at the awesome Delivery Hero offices in the heart of Berlin. We're thankful for their hospitality and sponsorship, along with Cast AI, and we look forward to seeing you on the day! |
AWS User Group Berlin - re:Invent re:cap
|
|
Planning 2026 with Google Gemini: AI Insights to Power Your Strategy
2025-12-16 · 17:00
Click on the LINK to complete your registration. Plan smarter for 2026 with Google Gemini. In this session, we'll show how leaders and teams can turn company data into AI-powered insights, scenario plans, and decision briefs, safely and at speed. See practical workflows for research, forecasting, and portfolio planning, plus guardrails, ROI tracking, and next steps to pilot inside your org. |
Planning 2026 with Google Gemini: AI Insights to Power Your Strategy
|
|
Google AI Deep Dive Series (Virtual) - Session 3
2025-12-13 · 10:30
Important: Register on the event website to receive the joining link (RSVPs on Meetup will NOT receive the joining link). This is a virtual event for our global community; please double-check your local time. Can't make it live? Register anyway! We'll send you a recording of the webinar after the event.
Description: The AI Deep Dive Series is a hands-on virtual initiative designed to empower developers to architect the next generation of Agentic AI. Moving beyond basic prompting, this series guides you through the complete engineering lifecycle using Google’s advanced stack. You will master the transition from local Gemini CLI environments to building intelligent agents with the Agent Development Kit (ADK) and Model Context Protocol (MCP), culminating in the deployment of secure, collaborative Agent-to-Agent (A2A) ecosystems on Google Cloud Run. Join us to build AI systems that can truly reason, act, and scale.
All Sessions: Dec 4th, Dec 11th, Dec 13th, Dec 18th, and Dec 20th.
Session 3 (Dec 13th) - Building AI Agents with ADK - Empowering with Tools
Speaker: Arun KG (Staff Customer Engineer, GenAI, Google)
Abstract: This second codelab in the "Building AI Agents with ADK" series focuses on empowering your agent with tools. You'll learn to add custom Python functions as tools, connect to real-time information using built-in tools like Google Search, and integrate tools from third-party frameworks like LangChain. All attendees will get $5 cloud credits. |
Google AI Deep Dive Series (Virtual) - Session 3
|
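The "custom Python functions as tools" idea from the session abstract can be sketched framework-free: derive a tool description from a function's own signature and docstring, which is conceptually what agent frameworks do when you hand them a plain function. The spec layout below is invented for illustration and is not ADK's actual schema.

```python
import inspect

# Framework-free sketch of "Python functions as agent tools": a tool
# spec is derived from the function's signature and docstring. The
# spec layout here is invented, not any framework's real schema.

def get_weather(city: str) -> str:
    """Return current weather for a city."""
    return f"sunny in {city}"  # stand-in for a real API call

def to_tool_spec(fn) -> dict:
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {p: a.annotation.__name__
                       for p, a in sig.parameters.items()},
    }

registry = {fn.__name__: fn for fn in [get_weather]}
spec = to_tool_spec(get_weather)
print(spec["name"], spec["parameters"])  # get_weather {'city': 'str'}

# An agent that decided to call the tool would dispatch via the registry:
print(registry["get_weather"]("Kochi"))  # sunny in Kochi
```

The spec is what gets shown to the model so it can choose a tool; the registry is what the runtime uses to execute the choice, and that split is the core of the tool-calling pattern the codelab teaches.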