talk-data.com
People (63 results)
Activities & events
**AI Webinar Series (Virtual) - Evaluating AI Agent Reliability**
2026-01-21 · 18:00 — Event: AI Webinar Series (Virtual)

Important: register on the event website to receive the joining link (an RSVP on Meetup alone will NOT receive the joining link). This is a virtual event for our global AI community, so please double-check your local time. Can't make it live? Register anyway to receive the webinar recording.

Description: Welcome to the weekly AI Deep Dive Webinar Series. Join us for deep-dive tech talks on AI, hands-on code labs, workshops, and networking with speakers and fellow developers from all over the world.

Tech Talk: Evaluating AI Agent Reliability
Speakers: Anupam Datta (Snowflake) | Josh Reini (Snowflake)

Abstract: Agents often fail in ways you can't see. They can return a final answer while taking a broken path: drifting from the goal, making irrational plan jumps, or misusing tools. Was the goal achieved efficiently? Did the plan make sense? Were the right tools used? Did the agent follow through? These hidden mistakes silently rack up compute costs, spike latency, and cause brittle behavior that collapses in production. Traditional evals won't flag any of it because they only check the output, not the decisions that produced it. This session introduces the Agent GPA (Goal-Plan-Action) framework, available in the open-source TruLens library. Benchmark tests show the Agent GPA framework consistently outperforming standard LLM evaluators, giving teams scalable and trustworthy insight into agent behavior. You'll learn how to inspect an agent's reasoning steps, detect issues like hallucinations, bad tool calls, and missed actions, and leave knowing how to make your agent truly production-ready.

Speakers/Topics: Stay tuned as we update speakers and schedules. If you are interested in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Local and Global AI Community on Discord: join us on Discord for our local and global AI tech community.
**Technical Workshop: Observability Without Oversharing: Privacy-Conscious Telemetry for LLMs**
2025-08-28 · 06:00 — Event: AI_dev: Open Source GenAI Summit - Amsterdam

Workshop on privacy-conscious telemetry for LLMs.
**Docling: Get Your Documents Ready for Gen AI**
2025-08-28 · 06:00 — Event: AI_dev: Open Source GenAI Summit - Amsterdam

Talk on preparing documents for Gen AI.
**Who Let the Bots Out? A Guide to Evaluating AI Agents**
2025-08-28 · 06:00 — Event: AI_dev: Open Source GenAI Summit - Amsterdam
Josh Reini – Developer Advocate @ Snowflake (AI/ML)

Talk about evaluating AI agents.
**Streamlining AI Pipelines With Elyra: From Development To Inference With KServe & VLLM**
2025-08-28 · 06:00 — Event: AI_dev: Open Source GenAI Summit - Amsterdam

Talk about Elyra, KServe and VLLM.
**Workshop: Designing, deploying, and evaluating multi-agent systems using Snowflake Cortex**
2025-07-30 · 17:05 — Event: WEBINAR "Building Reliable Multi-Agent Systems in the Enterprise"
Josh Reini – Developer Advocate @ Snowflake

As enterprise AI adoption accelerates, data agents that can plan, retrieve, reason, and act across structured and unstructured sources are becoming foundational. But building agents that work is no longer enough; you need to build agents you can trust. This 60-minute workshop walks through how to design, deploy, and evaluate multi-agent systems using Snowflake Cortex. You'll build agents that connect to enterprise data sources (structured and unstructured) and perform intelligent, multi-step operations with Cortex Analyst and Cortex Search. Then we'll go beyond functionality and focus on reliability: you'll learn how to instrument your agent with inline, reference-free evaluation to measure goal progress, detect failure modes, and adapt plans dynamically. Using trace-based observability tools like TruLens and the Cortex eval APIs, we'll show how to identify inefficiencies and refine agent behavior iteratively. By the end of this workshop, you'll:

- Build a data agent capable of answering complex queries across multiple data sources
- Integrate inline evaluation to guide and assess agent behavior in real time
- Debug and optimize execution flows using trace-level observability
- Leave with a repeatable framework for deploying trustworthy agentic systems in production
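The workshop outline centers on instrumenting every tool call so failures and inefficiencies surface inline, rather than only in the final answer. A minimal sketch of that tracing pattern, with all names (`traced_tool`, `inline_eval`, the in-memory `TRACE` list) invented for illustration and no dependency on the actual Cortex or TruLens APIs:

```python
import time
from functools import wraps

TRACE: list[dict] = []  # in-memory span log; real systems export spans instead

def traced_tool(name: str):
    """Wrap a tool so each call emits a span with timing and outcome."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                TRACE.append({"tool": name, "args": args, "ok": True,
                              "latency_s": time.perf_counter() - start})
                return result
            except Exception as exc:
                TRACE.append({"tool": name, "args": args, "ok": False,
                              "error": repr(exc),
                              "latency_s": time.perf_counter() - start})
                raise
        return wrapper
    return decorator

@traced_tool("search")
def search(query: str) -> list[str]:
    # Stand-in for a Cortex Search call against enterprise documents.
    return [f"doc about {query}"]

def inline_eval(trace: list[dict]) -> list[str]:
    """Reference-free checks run as the agent executes: flag failed or slow calls."""
    issues = []
    for span in trace:
        if not span["ok"]:
            issues.append(f"{span['tool']} failed: {span['error']}")
        elif span["latency_s"] > 5.0:
            issues.append(f"{span['tool']} slow: {span['latency_s']:.1f}s")
    return issues

search("quarterly revenue")
print(inline_eval(TRACE))
```

Because evaluation reads the same trace the agent produces, the agent loop can consult `inline_eval` mid-run and re-plan when issues accumulate, which is the "adapt plans dynamically" behavior the workshop describes.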