talk-data.com talk-data.com

Big Data LDN/Paris Face To Face 2025-10-02 at 12:30

Securing AI agents through continuous Red Teaming

Description

Prevent hallucinations and vulnerabilities in LLM agents Learn how continuous Red Teaming can protect your LLM agents from emerging threats like hallucinations and data leakage. We'll present how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents.