In this session, we’ll take a closer look at the security risks that come with integrating LLMs into applications. LLMs can be powerful allies in cybersecurity — helping with detection, testing, and protection — but they can just as easily be exploited for attacks. We’ll explore key threats such as prompt injection, jailbreaking, and agent-specific vulnerabilities, and discuss why they are currently seen as the most pressing risks. Finally, we’ll look at defense strategies, from prompt-level safeguards to system-wide controls, and show how best practices can make a real difference in securing AI systems.
talk-data.com
S
Speaker
Sebastian Krauß
1
talks
AI Test Developer
Validaitor
Sebastian is an AI Test Developer at Validaitor. With a background in Mechatronics and Autonomous Systems, and hands-on experience at Bosch, Fraunhofer, and in international research settings, Sebastian focuses on the intersection of AI robustness and real-world deployment. His current work involves developing methods to test AI models for vulnerabilities, adversarial risks, and secure behavior—ensuring AI systems perform reliably and ethically.
Bio from: PyData Rhein-Main I Security Risks in AI & Structured Automation with Agentic AI
Filter by Event / Source
Talks & appearances
1 activities · Newest first