Abstract: I will navigate the alignment challenges and safety considerations of LLMs, addressing both their capabilities and limitations, with a particular focus on instruction prefix tuning techniques and their theoretical limitations for alignment. I will also discuss fairness across languages in the tokenizers commonly used in LLMs. Finally, I will address safety considerations for agentic systems, illustrating how their safety can be compromised by exploiting seemingly minor changes, such as altering the desktop background to trigger a chain of sequenced harmful actions, and I will explore the transferability of these vulnerabilities across different agents.
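The tokenizer-fairness point can be illustrated with a minimal sketch. Under the assumption of a byte-level BPE tokenizer (the family used by many LLMs), UTF-8 byte length is a crude lower-bound proxy for tokenization cost: scripts whose characters need more bytes tend to be split into more tokens for semantically comparable text. The sentences below are illustrative examples, not from the talk.

```python
# Crude proxy for cross-language tokenizer cost (assumption: a byte-level
# BPE tokenizer, where token count is bounded above by UTF-8 byte count).
# Latin script uses 1 byte/char, Greek 2, Devanagari 3, so comparable
# sentences start from very different byte budgets.
sentences = {
    "English": "Hello, how are you today?",
    "Greek": "Γεια σου, πώς είσαι σήμερα;",
    "Hindi": "नमस्ते, आज आप कैसे हैं?",
}

for lang, text in sentences.items():
    chars = len(text)
    nbytes = len(text.encode("utf-8"))
    # bytes-per-character ratio: higher means more raw material for BPE to split
    print(f"{lang:8s} chars={chars:3d} utf8_bytes={nbytes:3d} bytes/char={nbytes / chars:.2f}")
```

A real measurement would swap the byte counts for calls to a concrete tokenizer, but the byte-level disparity already shows why token budgets, and therefore context length and API cost, are not equal across languages.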
talk-data.com
Topic: large language models (LLMs), 1 tagged
Activity Trend (2020-Q1 to 2026-Q1): peak 1 event/qtr
Top Events
- Beyond Boundaries: AI, GenAI, and LLMs Insights (2)
- Retrieval, search and knowledge in the age of LLM and Vector Databases (2)
- London Reactor Meetup I Microsoft Cloud meets AI (1)
- Neo4j Live: HybridAGI – Graph-Powered, Self-Programmable AI (1)
- London Seminar: Beyond LLM: GenAI for Trading and Asset Management (1)
- GenAI for SW developers - V.2! 1# -Measuring Mastery Assessing Large Language Mo (1)
- Quality Engineering meetup #9 (1)
- PyData Leeds: Leeds Digital Fest '25 (1)
- AI Meetup (November): GenAI LLMs and Agents (1)
- #21 AI Series: University of Oxford - Dr. A. Bibi (1)
- Advanced RAG Chatbot Assistant for Healthcare Patient Records (1)
- AI Seminars (Virtual): Self-Improvement with LLMs by Google DeepMind (1)