Abstract: I will navigate the alignment challenges and safety considerations of LLMs, addressing both their capabilities and limitations, with a particular focus on instruction prefix tuning and its theoretical limitations for alignment. I will also discuss fairness across languages in the tokenizers commonly used in LLMs. Finally, I will address safety considerations for agentic systems, illustrating how their safety can be compromised by exploiting seemingly minor changes, such as altering the desktop background, to trigger a chain of sequenced harmful actions, and I will explore the transferability of these vulnerabilities across different agents.
Speaker: Dr. Adel Bibi