As AI models become more powerful, the companies building them face increasingly powerful adversaries. As AI approaches human level, we expect a range of risks, but it would be particularly bad if malicious actors got their hands on unprotected versions of extremely intelligent models. To prevent that, AI companies will eventually need to be secured against the strongest adversaries, a standard of protection that the global policy think tank RAND calls Security Level 5 (SL5). The SL5 Task Force is developing plans and prototypes for reaching this level of security, under the assumption that we don't have time to wait for financial incentives to align. Berlin-based AI researcher and aisafety.berlin organiser Guy will share some of his work with the Task Force and answer questions.
Speaker: Guy
Berlin-based AI researcher and aisafety.berlin organiser
Bio from: SL5 Task Force: Securing superintelligent AI models against powerful adversaries
Talks & appearances
SL5 Task Force: Securing superintelligent AI models against powerful adversaries