talk-data.com


Showing 4 results

Activities & events

Title & Speakers | Event
Katarina Slama – PhD, Research Scientist @ UK AI Security Institute

AI safety discourse often splits into immediate-harm vs. catastrophic-risk framings. In this keynote, I argue that the two research streams would benefit from increased cross-talk and a greater number of synergistic projects. A zero-sum framing of attention and resources between the two communities is mistaken and serves neither side's goals. Recent theoretical work, including on accumulative existential risk, unifies risk pathways between the two fields. Building on this, I point to concrete synergies that are already in place, as well as opportunities for future collaboration.

I will discuss how shared research and monitoring infrastructure, such as UK AISI Inspect, can benefit both areas; how methodological approaches from human behavioral science, currently used in immediate harms research, can be ported into AI behavioral science applied to existential risk research; and how technical solutions from catastrophic risk research can be applied to mitigate immediate societal harms. We have a shared goal of building a better, safer future for everyone. Let's work together!

Keep Learning and Building! Accelerate your professional development with hands-on training, talks, workshops, networking events, 10+ tracks, and more at the ODSC West AI Training Conference (San Francisco and virtual). More here: https://odsc.ai/

ai safety, human behavioral science, existential risk, catastrophic risk, uk aisi inspect
Virtual Keynote Talk "AI Safety: Near and Far"
Katarina Slama – PhD, Research Scientist @ UK AI Security Institute

AI/ML
Virtual Keynote Talk "AI Safety: Near and Far"
Katarina Slama – PhD, Research Scientist @ UK AI Security Institute

Date: 2025-10-28.

AI/ML
Virtual Keynote Talk "AI Safety: Near and Far"
Katarina Slama, PhD – Research Scientist @ UK AI Security Institute

AI/ML
Virtual Keynote Talk "AI Safety: Near and Far"