TY - JOUR
T1 - Reactive Attitudes and AI-Agents – Making Sense of Responsibility and Control Gaps
AU - Rebera, Andrew
PY - 2024
AB - Responsibility gaps occur when autonomous machines cause harms for which nobody can be justifiably held morally responsible. The debate around responsibility gaps has focused primarily on the question of responsibility, but other approaches focus on the victims of the associated harms. In this paper I consider how the victims of ‘AI-harm’—by which I mean harms implicated in responsibility gap cases and caused by AI-agents—can make sense of what has happened to them. The reactive attitudes have an important role here. I argue that regulating our reactive attitudes—withdrawing or sustaining them as our understanding of the actions, motives, status, and character of others develops—is an important part of the way in which we make sense of the harms we suffer. But, I argue, AI-agents can disrupt standard practices of regulating the reactive attitudes. This can lead victims of AI-harm to suffer a secondary hermeneutic harm (in addition to the primary AI-harm). This harm, which the responsibility gap and AI ethics literatures have neglected, can occur whether responsibility gaps are real or not.
DO - 10.1007/s13347-024-00808-x
M3 - Article
SN - 2210-5433
VL - 37
JO - Philosophy & Technology
JF - Philosophy & Technology
ER -