TY  - JOUR
T1  - Hidden Risks: Artificial Intelligence and Hermeneutic Harm
AU  - Rebera, Andrew
AU  - Lauwaert, Lode
AU  - Oimann, Ann-Katrien
PY  - 2025
Y1  - 2025
AB  - The AI Ethics literature has identified many forms of harm caused, perpetuated or exacerbated by artificial intelligence (AI). One, however, has been overlooked. In this paper we argue that the increasing use of AI heightens the risk of ‘hermeneutic harm’, which occurs when people are unable to make sense of, or come to terms with, unexpected, unwelcome, or harmful events they experience. We develop several examples to support our argument that AI increases the risk of hermeneutic harm. Importantly, our argument makes no assumption of flawed design, biased training data, or misuse: hermeneutic harm can occur regardless. Explainable AI (XAI) could plausibly reduce the risk of hermeneutic harm in some cases. Thus, one respect in which this paper advances the field is that it shows the need to further broaden XAI’s understanding of the social function of explanation. Yet XAI cannot fully mitigate the risk of hermeneutic harm, which (as our choice of examples shows) would persist even if all ‘black-box’ problems of system opacity were to be solved. The paper thus highlights an important but underexplored risk posed by AI systems.
UR  - https://link.springer.com/article/10.1007/s11023-025-09733-0
M3  - Article
SN  - 1572-8641
VL  - 35
JO  - Minds and Machines
JF  - Minds and Machines
M1  - 33
ER  -