Hidden Risks: Artificial Intelligence and Hermeneutic Harm

Andrew Rebera, Lode Lauwaert, Ann-Katrien Oimann

Research output: Contribution to journal › Article › peer-review

Abstract

The AI Ethics literature has identified many forms of harm caused, perpetuated or exacerbated by artificial intelligence (AI). One, however, has been overlooked. In this paper we argue that the increasing use of AI heightens the risk of ‘hermeneutic harm’, which occurs when people are unable to make sense of, or come to terms with, unexpected, unwelcome, or harmful events they experience. We develop several examples to support our argument that AI increases the risk of hermeneutic harm. Importantly, our argument makes no assumption of flawed design, biased training data, or misuse: hermeneutic harm can occur regardless. Explainable AI (XAI) could plausibly reduce the risk of hermeneutic harm in some cases. Thus, one respect in which this paper advances the field is that it shows the need to further broaden XAI’s understanding of the social function of explanation. Yet XAI cannot fully mitigate the risk of hermeneutic harm, which (as our choice of examples shows) would persist even if all ‘black-box’ problems of system opacity were to be solved. The paper thus highlights an important but underexplored risk posed by AI systems.
Original language: English
Article number: 33
Journal: Minds and Machines
Volume: 35
Publication status: Published - 2025

