Command responsibility in military AI contexts: balancing theory and practicality

Research output: Contribution to journal › Article › peer-review

Abstract

Artificial intelligence (AI) has found applications, to varying degrees, across diverse domains, including the possibility of its use in military contexts to make decisions that can have moral consequences. A recurring challenge in
this area concerns the allocation of moral responsibility in the case of negative AI-induced outcomes. Some scholars posit
the existence of an insurmountable “responsibility gap”, wherein neither the AI system nor the human agents involved can
or should be held responsible. Conversely, other scholars dispute the presence of such gaps or propose potential solutions.
One solution that frequently emerges in the literature on AI ethics is the concept of command responsibility, wherein
human agents may be held responsible because they perform a supervisory role over the (subordinate) AI. In this article
we examine the viability of command responsibility in light of recent empirical studies and psychological evidence,
aiming to anchor the discussion in empirical realities rather than relying exclusively on normative arguments. Our argument
can be succinctly summarized as follows: (1) while the theoretical foundation of command responsibility appears robust,
(2) its practical implementation raises significant concerns; (3) yet these concerns alone should not entirely preclude its
application, because (4) they underscore the importance of integrating empirical evidence into ethical discussions.
Original language: English
Journal: AI and Ethics
Publication status: Published - 22 Jun 2024

Keywords

  • Command responsibility
  • Responsibility gap
  • Artificial intelligence
  • AI ethics
  • Psychology

