The future of war: could lethal autonomous weapons make conflict more ethical?

  • Open Forum
  • Published in AI & SOCIETY

Abstract

Lethal Autonomous Weapons (LAWs) are robotic weapon systems, primarily of military value, that can engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular attention is given to ethical LAWs: artificially intelligent weapon systems that make decisions within the bounds of their ethics-based code. To take into account a wide, but not exhaustive, survey of the implications of employing such ethical devices to replace humans in warfare, this paper engages with current scholarship on the rejection or acceptance of LAWs (including the contemporary technological shortcomings of LAWs in differentiating between targets, and the behavioral and psychological volatility of humans) as well as current and proposed regulatory infrastructures for developing and using such devices. After careful consideration of these factors, this paper concludes that only ethical LAWs should be used to replace human involvement in war and that, by extension of their consistent abilities, they should remove humans from war until a more formidable means of conducting ethical warfare is discovered.


Notes

  1. ‘Autonomous’ in this regard refers to a system being pre-programmed to function independently of human control or supervision, and does not presuppose autonomy as a construct of consciousness of the kind considered to be possessed by human agents.

  2. Although some issues within the command and control infrastructure can arise from such an abdication of strategic targeting to LAWs, the philosophical issues at play in this paper remain unaffected given the approach taken. Technical and legislative measures to address this must obviously take precedence when aiming to resolve these issues. For a more in-depth discussion of these issues, see Roff (2014).

  3. Michal Klincewicz (2015) provides a uniquely thorough account of the psychological differentiation between autonomous weapons systems and humans.

  4. ‘Ethical’ in this context, and throughout the paper, is used in a pragmatic way, such that an ethical LAW is one that functions in accordance with the Laws of War (LoW) and Rules of Engagement (RoE). As the paper argues, abiding by these guidelines provides an initial step that can ameliorate unnecessary violence.

  5. Value-laden programming here refers to the explicit programming of values into a system. This does not discount the fact that the design of technology always implicates some values, usually those of the designers and engineers who make certain decisions rather than others during the design process.

References


Author information


Corresponding author

Correspondence to Steven Umbrello.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Umbrello, S., Torres, P. & De Bellis, A.F. The future of war: could lethal autonomous weapons make conflict more ethical?. AI & Soc 35, 273–282 (2020). https://doi.org/10.1007/s00146-019-00879-x

