Abstract
Lethal Autonomous Weapons (LAWs) are robotic weapon systems, primarily of military value, that can engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular attention is given to ethical LAWs: artificially intelligent weapon systems that make decisions within the bounds of their ethics-based code. To survey broadly, though not exhaustively, the implications of employing such devices to replace humans in warfare, the paper engages with current scholarship on the rejection or acceptance of LAWs (including the present technological shortcomings of LAWs in differentiating between targets, and the behavioral and psychological volatility of humans) as well as current and proposed regulatory infrastructures for developing and using such devices. After careful consideration of these factors, the paper concludes that only ethical LAWs should be used to replace human involvement in war and that, given their consistent abilities, they should remove humans from war until a better means of conducting ethical warfare is discovered.
Notes
‘Autonomous’ in this regard refers to a system being pre-programmed to function independently of human control or supervision, and does not presuppose autonomy as a construct of consciousness of the kind attributed to human agents.
Although some issues within the command and control infrastructure can arise from such an abdication of strategic targeting to LAWs, the philosophical issues at play in the paper remain unaffected given the approach taken. Technical and legislative measures to address this must obviously take precedence when aiming to resolve these issues. For a more in-depth discussion of these issues, see Roff 2014.
Michal Klincewicz (2015) provides a uniquely thorough account of the psychological differentiation between autonomous weapons systems and humans.
‘Ethical’ in this context, and throughout the paper, is used in a pragmatic sense: an ethical LAW is one that functions in accordance with the LoW and RoE. As the paper argues, abiding by these guidelines provides an initial step that can ameliorate unnecessary violence.
Value-laden programming here refers to the explicit programming of values into a system. This does not discount the fact that the design of technology always implicates some values, usually those of the designers and engineers who make certain decisions rather than others during the design process.
References
Arkin R (2015) The case for banning killer robots. Commun ACM 58(12):46–47. https://doi.org/10.1145/2835965
Arkin RC (2008) Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture. Part I: motivation and philosophy. In: Proceedings of the 3rd International Conference on Human–Robot Interaction (HRI ’08). ACM Press, New York, p 121. https://doi.org/10.1145/1349822.1349839
Asaro PM (2008) How just could a robot war be? In: Proceedings of the 2008 Conference on Current Issues in Computing and Philosophy. IOS Press, Amsterdam, pp 50–64. http://dl.acm.org/citation.cfm?id=1566234.1566243. Accessed 28 Jan 2017
Barrat J (2013) Our final invention. Thomas Dunne Books, New York
Baum SD (2015) Winter-safe deterrence: the risk of nuclear winter and its challenge to deterrence. Contemp Secur Policy 36(1):123–148. https://doi.org/10.1080/13523260.2015.1012346. (Taylor & Francis)
Boisboissel G (2015) Uses of lethal autonomous weapon systems. In International Conference on Military Technologies (ICMT) 2015, 1–6. IEEE. https://doi.org/10.1109/MILTECHS.2015.7153656
Bourget D, Chalmers D (2013) What do philosophers believe? Philos Stud 170(3):465–500
Burke KA, Oron-Gilad T, Conway G, Hancock PA (2007) Friend/foe identification and shooting performance: effects of prior task loading and time pressure. Proc Hum Factors Ergon Soc Annu Meet 51(4):156–160. https://doi.org/10.1177/154193120705100403
Chase C (2015) Surviving AI: the promise and peril of artificial intelligence. Three Cs, London
Danielson P (1999) Evolutionary models of cooperative mechanisms: artificial morality and genetic programming. In: Danielson P (ed) Modeling rationality, morality, and evolution. Oxford University Press, pp 423–462. https://global.oup.com/academic/product/modeling-rationality-morality-and-evolution-9780195125498?cc=ca&lang=en&. Accessed 28 Jan 2017
Davidson D (1982) Paradoxes of irrationality. In: Problems of rationality. Oxford University Press, pp 189–198. https://academiaanalitica.files.wordpress.com/2016/10/donald-davidson-problems-of-rationality-2004.pdf. Accessed 28 Jan 2017
DeBaets AM (2014) Can a robot pursue the good? Exploring artificial moral agency. J Evol Technol 24(3):76–86. http://jetpress.org/v24.3/DeBaets.htm. Accessed 28 Jan 2017
Egeland K (2016) Lethal autonomous weapon systems under international humanitarian law. Nord J Int Law 85(2):89–118. https://doi.org/10.1163/15718107-08502001
Ekelhof M, Struyk M (2014) Deadly decisions: 8 objections to killer robots. https://books.google.ca/books/about/Deadly_Decisions.html?id=66UXrgEACAAJ&redir_esc=y. Accessed 28 Jan 2017
Geibel A (1997) Learning from their mistakes: Russia’s arena active protection system. http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA323300. Accessed 28 Jan 2017
Goertzel B (2016) Infusing advanced AGIs with human-like value systems: two theses. J Evol Technol 26(1):50–72
Guetlein MA (2005) Lethal autonomous weapons—ethical and doctrinal implications. Newport: Naval War College. http://www.dtic.mil/docs/citations/ADA464896. Accessed 28 Jan 2017
Guizzo E (2016) Autonomous weapons could be developed for use within years, says arms-control group. IEEE Spectrum. https://spectrum.ieee.org/automaton/robotics/military-robots/autonomous-weapons-could-be-developed-for-use-within-years. Accessed 28 Jan 2017
Heyns C (2013) Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. Human Rights Council. United Nations General Assembly. http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf. Accessed 28 Jan 2017
Higgins E (2017) Obama’s drone war escalating in his last year in office. Huffington Post. http://www.huffingtonpost.com/entry/obamas-drone-war-escalating_b_9611490.html. Accessed 28 Jan 2017
ICRAC (2009) Original mission statement. International Committee for Robot Arms Control. https://icrac.net/statements/. Accessed 28 Jan 2017
ICRAC (2014) 2014 mission statement. International Committee for Robot Arms Control. https://icrac.net/statements/. Accessed 28 Jan 2017
ICRC 2014. 1980 Convention on certain conventional weapons—factsheet. International Committee of the Red Cross. https://www.icrc.org/en/document/1980-convention-certain-conventional-weapons. Accessed 28 Jan 2017
Jacoby GA, Chang JD (2008) Towards command and control networking of cooperative autonomous robotics for military applications (CARMA). In: 2008 Canadian Conference on Electrical and Computer Engineering, 000815–20. IEEE. https://doi.org/10.1109/CCECE.2008.4564649
Jenks C (2010) Law from above: unmanned aerial systems, use of force, and the law of armed conflict. N D Law Rev 85:649. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1569904. Accessed 28 Jan 2017
Johnson AM, Axinn S (2013) The Morality of autonomous robots. J Mil Ethics 12(2):129–141. https://doi.org/10.1080/15027570.2013.818399
Altmann J (2008) Military uses of nanotechnology—too much complexity for international security? Complexity 2(1):62–70. https://doi.org/10.1002/cplx
Kantrowitz A (1992) The weapon of openness. In: Crandal BC, Lewis J (eds) Nanotechnology research and perspectives. MIT Press, Cambridge, pp 303–311. https://www.foresight.org/Updates/Background4.html. Accessed 28 Jan 2017
Katz Y, Lappin Y (2012) Iron dome ups its interception rate to over 90%. Jerusalem Post. http://www.jpost.com/Defense/Iron-Dome-ups-its-interception-rate-to-over-90-percent. Accessed 28 Jan 2017
Kirk A (2015) What are the biggest defence budgets in the world? The Telegraph. http://www.telegraph.co.uk/news/uknews/defence/11936179/What-are-the-biggest-defence-budgets-in-the-world.html. Accessed 28 Jan 2017
Klincewicz M (2015) Autonomous weapons systems, the frame problem and computer security. J Mil Ethics 14(2):162–176. https://doi.org/10.1080/15027570.2015.1069013. (Taylor & Francis)
Krishnan A (2009) Killer robots: legality and ethicality of autonomous weapons. Ashgate. https://books.google.ca/books/about/Killer_Robots.html?id=klvlN9PgBeYC. Accessed 28 Jan 2017
Lewis J (2015) The case for regulating fully autonomous weapons. Yale Law J 124:1309. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2528370. Accessed 28 Jan 2017
Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design. http://ethics.calpoly.edu/onr_report.pdf. Accessed 28 Jan 2017
Marauhn T (2014) An analysis of the potential impact of lethal autonomous weapons systems on responsibility and accountability for violations of international law. In CCW Expert Meeting on Lethal Autonomous Systems. Geneva. https://unoda-web.s3-accelerate.amazonaws.com/wp-content/uploads/assets/media/35FEA015C2466A57C1257CE4004BCA51/file/Marauhn_MX_Laws_SpeakingNotes_2014.pdf. Accessed 28 Jan 2017
Marchant GE, Allenby B, Arkin RC, Borenstein J, Gaudet LM, Kittrie O, Lin P, Lucas GR, O’Meara R, Silberman J (2015) International governance of autonomous military robots. In: Valavanis KP, Vachtsevanos GJ (eds) Handbook of unmanned aerial vehicles. Springer, Dordrecht, pp 2879–2910. https://doi.org/10.1007/978-90-481-9707-1_102
McLean W (2014) Drones are cheap, soldiers are not: a cost-benefit analysis of war. The Conversation. http://theconversation.com/drones-are-cheap-soldiers-are-not-a-cost-benefit-analysis-of-war-27924. Accessed 28 Jan 2017
Mills MJ, Toon OB, Lee-Taylor J, Robock A (2014) Multidecadal global cooling and unprecedented ozone loss following a regional nuclear conflict. Earth’s Future 2(4):161–176. https://doi.org/10.1002/2013EF000205. (Wiley Periodicals, Inc.)
Nadeau JE (2006) Only androids can be ethical. In: KM Ford, CN Glymour, PJ Hayes (eds) Thinking about android epistemology, MIT Press, Boston
Nibbeling N, Oudejans RRD, Ubink EM, Daanen HAM (2014) The effects of anxiety and exercise-induced fatigue on shooting accuracy and cognitive performance in infantry soldiers. Ergonomics 57(9):1366–1379. https://doi.org/10.1080/00140139.2014.924572
O’Meara RM (2011) Contemporary governance architecture regarding robotics technologies: an assessment. In: George AB, Abney K, Lin P (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 159–168. http://arteca.mit.edu/book/robot-ethics
Pinch T, Bijker WE (1987) The social construction of facts and artifacts. In: Bijker WE, Hughes TP, Pinch T (eds) The social construction of technological systems: new directions in the sociology and history of technology. MIT Press, Cambridge, p 405. https://books.google.ca/books?id=B_Tas3u48f8C. Accessed 28 Jan 2017
Roff HM (2014) The strategic robot problem: lethal autonomous weapons in war. J Mil Ethics 13(3): 211–227. https://doi.org/10.1080/15027570.2014.975010. (Taylor & Francis)
Sauer F (2016) Stopping ‘killer robots’: why now is the time to ban autonomous weapons systems. Arms Control Association. https://www.armscontrol.org/ACT/2016_10/Features/Stopping-Killer-Robots-Why-Now-Is-the-Time-to-Ban-Autonomous-Weapons-Systems. Accessed 28 Jan 2017
Shachtman N (2007) Robo-Snipers, ‘Auto Kill Zones’ to Protect Israeli Border. Wired. https://www.wired.com/2007/06/for_years_and_y/. Accessed 28 Jan 2017
Sharkey NE (2010) Saying ‘No!’ to lethal autonomous targeting. J Mil Ethics 9 (4): 369–83. https://doi.org/10.1080/15027570.2010.537903. (Routledge)
Sharkey NE (2012) The evitability of autonomous robot warfare. Int Rev Red Cross 94(886):787–799. https://www.icrc.org/eng/resources/documents/article/review-2012/irrc-886-sharkey.htm. Accessed 28 Jan 2017
Shulman C, Jonsson H, Tarleton N (2009) Machine ethics and superintelligence. In: AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, Tokyo, pp 95–97. http://ia-cap.org/ap-cap09/proceedings.pdf. Accessed 28 Jan 2017
Singer PW (2009a) Military robots and the laws of war. The New Atlantis. http://cvrr.ucsd.edu/ece172a/fa10/papers/MilitaryRobotsSinger2009.pdf. Accessed 28 Jan 2017
Singer PW (2009b) Wired for war: the robotics revolution and conflict in the 21st Century. Ethics Int Aff 23:312–313. https://doi.org/10.1111/j.1747-7093.2009.00222_4.x
Soares N (2016) The value learning problem. In: Ethics for Artificial Intelligence Workshop at the 25th International Joint Conference on Artificial Intelligence, pp 1–8. Machine Intelligence Research Institute
Tarleton N (2010) Coherent extrapolated volition: a meta-level approach to machine ethics. https://intelligence.org/files/CEV-MachineEthics.pdf. Accessed 28 Jan 2017
Thurnher JS (2012) Legal implications of fully autonomous targeting. Jt Force Q 67:77–84. http://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-67/JFQ-67_77-84_Thurnher.pdf. Accessed 28 Jan 2017
Thurnher JS (2013) The law that applies to autonomous weapon systems. ASIL Insights 17(4). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2296343. Accessed 28 Jan 2017
Thurnher JS (2016) Means and methods of the future: autonomous systems. In: Ducheine PAL, Schmitt MN, Osinga FPB (eds) Targeting: the challenges of modern warfare. TMC Asser Press, The Hague, pp 177–199. https://doi.org/10.1007/978-94-6265-072-5_9
United Nations (1979) Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (AP I). Vol. 1125. https://treaties.un.org/doc/publication/unts/volume 1125/volume-1125-i-17512-english.pdf. Accessed 28 Jan 2017
United States Navy (2017) MK 15—Phalanx Close-In Weapons System (CIWS). The US Navy—Fact File. http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=487&ct=2. Accessed 28 Jan 2017
Varden H (2010) Kant and lying to the murderer at the door… one more time: Kant’s legal philosophy and lies to murderers and Nazis. J Soc Philos 41(4):403–421. https://doi.org/10.1111/j.1467-9833.2010.01507.x. (Blackwell Publishing Inc)
Wallach W, Allen C, Smit I (2008) Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI Soc 22(4):565–582. https://doi.org/10.1007/s00146-007-0099-0
Walzer M (1991) Just and unjust wars: a moral argument with historical illustrations. https://books.google.ca/books/about/Just_and_Unjust_Wars.html?id=EuTQCQAAQBAJ&redir_esc=y. Accessed 28 Jan 2017
Whitman J (2011) The arms control challenges of nanotechnology. Contemp Secur Policy 32 (1): 99–115. https://doi.org/10.1080/13523260.2011.556848. (Routledge)
Wilson W (2012) The Pennyfarthing H-Bomb. The World Today. https://www.academia.edu/5105513/Pennyfarthing_H-Bomb. Accessed 28 Jan 2017
Wilson G (2013) Minimizing global catastrophic and existential risks from emerging technologies through international law. Va Environ Law J 31(1):307–364. http://web.a.ebscohost.com/abstract?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=10455183&AN=101117008&h=HCtN0Qwj9FkMi0BNk3BVUMyUEAZw1qOJJEHFaYjkQwVTp%2FmyBP2O2XrD%2F3Vl6hfhLdItl91ov6ATK5PhtMHBKA%3D%3D&crl=c&resultNs=AdminWebAuth&resultLocal). Accessed 28 Jan 2017
Work B (2015) The third US offset strategy and its implications for partners and allies. US Department of Defense. https://www.defense.gov/News/Speeches/Speech-View/Article/606641/the-third-us-offset-strategy-and-its-implications-for-partners-and-allies/. Accessed 28 Jan 2017
Xia L, Robock A, Mills M, Stenke A, Helfand I (2015) Decadal reduction of Chinese agriculture after a regional nuclear war. Earth’s Future 3 (2): 37–48. https://doi.org/10.1002/2014EF000283. (Wiley Periodicals, Inc.)
Bibliography
Davis J, Nathan LP (2015) Value sensitive design: applications, adaptations, and critiques. In: van den Hoven J, Vermaas PE, van de Poel I (eds) Handbook of ethics, values, and technological design: sources, theory, values and application domains. Springer, Berlin, pp 12–40. https://doi.org/10.1007/978-94-007-6970-0
Manders-Huits N (2011) What values in design? The challenge of incorporating moral values into design. Sci Eng Ethics 17(2):271–287. https://doi.org/10.1007/s11948-010-9198-2
McClelland J (2003) The review of weapons in accordance with Article 36 of Additional Protocol I. https://www.icrc.org/eng/assets/files/other/irrc_850_mcclelland.pdf. Accessed 28 Jan 2017
Pereira LM, Saptawijaya A (2007) Modelling morality with prospective logic. In: Neves J, Santos MF, Machado JM (eds) Progress in artificial intelligence: 13th Portuguese Conference on Artificial Intelligence, EPIA 2007. Springer, Berlin, Heidelberg, pp 99–111. https://doi.org/10.1007/978-3-540-77002-2_9
Winner L (2003) Do artifacts have politics? Technol Future 109(1):148–164. https://doi.org/10.2307/20024652
Cite this article
Umbrello, S., Torres, P. & De Bellis, A.F. The future of war: could lethal autonomous weapons make conflict more ethical?. AI & Soc 35, 273–282 (2020). https://doi.org/10.1007/s00146-019-00879-x