Command responsibility of autonomous weapons under international humanitarian law

Abstract The use of autonomous weapons is becoming one of the most significant threats facing humanity today. One of the major issues raised by their use is that of command responsibility. This paper examines the rules governing the operation of Autonomous Weapon Systems (AWS) on the battlefield, in particular with regard to command responsibility under international humanitarian law. The study also elaborates on the controversy that has arisen within the international community regarding the weapons' development and deployment. The study is normative-empirical research based on legal principles and facts, employing a descriptive-analytical method. It reveals that the use of AWS in armed conflict is not explicitly governed by international humanitarian law. The use of AWS could jeopardize several general principles of international humanitarian law, including proportionality, distinction, military necessity, and limitation. If the use of AWS results in war crimes, the commander can be held liable. However, whether the notion of command responsibility can be applied to AWS classified as "Human-out-of-the-Loop Weapons" is currently contested, because such weapon systems can select and engage targets without any human input or interaction.


Introduction
In the popular Terminator film, a super robot played by Arnold Schwarzenegger hunts down and attempts to kill a human target. In the 1980s, this was pure science fiction. Killer robots that hunt targets are now not only a reality; they are being sold and deployed on the battlefield. Some of the leading technology companies (e.g., Amazon, Microsoft, and Intel) are putting the world at risk through the development of killer robots (Ahmed, 2021). Furthermore, dozens of countries have echoed calls for negotiating a treaty to maintain "human control over robots" and to ban the use of fully autonomous weapons (Sauer, 2016).

ABOUT THE AUTHORS
Yordan Gunawan is currently a PhD researcher at the Department of International Law and International Relations, Facultat de Dret, Universitat Pompeu Fabra, Barcelona, Spain. He is also a Lecturer at the Faculty of Law, Universitas Muhammadiyah Yogyakarta, Indonesia. Muhamad Haris Aulawi is a Senior Lecturer at Universitas Muhammadiyah Yogyakarta's Faculty of Law. Rizaldy Anggriawan is a Researcher at Universitas Muhammadiyah Yogyakarta's Faculty of Law. Tri Anggoro Putro is a student at Kun Shan University in Taiwan, the Republic of China.
With the development of weapons technology, humans have sought to create modern weapons aimed at destroying and immobilizing the opposing country in the shortest possible time, in ever more effective and efficient ways. The means currently seen as most feasible rely on technology centered on automated artefacts that do not require a human role to perform their duties. Van Creveld divided the history of weapons into four phases: the age of tools, the age of machines, the age of systems, and the age of automation (Van Creveld, 2010). The embodiment of the age of automation has come to be known as the Autonomous Weapons System (AWS).
Such systems are in fact already widely used in various aspects of life. What is interesting and worth discussing, however, is the use of this technology as a combat force in robotic warfare. Noel E. Sharkey, Professor of Artificial Intelligence and Robotics and Public Engagement at the University of Sheffield, has followed the development of robot technology in more than 50 countries that are currently developing AWS specifically for use in armed conflict. He pointed out that Unmanned Aerial Vehicles (UAVs) have become the most versatile military equipment in modern warfare, and that future wars are highly likely to involve killer robots (Jha, 2016).
Human Rights Watch (HRW) advocacy director Mary Wareham believes the use of autonomous weapons is emerging as one of the most pressing threats to humanity in today's world (Meier, 2016). She criticized leading countries for failing to take steps to address the problem. Meanwhile, experts have warned that killer robots have the potential to wipe out human populations through indiscriminate attacks. The campaign group "Stop Killer Robots" is also pushing for a global agreement, calling on all countries to ban the use of killer robots. Wareham lamented that no progress was made in negotiations on an agreement to ban or limit killer robots at the CCW meeting in Geneva (Wareham, 2021). Instead, countries agreed to spend the next two years developing a normative and operational framework to address the problems of using such weapons systems (Rosert & Sauer, 2021).
The discussion of killer robots, driven by the Campaign to Stop Killer Robots, continues to be a crucial debate that can affect the dynamics of world weaponry. The forum used to discuss the issue is the meeting of states parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May be Deemed to be Excessively Injurious or Have Indiscriminate Effects (the 1980 Convention on Conventional Weapons (CCW); A. Sharkey, 2019). The CCW was adopted in 1980 and came into force in 1983. It has been ratified by 125 nations, including the United States, the United Kingdom, France, Russia, Israel, China, and South Korea (UN Office of Disarmament Affairs, 2020).
However, this convention does not regulate autonomous weapons systems; indeed, AWS do not fall into the categories of weapons prohibited under existing conventions, nor are autonomous weapons automatically within the category of prohibited weapons as defined by international humanitarian law. Nevertheless, a number of countries have begun to sound the alarm over such weapons systems. At the CCW meeting in mid-November 2019, UN Secretary-General Antonio Guterres called for a new international treaty to prohibit the use of killer robots for any purpose. In his view, machines that have the power and discretion to kill without human intervention are politically unacceptable and morally reprehensible (Hickleton, 2019).
Many people believe that AWS will be a double-edged sword: on the one hand, the sophistication of the technology offers real benefits; on the other, its negative impact could be very detrimental to humans (Chengeta, 2016). AWS is a new issue for which there are as yet no strict regulations or restrictions on its development and use in armed conflict, and there are therefore several rules that AWS has the potential to violate. This paper aims to investigate the rules on the use of AWS on the battlefield. It explores the existing laws, treaties, and conventions that implicitly regulate AI-based weapons from the perspective of international humanitarian law. It also analyzes the general principles of prevailing international humanitarian law in light of current facts. The study further elaborates the controversy that has emerged among the international community with regard to the development and utilization of the weapon.

Method
The study is normative-empirical research based on legal principles and facts. The facts studied are drawn from empirical and library materials or secondary data such as laws, research, journals, and scientific legal works. The method involves assessing the legal norms contained in regulations related to humanitarian law and correlating them with the use of the Autonomous Weapon System in international armed conflict. The research employs a descriptive-analytical method, in which the current state of the object of research is analyzed on the basis of facts. The data obtained have been analyzed in accordance with the applicable legal provisions.
Data were collected through library and empirical research, specifically by identifying concepts, theories, opinions, and findings closely related to the subject matter, in particular the general principles of humanitarian law, which are then connected to the use of AWS. Qualitative data analysis methods were used: organizing the data, sorting it into manageable units, synthesizing it, and categorizing it with the aim of finding themes and working hypotheses that could eventually be adopted as substantive theories.

Revolution in military affairs
The concept of the Revolution in Military Affairs (RMA) is a military-theoretical hypothesis about the future of warfare, often related to technological and organizational developments in the military (Kania, 2021). The RMA is also interpreted as a qualitative shift in the characteristic patterns of war in the modern era. A revolution in military affairs arises when a change in (military) technology is combined with organizational and operational changes, resulting in a transformation in the conduct of war. Broadly stated, the RMA thesis holds that, at certain periods in human history, new military doctrines, strategies, tactics, and technologies caused (unavoidable) changes in the conduct of warfare. These changes simultaneously forced an accelerated adaptation of new doctrines and strategies (Branch-Evans, 2001).
It is difficult to define the concept of a revolution in military affairs precisely and consistently. Examples of RMAs include the overall transformation of war through the development (and use) of aircraft and the atomic bomb (Lorber, 2020), and the change from wooden sailing ships to armored steamboats in the late 19th century (Walley, 2018). Changes in how war was carried out were also driven by political upheaval; for example, the levée en masse in France significantly increased the scale of land warfare (Schneider, 2019). The essence of the RMA is not the speed of change in military effectiveness, but rather the magnitude of the changes compared to previous military capabilities. Technological development is the most easily identifiable factor of the RMA (Steff & Abbasi, 2020), but technological development alone is not sufficient to account for changes in the relative effectiveness of the military. Blitzkrieg-style change can only occur when new operating concepts are linked to technological changes, and these two factors in turn reshape military organizations under new conditions. History suggests three prerequisites for the full realization of an RMA: technological development, military operational development, and organizational adaptation (Fitzsimonds & Van Tol, 1994). The industrial revolution marked a surge in technological development, not least in the military field. Technological developments in the context of the RMA highlight the evolution of weapons technology and information technology between countries, for example, the development of the internal combustion engine, which made self-propelled vehicles possible. Such discoveries must, of course, be adapted into practical military systems.
In the development and integration of tanks as military technology, for example, tanks were invented long before their use in military operations or their introduction at Cambrai in 1917 (Hammond, 2008). Military technology that has reached the capability to select and engage its own targets (Lethal Autonomous Weapon Systems) will also influence warfare, for example, unmanned aerial vehicles (UAVs), which combine nanotechnology, robotics, and biotechnology (Krishnan, 2016).
In order to fully exploit the potential of new technology systems, operational concepts that incorporate and integrate new technologies must be developed into a coherent doctrine. Military organizations must also practice using and refining them iteratively. After the tank was introduced into combat, it took decades of experimentation and doctrinal development to produce the Blitzkrieg. The success of the Blitzkrieg required not only tank technology and a coherent armored warfare doctrine, but also substantial organizational and even cultural changes, reflected in new combined-arms formations such as the German Panzer division (Macksey, 2018). The synergistic effect of these three prerequisites leads to an RMA. Indeed, the growing recognition of the importance of doctrinal and organizational elements gave rise to the term "revolution in military affairs," in contrast to the earlier notion of a military-technical revolution, which implied that technology is the main factor in the revolution.

The controversy of autonomous weapon system
The world is entering a new era of war, with artificial intelligence (AI) taking center stage. AI makes militaries faster, smarter, and more efficient (Goldfarb & Lindsay, 2020). If left unchecked, however, that potential will threaten world peace. A report released by the United States National Security Commission on Artificial Intelligence reveals a "new warfighting paradigm" that pits "algorithms against algorithms" and urges massive investment to keep innovating ahead of potential adversaries (Schmidt et al., 2021). In its latest five-year development plan, China also promotes AI in an effort to improve its research and development sector. The People's Liberation Army, under the command of the Chinese Communist Party, is preparing for a future state of what it calls "smart warfare" (Lu, 2021). As Russian President Vladimir Putin said in 2017, "whoever becomes a leader in this field will become the ruler of the world" (Haner & Garcia, 2019).
At the end of 2020, worsening tensions in the Caucasus region broke out into war, as Azerbaijan and Armenia fought over the disputed Nagorno-Karabakh region (Semercioğlu, 2021). For those paying attention, this was a turning point in warfare. Ulrike Franke, a drone warfare expert at the European Council on Foreign Relations, noted that a very important aspect of the conflict in Nagorno-Karabakh was the use of loitering munitions, so-called "kamikaze drones," which are fairly autonomous systems (Modebadze, 2021). Advanced loitering munition models are capable of a high degree of autonomy. Once launched, they fly to a designated target area, where they loiter, scanning for targets, usually air defense systems. Once they detect a target, they fly into it, destroying it with their onboard explosive charge. It is for this reason that these weapons have earned the nickname "kamikaze drones".
It is predicted that AI-based technologies such as swarm drones, which operate together as a unit, will be used for military purposes (Johnson, 2020). Martijn Rasser of the Center for a New American Security, a Washington-based think tank, argues that this type of drone can knock out air defense systems. The scale and speed of a drone swarm open up the prospect of military clashes so complex that humans cannot keep up, further fueling the dynamics of an arms race (Vincent, 2018).
Some military experts believe that autonomous weapons systems not only provide significant strategic and tactical advantages on the battlefield, but are also morally superior to the use of human combatants. Critics, on the other hand, believe that these weapons should be limited, if not prohibited entirely, for a variety of moral and legal reasons. Those who advocate for the continued development and deployment of autonomous weapons systems generally point to several military benefits. First and foremost, autonomous weapons systems serve as a force multiplier: fewer warfighters are required for a given mission, and each warfighter is more effective. Second, supporters credit autonomous weapons systems with expanding the battlefield by allowing combat to reach previously inaccessible areas. Finally, by removing human warfighters from dangerous missions, autonomous weapons systems can reduce casualties.
The Department of Defense's Unmanned Systems Roadmap: 2007-2032 provides additional justification for developing autonomous weapons systems. Robots, for example, are better suited than humans for "'dull, dirty, or dangerous' missions." Long-duration sorties are an example of a dull mission; a dirty mission would be one that exposes humans to potentially hazardous radiological material; explosive ordnance disposal is an example of a dangerous mission. According to Thurnher (2012) of the United States Army, "[lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to strike lethally even when communications links have been severed." Furthermore, the long-term savings that could be realized by fielding an army of military robots have been emphasized. In a 2013 article published in The Fiscal Times, Francis (2013) cites Department of Defense figures showing that "each soldier in Afghanistan costs the Pentagon roughly $850,000 per year"; some estimate the annual cost to be even higher. By contrast, Francis notes that "the TALON robot, a small rover that can be outfitted with weapons, costs $230,000." According to Defense News, Gen. Robert Cone, former commander of the United States Army Training and Doctrine Command, suggested at the 2014 Army Aviation Symposium that by relying more on "support robots," the Army could eventually reduce the size of a brigade from 4,000 to 3,000 soldiers without sacrificing effectiveness (Ackerman, 2014).
Air Force Maj. Jason S. DeSon discusses the potential benefits of autonomous aerial weapons systems in the Air Force Law Review. According to DeSon (2015), the physical strain of high-G maneuvers, as well as the intense mental concentration and situational awareness required of fighter pilots, make them susceptible to fatigue and exhaustion; robot pilots, on the other hand, would be free of these physiological and mental constraints. Furthermore, fully autonomous planes could be programmed to perform truly random and unpredictable actions, potentially confusing an opponent. More startling, Air Force Capt. Michael Byrnes predicts that a single unmanned aerial vehicle with machine-controlled maneuvering and accuracy could take out an entire fleet of aircraft flown by human pilots "with a few hundred rounds of ammunition and sufficient fuel reserves" (Byrnes, 2014). Several military experts and roboticists have also argued that autonomous weapons systems should be considered not only morally acceptable, but ethically preferable to human fighters. For example, roboticist Ronald C. Arkin believes that in the future, autonomous robots will be able to act more "humanely" on the battlefield for a variety of reasons, including the fact that they need not be programmed with a self-preservation instinct, potentially eliminating a "shoot-first, ask questions later" mentality. Their judgments will not be clouded by emotions like fear or hysteria, and they will be able to process far more incoming sensory information than humans without discarding or distorting it to fit preconceived notions. Finally, Arkin believes that in mixed teams of human and robot soldiers, the robots can be relied on to report observed ethical infractions more readily than a team of humans who may close ranks (Arkin, 2010).
According to US Army Lt. Col. Douglas A. Pryer, there may be ethical benefits to removing humans from high-stress combat zones in favor of robots. He cites neuroscience research indicating that when the neural circuits responsible for conscious self-control are overloaded with stress, they can shut down, leading to sexual assaults and other crimes that soldiers would otherwise be less likely to commit. Pryer, however, sets aside the question of whether waging war with robots is ethical in the abstract. Instead, he claims that robot warfare has serious strategic disadvantages and feeds a cycle of perpetual warfare, because it incites such moral outrage among the very populations whose support the US most needs (Pryer, 2013).
While some moral arguments are used to support autonomous weapons systems, others are used to oppose them, and still others contend that the moral arguments against such systems are flawed. At an international joint conference on artificial intelligence in July 2015, an open letter calling for the prohibition of autonomous weapons was released. "Artificial Intelligence (AI) technology has advanced to the point where the deployment of such systems is, practically if not legally, feasible within years, not decades," the letter warns, "and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms." The letter also notes that AI has great potential to benefit humanity, but that a military AI arms race could damage AI's reputation and provoke a public backlash that limits future benefits. Elon Musk (inventor and founder of Tesla), Steve Wozniak (cofounder of Apple), physicist Stephen Hawking (University of Cambridge), and Noam Chomsky (Massachusetts Institute of Technology), among others, have signed the letter, as have over three thousand AI and robotics researchers (Hawking et al., 2015). The open letter requests only "a prohibition on offensive autonomous weapons beyond meaningful human control." Yet it is frequently difficult to determine whether a weapon is offensive or defensive. For example, while many people regard an effective missile defense shield as strictly defensive, it can be extremely destabilizing if it allows one nation to launch a nuclear strike against another without fear of retaliation.
Furthermore, engineers, AI and robotics experts, and other scientists and researchers from 37 countries issued the "Scientists' Call to Ban Autonomous Lethal Robots" in 2013. The statement notes the lack of scientific evidence that robots will have "the functionality required for accurate target identification, situational awareness, or decisions regarding the proportional use of force" in the future, and that they may therefore cause significant collateral damage. It concludes by emphasizing that "decisions about the use of violent force must not be delegated to machines." Indeed, those who oppose autonomous weapons systems frequently express concern about the delegation of life-or-death decisions to nonhuman agents. The most visible manifestation of this concern is the ability of systems to select their own targets. Thus, N. Sharkey (2016), a highly regarded computer scientist, has called for a ban on "lethal autonomous targeting" because it violates the principle of distinction, regarded as one of the most important rules of armed conflict: autonomous weapons systems will struggle to determine who is a civilian and who is a combatant, a distinction that is difficult even for humans. Allowing AI to make targeting decisions will almost certainly result in civilian casualties and unacceptable collateral damage.
Another major source of concern is accountability when autonomous weapons systems are used. The ethicist Robert Sparrow emphasizes this issue: a fundamental requirement of international humanitarian law, or jus in bello, is that someone must be held accountable for civilian deaths. Any weapon or other means of war that makes it impossible to assign responsibility for the casualties it causes does not meet the requirements of jus in bello and should not be used in battle. The problem arises because AI-equipped machines make their own decisions, making it difficult to determine whether a flawed decision results from flaws in the program or from the autonomous deliberations of the AI-equipped (so-called smart) machines. The nature of this problem was highlighted when a driverless car violated speed limits on a highway by moving too slowly and it was unclear to whom the ticket should be issued (Sparrow, 2007). When a human being decides to use force against a target, there is a clear chain of accountability, extending from whoever "pulled the trigger" to the commander who issued the order. In the case of autonomous weapons systems, there is no such clarity; it is unclear who or what is to blame or should be held accountable.

The command responsibility and use of AI-based weapon under humanitarian law
The United States Department of Defense defines an "Autonomous Weapon System" as "a weapon system that, once activated, can select and engage targets without further intervention by a human operator." The definition includes human-supervised autonomous weapon systems, which are designed to allow human operators to override the weapon system's operation but can select and engage targets without further human input after activation. Under Article 36 of Additional Protocol I to the Geneva Conventions, each state party must review new weapons to ensure that their form and effects comply with international law. In the sphere of international law, the legality of a lethal weapon must meet the standards of both jus ad bellum and jus in bello, most importantly the jus in bello principles of distinction and proportionality. The principle of distinction states that weapons, techniques, and methods of warfare must be capable of distinguishing between legitimate targets (combatants, civilians taking a direct part in hostilities, and military objectives) and protected persons and objects (civilians, persons hors de combat, and civilian objects). Killer robots will be unable to discern between combatants and non-combatants conclusively, or even at all (Lauwaert, 2021). The contrast between the two statuses serves as the foundation for all rules of international humanitarian law. Aside from the distinction principle, the ability of killer robots to assess the proportionality of an attack is also called into question (Tumay, personal communication, 29 October 2021). Attacks that incidentally harm civilians are not per se unlawful under international humanitarian law, provided that the harm is proportionate to the military advantage anticipated.
However, the ability of killer robots to assess the proportionality of an attack on people remains a major concern for the international community, especially since there are no clear norms or constraints on their development and use in armed conflict.
International humanitarian law does not specifically regulate the use of AWS in armed conflict, so further legal review is necessary as to whether the use of AWS violates its provisions, particularly those concerning the means and methods of war (Wilia & Christianti, 2019). There are two forms of liability in international law: state responsibility and individual responsibility. State responsibility arises when an act of a state violates its obligations under international law, as stated in Article 1 of the Articles on Responsibility of States for Internationally Wrongful Acts: "Every internationally wrongful act of a State entails the international responsibility of that State." Regarding attribution of state conduct, Article 8 states: "The conduct of a person or group of persons shall be considered an act of a State under international law if the person or group of persons is in fact acting on the instructions of, or under the direction or control of, that State in carrying out the conduct." A narrower and more specific provision appears in Article 4, which provides that the conduct of a state organ acting in an official capacity, on the basis of national law authorizing it to act on behalf of the state, is an act of the state; the armed forces of a country are an example. Based on these provisions, if a country deploys its armed forces to carry out attacks using AWS, and errors and violations of international humanitarian law occur, the country can be held accountable internationally.
For a state to be held responsible under international law, there must be an act attributable to the state, damage resulting from the act, a causal link between the act and the damage, and no circumstance that makes the act lawful. When these conditions are met, the state is liable for the wrongful act. Despite the autonomy of these systems, the state is objectively responsible for wrongful acts resulting from their use; however, the scope of this responsibility should be examined separately in terms of the two dimensions of the law of war: jus ad bellum and jus in bello.
The principles of jus ad bellum establish the criteria for determining whether resort to war is justified in a given situation; they concern the period preceding the war. The use of an autonomous weapon system does not in itself make the use of force a wrongful act under jus ad bellum. For the use of force to be lawful, regardless of the weapon used, there must be UN Security Council authorization, the valid consent of the target state, or the conditions of self-defense. Jus in bello, on the other hand, governs the conduct of force within war; civilian immunity, proportionality, and distinction are its most essential requirements. During the period of jus in bello, the rules of international humanitarian law apply.
International humanitarian law envisages a number of rules on the nature and manufacture of weapons, which determine the qualifications required to ensure their proper use under humanitarian law. According to Article 36 of Additional Protocol I to the Geneva Conventions of 1949, the compliance of a weapon with humanitarian law and other international law should be reviewed by the state concerned during the development and manufacturing phase. This provision places the state under an obligation. Similarly, for a weapon to be considered "legal," it must be capable of distinguishing between combatants and civilians. The International Court of Justice stated in its Nuclear Weapons Advisory Opinion that this distinction is one of the cardinal principles of humanitarian law. The use of weapons that cannot make this distinction is prohibited by Article 51, paragraph 4 (b) and (c) of the Additional Protocol. The same article prohibits the use of any weapon that cannot be directed at a specific military objective or whose effects cannot be limited. Whether autonomous weapon systems can in fact distinguish between belligerents and civilians remains a serious concern; indeed, the deaths of dozens of civilians in drone attacks in recent years demonstrate the validity of these fears.
As a result, it is clear that the wrongful use of force through autonomous weapon systems can give rise to responsibility under both jus ad bellum and jus in bello. However, whether the use of such weapons is illegal in general is a question that can only be answered in state practice by evaluation against the aforementioned criteria.
As for individual responsibility, several parties could be considered individually responsible for the misuse of AWS, including combatants, military commanders, programmers, and the AWS designer. The discussion of individual liability in this paper is limited to individual liability under international humanitarian law, so the parties discussed are combatants and military commanders. In international humanitarian law, individual accountability requires proving the mental element, or mens rea, and the physical element (Clark, 2001). Provisions on the mental element are contained in Article 30 of the Rome Statute of the International Criminal Court (hereinafter the 1998 Rome Statute), which states that the mental element consists of intent and knowledge. Intent means the individual means to engage in the conduct, means to cause the consequence, or is aware that the consequence will occur; knowledge means awareness that a circumstance exists or that a consequence will occur. As for the physical element, a criminal act committed by an individual must meet the elements of the crime committed, under Article 25 paragraph (3) of the 1998 Rome Statute. If an individual is proven to have fulfilled both the mental and physical elements of a crime, that person must be held individually responsible.
Combatants, as parties to an armed conflict, must comply with the provisions of International Humanitarian Law, including those that prohibit or limit the use of means and methods of warfare. This is in accordance with Article 28 of the Rome Statute of the International Criminal Court. A combatant may be held liable for the use of AWS if the combatant is aware that operating the AWS would violate International Humanitarian Law. Military commanders can likewise be held individually responsible if the operation of AWS violates International Humanitarian Law (Sehrawat, 2020), because the military commander is the party who decides whether AWS will be deployed in an armed conflict. In addition, military commanders can be held responsible for the wrongful acts of their subordinates, as commanders are obliged to control the conduct of their subordinates. This applies if a commander knows that his subordinates are about to commit a violation but fails to take the necessary action to prevent it.
The concept of command responsibility in Additional Protocol I to the Geneva Conventions of 1977 is regulated in Article 86 paragraph (2) of AP I, which essentially states that a violation committed by a subordinate does not automatically release the superior or commander from punishment (Dahl, 2021). The reasoning is that when a subordinate commits a violation, the subordinate's commander likely knew, or at least had information indicating, that the subordinate had the potential to commit the violation, and should therefore have prevented or repressed it.
Article 86 paragraph (2) of AP I provides that "the fact that a breach of the Conventions or of this Protocol was committed by a subordinate does not absolve his superiors from penal or disciplinary responsibility, as the case may be, if they knew, or had information which should have enabled them to conclude in the circumstances at the time, that he was committing or was going to commit such a breach and if they did not take all feasible measures within their power to prevent or repress the breach" (McCarthy, 2017). The superior-subordinate relationships not covered by this article have been further formulated in Article 28 letter (b) of the Rome Statute. Based on Article 86 paragraph (2) of AP I to the Geneva Conventions of 1977 and Article 28 letters (a) and (b) of the Rome Statute, the elements of command responsibility can be summarized as follows: (a) There must be a relationship between the commander and the subordinates who are suspected or reasonably suspected of committing the crime concerned. "Relationship" here means that the commander and subordinates share a common task in a military environment in which the relationship is vertical, with the commander as senior and the subordinates as juniors.
(b) The commander concerned actually and effectively exercised command or supervision over the subordinates who are suspected or reasonably suspected of committing the crime in question. Effective supervision means that, when a subordinate commits a crime, the commander actually has the material ability to take precautions so that the subordinate does not commit the crime, or to take other preventive measures, such as reporting the matter to the competent authority.
(c) The commander knows, or is deemed to have known, that his subordinates were about to commit or had committed a crime. Under the practice of the International Criminal Court (ICC), the element that "a commander knows or is deemed to have known" must not merely be alleged but must be capable of being proven at trial.
(d) The commander in question failed to take the necessary and reasonable steps to prevent or punish the crime, or to submit the matter to the authorities competent to conduct further investigation. Such a commander will be deemed to have failed to exercise control over his subordinates, thereby allowing the crime to occur.
In order to determine the extent to which a commander can be held responsible for the operation of AWS, the above elements will then be linked to the classification of Autonomous Weapon Systems compiled by Human Rights Watch (HRW). The HRW classification is based on the weapon's level of autonomy, and its purpose is to categorize the various forms of Autonomous Weapon Systems. The classifications are as follows (Scipione, 2021): (a) The first category, "Human-in-the-Loop Weapons," is defined as weapons that can select a specific target or target group but deliver force only upon human command once activated. These weapons can be categorized as Semi-Autonomous Weapon Systems.
(b) The second category, "Human-on-the-Loop Weapons," covers weapon systems that can independently select and attack specific targets. No human decides whether a specific target will be engaged, but a human operator can intervene to halt the operation if needed.
(c) The third category, "Human-out-of-the-Loop Weapons," is defined as weapon systems capable of selecting targets and delivering force without any human input or interaction. Such a weapon system is programmed to autonomously select individual targets and attack them within a pre-programmed area for a certain period.
Linking the elements of command responsibility to the Autonomous Weapon Systems classification compiled by Human Rights Watch, it can be concluded that command responsibility will apply only to weapons classified as "Human-in-the-Loop Weapons" and "Human-on-the-Loop Weapons". Weapons in the "Human-in-the-Loop" category can select individual targets or specific target groups but deliver force only upon human command. Likewise, "Human-on-the-Loop Weapons" can freely select and attack certain targets, but humans can intervene mid-operation to stop it if necessary. For weapons in these two categories, a commander can still be held accountable if their use results in war crimes. For example, a remotely controlled drone is operated by military personnel who act on the orders of their commander; if a soldier subordinate to a commander commits a war crime using such a drone, the principle of command responsibility can be applied. Nevertheless, it remains debated whether the principle of command responsibility can be applied to AWS that fall into the "Human-out-of-the-Loop Weapons" category, because such weapon systems can select targets and deliver force without human input or interaction (Boyles, 2021). However, the principle could possibly still be applied if a commander, or a subordinate of the commander concerned, performs the process of inputting the algorithmic data that determines the weapon's target.

Legal review of autonomous weapons under international humanitarian law
International humanitarian law provides several legal instruments that require new weapons technology to be reviewed, so that it can be determined whether new weapons comply with the general provisions of International Humanitarian Law on restrictions on the means and methods of warfare (Sehrawat, 2020). The first international instrument stipulating the importance of reviewing the legality of new weapons is the Declaration of St. Petersburg of 1868, which provides: "The Contracting or Acceding Parties reserve to themselves to come hereafter to an understanding whenever a precise proposition shall be drawn up in view of future improvements which science may effect in the armament of troops, in order to maintain the principles which they have established, and to conciliate the necessities of war with the laws of humanity." Article 36 of Additional Protocol I of 1977 contains a similar provision, stating that: "In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party." This Article requires States parties to conduct legal reviews of new weapons, means, and methods of warfare at the various stages of their development and deployment, in order to ensure that their use complies with Additional Protocol I of 1977 and other applicable rules of international law. AWS is a new type of weapon that can be legally reviewed under Article 36 of Additional Protocol I of 1977. This is because the scope of new weapons, means, and methods of warfare in that Article is very broad, covering all types of weapons, whether anti-personnel or anti-materiel, lethal or non-lethal, as well as weapon systems.
Article 36 of Additional Protocol I does not further regulate the procedure for conducting a legal review of new weapons. In practice, however, a legal review can be carried out by assessing the design and characteristics of a weapon, and how it is used, against the international treaties binding on the state, customary international law, and other relevant rules, such as the general provisions of International Humanitarian Law on the means and methods of warfare and the special rules of International Humanitarian Law that prohibit or limit the use of certain means and methods of warfare. If no treaty or rule of customary international law is relevant to the weapon under review, the Martens Clause, which encompasses humanitarian principles and the public conscience, can be invoked (Leisure, 2021). Furthermore, the general principles of International Humanitarian Law governing the means and methods of warfare in armed conflict are the principles of proportionality, distinction, military necessity, and limitation.
The principle of proportionality, as set out in Article 51(5) of Additional Protocol I, prohibits any attack on a military target if the expected civilian casualties or losses would be excessive in relation to the anticipated military advantage. To fulfil this principle, the precautionary principle requires military commanders to take all necessary precautions when planning and launching attacks (Marchant, 2020), in order to avoid and minimize the loss of civilian life and damage to civilian objects. The US Air Force maintains that determining the proportionality of an attack is inherently subjective and must be resolved on a case-by-case basis. Similarly, the ICRC commentary states that decisions regarding the proportionality of attacks are based on the common sense and good faith of military commanders (Szpak, 2020). Given its characteristics, AWS does not have a human level of ability to assess the proportionality of an attack, and it is very difficult for AWS to comply with the proportionality principle while processing large amounts of data and responding to unexpected scenarios that differ from its pre-programmed operational parameters. This is because the operating system formed at the development stage does not include provisions on the limits within which an attack may or may not be carried out, or a standard of what is considered proportional for an attack to proceed.
The principle of distinction requires parties to a conflict to distinguish at all times between civilians and members of the armed forces, as well as between civilian and military objects (Winter, 2020). Article 52 of Additional Protocol I establishes the universally accepted definition of a military object, which applies to both international and non-international armed conflict. A lawful object of attack is one that, by its nature, location, purpose, or use, effectively contributes to the enemy's action, and whose neutralization, destruction, or capture would provide a military advantage. It is important to note that even where an object may lawfully be targeted, the core rules of proportionality and precaution must still be followed. The principle of distinction prohibits the use of weapons that cannot distinguish between legitimate military targets and other persons or objects. The analysis required to fulfil this principle is complex and highly contextual. In this regard, Krishnan argues that "distinguishing between a harmless civilian and an armed insurgent could go beyond anything machine perception could possibly do" (Ghasemi, 2014). AWS does not possess the human qualifications needed to identify whether a soldier has become hors de combat in a complex and highly contextual situation, to assess and understand an individual's emotional state, or to judge whether a situation is harmless. Armed forces personnel can assess the entire context thoroughly, whereas an AWS will, by its programming, rely only on particular sensors or aspects of the situation. Nor does the weapon's operating system contain any clear provision or characterization of how the civilian population and those hors de combat must act or behave in order to be distinguished from legitimate military targets.
The principle of military necessity permits parties to a conflict to use such force as is necessary to achieve a definite military advantage, namely weakening or defeating the enemy forces. In its implementation, the principle of military necessity is limited by other principles that must also be met, namely the principles of proportionality and limitation (Cotter, 2018). Under this principle, an AWS must first identify the military target and then assess whether striking it would produce a definite military advantage. If an AWS cannot identify whether a target is a legitimate military target (as opposed to civilians, cultural heritage, medical facilities, or civilian objects), it cannot then decide whether striking that target would result in an immediate and definite military advantage.
The principle of limitation provides that the right of the conflicting parties to choose and use means of warfare to injure their opponents is not unlimited. International Humanitarian Law prohibits or limits the use of weapons which, by their nature and characteristics, cause superfluous injury and unnecessary suffering, are indiscriminate, cause excessive civilian losses and casualties or long-term environmental damage, or are perfidious or treacherous (Law et al., 2019). If AWS continues to be used as a means of warfare, it will violate the principle of limitation, because AWS by its nature and characteristics cannot meet the requirements of the principles of proportionality and distinction.
In the following discussion, the legal review of AWS will be based on the international agreement considered closest to the characteristics of AWS, namely the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (hereinafter the 1980 Convention on Conventional Weapons; Anderson & Waxman, 2017). There is general agreement among participating states that effective human control, or an appropriate level of human judgment, must be maintained in the use of a weapon system to meet legal and ethical requirements. This is difficult for AWS to satisfy, because human involvement is limited to the development and activation stages, while the operational stage requires no human intervention (Altmann & Sauer, 2017). This poses a real threat if an AWS suffers a failure in its operating system, as there is no room for humans to intervene.
As explained earlier, a legal review can also be based on the Martens Clause. The Martens Clause, contained in the Preamble of the Hague Convention IV Respecting the Laws and Customs of War on Land (hereinafter the 1907 Hague Convention IV), reads as follows: "Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience." The Martens Clause is thus intended for issues not regulated by International Humanitarian Law: where there is a void or gap in positive law, the solution must be based on the basic principles of humanity and the dictates of the public conscience (Mauri, 2020). The purpose of the clause is to prevent matters that have not yet been regulated from being left to the arbitrary opinion of military commanders. The principle of humanity requires humane treatment of other individuals and respect for human life and dignity. By its characteristics, AWS fails to respect human dignity, because it bases determinations of human life and death, and the targeting of attacks, on computational algorithms embedded in computer systems. The characteristics of AWS also contradict the public conscience, because the concept of such a weapon system places the use of force and attacks beyond human control.

Conclusion
The use of AWS potentially leads to the violation of certain general principles of international humanitarian law, namely the principles of proportionality, distinction, military necessity, and limitation. It remains questionable whether current AWS operating systems are capable of making assessments in complex situations, such as evaluating the proportionality of an attack while taking the necessary precautions to limit civilian losses and casualties, and distinguishing civilians or civilian objects from military targets, as required by Articles 51(5) and 52 of Additional Protocol I. Furthermore, under the principle of military necessity, an AWS must first identify the military target before determining whether striking it would result in a definite military advantage. If an AWS cannot determine whether a target is a legitimate military target, it cannot subsequently determine whether striking it would result in an immediate and definite military advantage. International humanitarian law prohibits or limits the use of weapons that, by their nature and characteristics, cause superfluous injury and unnecessary suffering, are indiscriminate, cause excessive civilian losses and casualties or long-term environmental damage, or are perfidious or treacherous. In this respect, continuing to use AWS as a weapon will violate the principle of limitation, because AWS, by its characteristics, cannot meet the requirements of the proportionality and distinction principles.
In addition, AWS lacks a sufficient level of human intervention, as human involvement is limited to the development and activation stages. This creates a real threat if an AWS experiences an operating system failure. In such circumstances, AWS fails to respect human dignity and contradicts the public conscience, as it bases decisions over human life and death on algorithmic calculations, and its use of force and attacks lies beyond human control. It is therefore critical to maintain a human presence to control the use of AWS in order to reduce the errors it may cause on the battlefield. To evaluate the extent to which a commander can be held accountable for the operation of AWS, the elements of command responsibility must be connected to Human Rights Watch's classification of Autonomous Weapon Systems. On that basis, it is reasonable to conclude that command responsibility will apply only to weapons classified as "Human-in-the-Loop Weapons" and "Human-on-the-Loop Weapons." Finally, it remains contested whether the notion of command responsibility can be applied to AWS classified as "Human-out-of-the-Loop Weapons." Nevertheless, the principle of command responsibility could still be enforced if a commander, or a subordinate of the commander in question, undertakes the procedure of inputting the algorithmic data that determines the weapon's target.

Funding
This work was supported and funded by the Directorate General of Higher Education, Ministry of Research, Technology, and Higher Education, the Republic of Indonesia.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Citation information
Cite this article as: Command responsibility of autonomous weapons under international humanitarian law, Yordan Gunawan, Muhamad Haris Aulawi, Rizaldy