

Command Responsibility: A Model for Defining Meaningful Human Control


By Matthew T. Miller |*|

In the relatively near future, the United States and other countries are likely to develop varying levels of artificial intelligence (AI) and integrate it into autonomous weapons. |1| There are significant voices, spearheaded by the Campaign to Stop Killer Robots, advocating for a preemptive ban on these weapons. |2| The opponents of lethal autonomous weapon systems (LAWS) argue that it is unethical to allow a machine to decide when to kill and that AI will never be able to adhere to International Humanitarian Law (IHL) obligations. |3| Although this opposition campaign has not yet achieved its goal of a ban, it has prompted considerable debate over the legality of developing and using LAWS. One of the concepts that has arisen in this debate is a legal requirement for meaningful human control (MHC) over LAWS. |4| The idea of MHC has gained traction within discussions at the United Nations Convention on Certain Conventional Weapons (CCW), but the concept has its detractors. |5|

One of those detractors is the United States, whose delegation to the CCW Group of Governmental Experts continues to warn that MHC is an ambiguous term that "obscures rather than clarifies the genuine challenges" related to LAWS. |6| Instead of human control, the U.S. argues that the key issue is ensuring "machines help effectuate the intention of commanders and the operators of weapon systems." |7| The U.S. Department of Defense showed its focus on intent, rather than control, by adopting the policy that "autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." |8|

The difference between meaningful human control and appropriate levels of human judgment may seem trivial to some, but it demonstrates the ambiguity of MHC. Using an ambiguous term can be useful for gaining political and diplomatic consensus, |9| but it has little value when attempting to apply the term as a legal obligation. |10| The United States and others may interpret MHC to require measures that effectuate command intent and maintain human judgment over the use of force, while States that are more hesitant about LAWS may interpret MHC to require direct human control of every possible action by the weapon. |11|

The purpose of this paper is to provide a solution to this ambiguity and offer a workable definition of MHC. The overall thesis is that MHC should be defined as the control necessary to facilitate responsible command. Commanders do not have direct control over each engagement. Rather, command responsibility is based upon a leader's broader control of military operations and responsibility for her forces' adherence to IHL. |12| Therefore, MHC should require that a LAWS be designed to ensure commanders can: 1) understand the capabilities and limitations of the LAWS and convey this information to their forces; 2) limit, at a minimum, the time and space in which the LAWS will operate; and 3) effectively investigate the causes of a LAWS taking unexpected action. |13| Defining MHC through this lens of command responsibility will provide states with a clearer standard that is grounded in a well-developed IHL concept.

To explain how the command responsibility model can be applied to MHC, the paper will begin by defining LAWS and providing an overview of the ways in which humans can interact with autonomous systems. This first section will also describe how a common method for understanding human-machine interaction is to look at where humans are located in the system's decision loop: providing direct input "in the loop"; providing supervision "on the loop"; or being "out of the loop" and unable to provide input. |14| Section II will outline the fundamental IHL principles that are most relevant to LAWS: military necessity; distinction; proportionality; precautions in the attack; and command responsibility.

After the explanation of the key concepts in autonomy and IHL, section III will merge these concepts to demonstrate how MHC can be applied to the design and use of LAWS through the lens of command responsibility. This section will use vignettes to analyze how the level of human control necessary to facilitate responsible command will vary, depending on the capabilities of the LAWS and the circumstances in which it will be used. Section IV will conclude with a discussion on how the command responsibility framework can address concerns that the use of LAWS will prevent accountability for IHL violations. Specifically, this section will argue that a commander's obligations to train her forces and investigate and remediate potential IHL violations will allow for accountability even if a LAWS performs an unforeseen action.

  • I.    Autonomous Systems and Human Interaction in the Decision Loop

The first step in discussing MHC is to provide a working definition of a LAWS. There remains some debate over this topic and, even after five years of work, the CCW Group of Governmental Experts has yet to agree on a definition. |15| Opponents of LAWS define an autonomous weapon as a machine that acts on its "own deliberations, beyond the instructions and parameters its producers, programmers, and users provided to the machine." |16| This definition implies that it is impossible to apply human control to LAWS, because the weapon's actions cannot be contained by its programmers or operators.

The U.S. Department of Defense defines autonomous weapons as those that, "once activated, can select and engage targets without further intervention by a human operator." |17| The International Committee of the Red Cross (ICRC) similarly defines fully autonomous weapons as those that "can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention." |18| Unlike the definition offered by LAWS opponents, the U.S. and ICRC definitions do not remove the possibility that humans may retain some ability to control a LAWS' actions. Therefore, these latter two definitions provide a more effective starting point for the analysis of MHC.

A common method for understanding the range of possible human control over autonomous systems is to look at where humans are located on the autonomous system's decision loop: 1) in the loop; 2) on the loop; or 3) out of the loop. |19| When a human is in the loop, an autonomous system needs human input before acting. This would commonly involve the human identifying a target or giving permission before the weapon can fire. Current examples of "in the loop" systems include guided munitions, such as GPS-guided bombs and cruise missiles, that use autonomous guidance systems to attack a human-selected target. |20| Since an "in the loop" system requires direct human intervention, it does not satisfy the various definitions of LAWS and instead is considered semi-autonomous. |21| Semi-autonomous weapon systems are already in common use, and they are not the focus of this paper's discussion.

A LAWS with humans on the loop will not require direct human input or permission before acting. |22| Instead, the LAWS will select and attack targets while a human monitors the weapon's performance and intervenes to halt its operation, if necessary. |23| The U.S. Patriot Air Defense system is an analogous example of how this kind of "human-supervised" LAWS would operate. In automatic mode, the Patriot selects and engages targets unless the human operator intervenes to abort the launch. |24|

The final category of human-machine interaction is humans out of the loop, which provides no opportunity to intervene in a LAWS' individual acts. Once an operator employs an "out of the loop" LAWS, the system will independently identify, select, and attack targets in accordance with its programming and any additional parameters that have been put in place by the operator. |25| Key military advantages of "out of the loop" LAWS are their ability to operate far faster than a human |26| and accomplish a mission in an environment where enemy jamming cuts off communication with the weapon. |27|
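
These three modes can be summarized as a simple gate on each individual engagement. The sketch below is a minimal, hypothetical illustration in Python; the names HumanRole and may_engage are invented for this paper's taxonomy and do not describe any fielded system.

```python
from enum import Enum, auto

class HumanRole(Enum):
    IN_THE_LOOP = auto()      # semi-autonomous: human input required before acting
    ON_THE_LOOP = auto()      # human-supervised: a human may intervene to halt
    OUT_OF_THE_LOOP = auto()  # fully autonomous: no opportunity for intervention

def may_engage(role: HumanRole, human_approved: bool, abort_signaled: bool) -> bool:
    """Gate a single engagement according to where the human sits on the loop."""
    if role is HumanRole.IN_THE_LOOP:
        # The system cannot fire without direct human input (e.g., target approval).
        return human_approved
    if role is HumanRole.ON_THE_LOOP:
        # The system proceeds on its own unless the supervising human aborts.
        return not abort_signaled
    # Out of the loop: the system acts within its programming and whatever
    # parameters the operator set before launch; no per-engagement input exists.
    return True
```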

In exchange for these advantages, an "out of the loop" LAWS removes direct human control. This absence of direct control is the crux of the debate over what level of control is necessary to uphold IHL obligations. The next section will outline those legal obligations.

  • II.    International Humanitarian Law Obligations and Command Responsibility

When employing a weapon, a military is bound by the IHL principles of military necessity, distinction, and proportionality, and by the duty to take precautions in the attack. |28| Although there are other aspects of IHL that may be applied to the use of force, these four obligations are the most relevant to the questions surrounding LAWS. |29|

Military necessity allows the use of all measures needed to defeat the enemy as quickly and efficiently as possible. |30| However, a military's ability to use force is not unlimited and military necessity does not justify measures that are otherwise prohibited by the laws of war. |31| The remaining IHL principles can be viewed as limitations on military necessity.

The principle of distinction limits military necessity by requiring combatants to direct their attacks only at military targets. |32| Weapons may not be inherently indiscriminate, meaning weapons that by their nature cannot be directed at a military target. |33| Combatants must also use weapons in a manner that differentiates between combatants and civilians, military objectives and civilian objects, and other categories of protected persons and objects. |34|

Although the principle of distinction prohibits directly attacking non-military targets, military necessity justifies incidental harm that is necessary for the successful defeat of the enemy. |35| As such, collateral damage is a tragic but inherent reality of war. |36| The principle of proportionality acknowledges this reality and focuses on ensuring that collateral damage is not excessive. Proportionality requires combatants to refrain from an attack in which the expected loss or injury to civilians and damage to civilian objects incidental to the attack would be excessive compared to the concrete and direct military advantage expected to be gained. |37|

The duty to take precautions in the attack is closely related to the principles of distinction and proportionality. Combatants must do everything feasible to verify that a target is a legitimate military objective. |38| Combatants must take constant care to avoid or minimize collateral damage, when feasible. |39| When planning the means and methods of attack, combatants must first evaluate which weapons or tactics satisfy proportionality and achieve the desired military advantage. If more than one weapon or tactic would achieve the desired advantage and satisfy proportionality, the principle of precautions mandates the option with the least risk of collateral damage. |40|
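
The comparative analysis required by the principle of precautions can be expressed as a simple selection rule. The sketch below is purely illustrative: it reduces legal judgments that in practice demand human deliberation to boolean and numeric placeholders, and every name and value in it is hypothetical.

```python
def select_attack_option(options):
    """Among options that achieve the desired military advantage and satisfy
    proportionality, choose the one with the least risk of collateral damage.
    If no option qualifies, refrain from the attack as planned."""
    lawful = [o for o in options
              if o["achieves_advantage"] and o["satisfies_proportionality"]]
    if not lawful:
        return None
    return min(lawful, key=lambda o: o["collateral_risk"])

# Hypothetical planning inputs for illustration only.
options = [
    {"name": "option A", "achieves_advantage": True,
     "satisfies_proportionality": True, "collateral_risk": 0.08},
    {"name": "option B", "achieves_advantage": True,
     "satisfies_proportionality": True, "collateral_risk": 0.03},
]
print(select_attack_option(options)["name"])  # option B, the lower-risk choice
```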

The legal obligations discussed above belong to all combatants, but commanders have a special role within the IHL framework. Commanders have fundamental control over military operations and, as such, have responsibility for their forces' adherence to IHL. |41| Commanders uphold IHL obligations through proper planning, battlefield decision-making, and the application of reasonable controls over their subordinates' use of force. Common control measures may include issuing rules of engagement, applying geographic and time constraints to operations, designating protected areas that may not be attacked, and raising the level of authority necessary to approve measures with significant collateral damage concerns. |42|

Commanders also have the responsibility to ensure subordinates understand their IHL obligations and to establish a climate that upholds those obligations. |43| Commanders fulfill this responsibility through proper training of their forces in IHL and by reporting alleged war crimes to competent authorities to ensure investigation and appropriate action to punish crimes and prevent future IHL violations. |44|

These broad command authorities and obligations constitute the basis for using command responsibility as an IHL compliance mechanism. Individual combatants, to include commanders, may be criminally prosecuted for their own IHL violations. |45| Command responsibility provides an additional framework under which commanders may also be held criminally liable for their subordinate forces' crimes. Commanders can be held liable under the theory of command responsibility if they knew, or should have known, of their subordinates' violations and failed to take necessary and reasonable measures within their power to prevent, report, and punish those violations. |46|

Commanders not only have a duty to act once they know about a potential problem, they also have a duty to seek out information that is reasonably available to them. |47| This duty prevents commanders from unreasonably relying on assurances from their superiors or subordinates when the commander should have known the information was not reliable.

When looking at superior-subordinate issues, it is also important to understand that command responsibility does not solely rest upon the lowest-level commander. Militaries are organized with many levels of command, ranging from the front-line commander to a state's commander-in-chief. Command responsibility applies to all levels of command and senior civilian leadership of the military. |48|

  • III.    Applying Command Responsibility to Meaningful Human Control

As discussed above, a commander's IHL obligations are not defined by her direct control over each use of a weapon, each pull of the trigger. |49| Instead, a commander's IHL obligations are based upon her control over the whole military operation or attack. |50| Therefore, viewing MHC through the lens of command responsibility does not necessarily require direct human control over each of a LAWS' uses of force. Instead, MHC would require LAWS to be designed to allow commanders to apply controls to the overall use of the weapon that are necessary and reasonable to prevent IHL violations.

To analyze what controls are necessary and reasonable, a commander must understand the capabilities of the LAWS. Georgetown Law Professor Michael Meier, who serves as the senior civilian law of war advisor to the U.S. Army Judge Advocate General, emphasizes that "when looking at the lawful use of an autonomous weapon, the first thing a commander must consider is what the platform was designed to do and what testing has shown the platform to be able to reliably and consistently do." |51|

This information will be obtained when a LAWS is tested prior to a State's review of the new weapon system. Article 36 of Additional Protocol I requires States to determine whether new weapons "would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law." |52| Part of this review involves determining whether the new weapon would be inherently indiscriminate or require legal restrictions on its use. |53| To make this determination, States need to test the technical performance of the weapon to assess its accuracy, reliability, and foreseeable effects when used for its intended purpose. |54|

This initial need to understand a LAWS' capabilities is no different than with any new weapon that enters a military's arsenal. However, the complexity of AI will likely require national-level command to implement a significant training regime before allowing commanders to use LAWS in combat. |55| To comply with the key principles of distinction and proportionality, commanders will need to understand how reliably an autonomous system can identify military targets and its tested rates of falsely identifying civilian objects as military targets. |56|

A LAWS' rate of false positives would not necessarily make it inherently indiscriminate, unless the rates are so high that commanders could not direct it at a military target under any battlefield conditions. |57| However, false positives could limit the circumstances in which a commander could lawfully use a LAWS. False positives would also require a commander to apply sufficient control measures, and other feasible precautions, to ensure the LAWS was used in a manner that satisfies distinction and proportionality.

To discuss the types of control measures that may be necessary for MHC, consider the use of a notional LAWS that is designed to destroy enemy tanks. After considerable testing in real world conditions, this LAWS is shown to reliably and consistently identify enemy tanks and destroy them with precision-guided missiles. If this LAWS had a false-positive rate that was significantly lower than a human's, a commander may conclude that she could use the weapon in accordance with IHL obligations with minimal controls. In fact, just as commanders use precision-guided weapons to minimize collateral damage, a commander may be able to consider the use of an exceptionally reliable LAWS as a means to fulfill her requirement to take all feasible precautions in the attack. |58| However, for purposes of this scenario, the paper will assume testing shows the LAWS to have a false positive rate slightly worse than a human's.

In order to use the anti-tank LAWS in accordance with IHL obligations, MHC would require the commander to at least be able to apply geographic and time limits to the LAWS' actions. Professor Meier acknowledges that "a commander asserts a lot of discretion over the use of force through the planning process by implementing precautions that reduce risk and ensure an attack meets proportionality standards." |59| In order to properly plan the use of a LAWS in an attack, a commander must at a minimum be able to dictate when and where it will operate.

For example, enemy tanks may use major roadways to quickly travel around the battlefield. These roads may intersect with towns or cities with dense civilian populations. If the LAWS' false positive rate presents an excessive risk to civilians and civilian objects, then the commander could limit the LAWS to operating on parts of the road that are far from towns. This limitation would maximize the LAWS' ability to verify enemy targets and satisfy proportionality by ensuring the risk to civilians was not excessive. Where feasible to meet the mission objectives, the commander could also take the precaution of limiting the LAWS to operating during times when civilian traffic on the road is low. By containing the LAWS to a part of the road, the commander can further reduce risk to civilians by planning concurrent operations, such as roadblocks, that would prevent civilians from entering the area in which the LAWS is operating.
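
The value of these geographic and time limits can be seen in a back-of-the-envelope calculation. The figures below are invented solely to illustrate the logic; no real system's false-positive rate is implied.

```python
# Hypothetical values for illustration only.
false_positive_rate = 0.02  # assumed chance of misidentifying a civilian vehicle

# Expected misidentifications scale with the number of civilian vehicles the
# LAWS encounters, which the commander controls through where and when the
# weapon is permitted to operate.
encounters_near_town = 500  # civilian vehicles per day near a dense town
encounters_remote = 20      # civilian vehicles per day on a remote stretch

print(false_positive_rate * encounters_near_town)  # 10.0 expected per day
print(false_positive_rate * encounters_remote)     # 0.4 expected per day
```

The same weapon, with the same tested rate, presents a very different expected risk to civilians depending on the constraints the commander imposes on where and when it operates.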

Controlling the area and timing of LAWS operations is also essential for conducting the required comparative analysis of means and methods of the attack. While planning this operation, the commander may consider other weapons, such as the AH-64 Apache attack helicopter, that could also destroy the enemy tanks along the road. |60| Even if the use of helicopters presents a lower risk of collateral damage, they may not provide the same military advantage as the LAWS. Rather than spreading her helicopters across the battlefield, the commander may want to use LAWS on roads so she can focus the helicopters on supporting her infantry in the towns, where human pilots are needed to better discriminate between enemy forces and the dense civilian population. The commander can conduct this kind of planning only if she can constrain the LAWS to operate solely in areas where the attack would satisfy proportionality.

If the commander wishes to use the anti-tank LAWS closer to the towns, MHC may require that she be able to apply additional controls beyond geography and time. To ensure compliance with the principles of distinction and proportionality, a commander could identify specific areas that the LAWS may not fire upon, such as highly populated parts of the town or protected medical and religious buildings. |61| The U.S. military already uses digital systems to implement these types of controls across the battlefield and provide safeguards against combatants inadvertently attacking a protected location. |62| If the LAWS is able to access these digital safeguards, that may provide commanders with the necessary and reasonable control needed for MHC.
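
In principle, such a digital safeguard reduces to a containment check of each prospective aim point against a list of protected locations. The following sketch is a hypothetical illustration, not a description of AFATDS or any fielded system; the NoFireArea format and the coordinates are invented.

```python
import math
from dataclasses import dataclass

@dataclass
class NoFireArea:
    """A protected location the weapon may never fire upon (hypothetical format)."""
    name: str
    lat: float
    lon: float
    radius_m: float

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters via the haversine formula."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def target_clear(no_fire_areas: list, lat: float, lon: float) -> bool:
    """Refuse any engagement whose aim point falls inside a no-fire area."""
    return all(distance_m(a.lat, a.lon, lat, lon) > a.radius_m for a in no_fire_areas)

# Invented coordinates: a 500-meter no-fire area around a hospital.
areas = [NoFireArea("hospital", 36.2048, 43.9921, 500.0)]
print(target_clear(areas, 36.2050, 43.9925))  # False: inside the protected radius
```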

Depending on the capabilities and false-positive rates of the anti-tank LAWS, the above control measures may still be insufficient for a commander to reasonably prevent disproportionate attacks in towns. If the commander cannot rely on the LAWS to distinguish between enemy tanks and civilian vehicles in an urban environment, then the commander would need more direct control over LAWS in order to prevent indiscriminate or disproportionate attacks.

To address the need for more control, Professor Meier foresees the possibility that militaries may utilize LAWS in complex environments by relying on human-machine teaming. "Human-machine teaming will allow the military to rely on the relative strengths of both humans and artificial intelligence." |63| Depending on the capability of an AI, a human may have a greater ability to identify irregular enemy forces and conduct a proportionality analysis for each engagement. |64| But like modern precision munitions, the LAWS could engage an approved target faster and more accurately than the human. |65| This type of teaming would necessitate the use of either a semi-autonomous weapon or an "on the loop" human-supervised LAWS. |66|

When planning for the use of human-supervised LAWS, commanders will need to take into consideration the growing threat of enemy jamming. |67| As discussed in section I, the risk of jamming provides incentive for employing "out of the loop" autonomous systems that can accomplish an attack even when cut off from human operators. However, if MHC under certain circumstances requires human supervision over LAWS, jamming presents the risk that commanders may not be able to maintain that supervision.

To maintain MHC in an area with jamming, "on the loop" LAWS may need to be designed to allow commanders to dictate what actions the LAWS should take if cut off from human supervision. If a commander determines that the circumstances of a mission legally justify the use of the LAWS without human supervision, then the commander could instruct the LAWS to continue the mission in the event of a breakdown in communication. If the circumstances require human supervision to uphold IHL obligations, then the commander will need to dictate that the LAWS stop attacking targets if cut off. |68|
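
This kind of commander-selected fallback could be represented as a pre-mission setting that the LAWS consults upon losing its link. Again, this is a hypothetical sketch; the LostLinkBehavior setting and its options are invented for illustration.

```python
from enum import Enum, auto

class LostLinkBehavior(Enum):
    CONTINUE_MISSION = auto()  # lawful only where the commander has determined
                               # that unsupervised operation satisfies IHL
    HOLD_FIRE = auto()         # required where human supervision is the control
                               # measure relied upon for meaningful human control

def on_link_lost(behavior: LostLinkBehavior) -> str:
    """Return the action the system takes when cut off from its supervisor."""
    if behavior is LostLinkBehavior.HOLD_FIRE:
        return "cease engaging targets; loiter or return to base"
    return "continue the mission within pre-set geographic and time limits"

print(on_link_lost(LostLinkBehavior.HOLD_FIRE))
```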

If an "on the loop" LAWS cannot be protected against jamming or programmed with cut-off instructions, then commanders will likely need to plan operations in jammed environments as if the LAWS was fully autonomous. As discussed above, this will not prevent a commander from ever using the LAWS, but it will restrict the circumstances in which the commander may determine the use is lawful.

  • IV.    Command Responsibility and Accountability for Unintended Actions

Command responsibility not only provides a lens through which to view MHC, but also a method for ensuring accountability if a LAWS performs an action that may violate IHL obligations. If that occurs, the commander will have a duty to report the incident to appropriate authorities and conduct an investigation. |69| Due to the complexity of AI, this duty to investigate will likely belong to a high echelon of command with access to necessary subject-matter experts. |70| This investigation would allow higher command to assess whether the relevant commanders applied appropriate controls over the operation of the LAWS. If the commander failed to take necessary measures to prevent the LAWS' inappropriate use of force, then she may be held criminally liable. |71|

Investigations would also need to determine whether commanders satisfied their mutually supporting duties to properly train their subordinates and seek out information that is reasonably available to them. |72| If a commander claims that she used LAWS in a certain manner because higher authority provided an inaccurate assessment of the weapon's reliability, this will likely not absolve everyone of liability. In that circumstance, the higher commander may be disciplined for failing to properly train her subordinates on the weapon's capabilities. Or, if the facts show that the lower commander should have known of the LAWS' limitations, then command responsibility could hold her liable for what she reasonably should have known. |73|

Even among those who recognize the applicability of command responsibility, there are arguments that autonomous weapons create a loophole in the disciplinary system. |74| Even when commanders use a LAWS for the purpose for which it was designed and tested, and apply all reasonable control measures, the AI may make a completely unforeseeable decision. Under those circumstances, the commander could not be liable for the action because she did not have reason to know that it would occur. |75|

This scenario is only possible the first time the autonomous weapon takes unintended action because that is the only time it would be truly unforeseeable. Also, this scenario is not a loophole. It is an inherent aspect of all technology used on the battlefield. For example, a commander may use a satellite-guided bomb to attack a target because that technology has been designed and tested to provide precision attacks. |76| If there is an unexpected error in the guidance system, that bomb may unintentionally strike a nearby civilian object instead of the military target. This scenario would be tragic, but likely not a violation of IHL because the commander and pilot did not intend to target the civilian object and reasonably relied on the bomb's precision-guidance technology as a means to avoid or minimize incidental loss to civilian life. |77|

Although the guidance error may not be an IHL violation, it would trigger command responsibility to investigate the incident to determine why the bomb went astray and take all reasonable actions to prevent it from happening in the future. |78| If a commander fails to investigate the incident and continues to use the bombs in circumstances where there is a risk of malfunction, she will be violating her IHL command obligations and could be criminally liable. |79|

Unforeseen incidents would also require the commander to reevaluate her confidence in what the LAWS can reliably and consistently do. In turn, this will change the analysis over whether she applied necessary and reasonable measures to prevent IHL violations. As Professor Meier aptly summarizes:

It all comes down to whether the commander's confidence in the system is reasonable. The first time an accident happens, it may not be a violation of [IHL]. But if it keeps happening and nothing is done to prevent it, a commander will have a difficult time arguing that the problem is unforeseeable. |80|

One final issue associated with accountability of LAWS is the fact that the complexity of AI currently makes it difficult, if not impossible, to reverse-engineer the causes of an AI's action. To address this concern, many organizations are working to create "understandable AI," which provides human operators with the ability to review the basis for an AI's actions. |81| This capability will be essential for the lawful use of LAWS, because without it, an investigation will be unable to determine why an AI made an unforeseen decision. Without that knowledge, commanders will likely have only two options: 1) significantly limit the circumstances in which they use LAWS; or 2) determine they can no longer use the weapon lawfully under any circumstances.

Conclusion

The introduction of LAWS on the modern battlefield may appear to strain the IHL framework by having machines carry out a function that has previously only been done by humans: selecting and engaging targets. But the use of LAWS will not allow commanders to abdicate their responsibility to ensure their forces uphold IHL obligations. Commanders will remain obligated to take necessary and reasonable measures to prevent and suppress violations of IHL by their forces. Therefore, MHC should be defined as the control necessary for commanders to satisfy this obligation.

To maintain responsible command, LAWS must be designed to ensure commanders can understand the purpose, capabilities, and limitations of the system. The level of direct control necessary to maintain command responsibility will depend on the purpose and capabilities of the LAWS and the circumstances in which it is intended to be used. At a minimum, MHC requires that a commander be able to apply geographic and time constraints in order to limit a LAWS' use to the circumstances that will uphold distinction and proportionality. To use LAWS in more complex and civilian-saturated environments, MHC may require that commanders have the ability to apply additional control measures or human supervision.

In addition to providing a working definition for MHC, command responsibility also provides a mechanism for accountability when using LAWS. Commanders may be held criminally liable if they failed to properly train their forces on the weapon's reliability or failed to apply the types of controls necessary to prevent IHL violations. If a LAWS takes an unforeseeable action, despite commanders taking all necessary precautions, commanders may still be criminally liable if they fail to investigate the incident and take action to prevent further unintended uses of force.

However, in order to facilitate this aspect of command responsibility, LAWS will likely need to have "understandable AI" so that investigations are able to determine the causes of the AI's unforeseen actions.

Future advances in AI may provide LAWS with capabilities beyond our imagination, but the nature of IHL will remain the same. The responsibility for armed forces to carry out military operations in accordance with IHL obligations will ultimately rest, as it always has, on the shoulders of commanders. This command responsibility is a well-developed concept in IHL and should provide the framework for assessing MHC.


[Source: Major Matthew Miller, Journal of National Security Law and Policy, Vol. 11, pp. 533-546, Washington DC, USA, 02Feb21]

Notes:

|*| Major Matthew Miller is a Judge Advocate in the U.S. Army and currently serves as the Chief of the Operational Law Branch in the National Security Law Division of the Army's Office of The Judge Advocate General. Major Miller holds a Master of Laws (LL.M.) in National Security Law from the Georgetown Law Center and an LL.M. in Military Law from The Judge Advocate General's Legal Center and School. The views expressed in the paper are the author's alone and do not necessarily reflect those of the author's employer. © 2021, Matthew T. Miller. [Back]

|1| Melissa K. Chan, China and the U.S. are Fighting a Major Battle Over Killer Robots and the Future of AI, TIME, Sep. 13, 2019, https://perma.cc/62ZU-4FUZ. [Back]

|2| See Campaign To Stop Killer Robots, https://perma.cc/9RGG-A6ZU (providing an overview of the campaign and its goals). [Back]

|3| Human Rights Watch, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots 21 (2018), https://perma.cc/9WDZ-X655. [Back]

|4| See Hayley Evans, Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings, LAWFARE (Apr. 9, 2018, 9:00 AM), https://perma.cc/9ARQ-3EHA (discussing numerous states' references to meaningful human control). [Back]

|5| See Karl Chang, U.S. Mission to Int'l Orgs. in Geneva, Consideration of the Human Element in the Use of Lethal Force, Address Before the Convention on Certain Conventional Weapons Group of Governmental Experts on Emerging Technologies in the Area of LAWS (Mar. 26, 2019) (discussing his skepticism over the ability to determine the level of human control that is necessary to comply with International Humanitarian Law). [Back]

|6| Id. [Back]

|7| Id. [Back]

|8| U.S. Dep't of Def., Dir. 3000.09, Autonomy in Weapon Systems 2 (Nov. 21, 2012) [hereinafter Dir. 3000.09]. [Back]

|9| See, e.g., Rebecca Crootof, A Meaningful Floor for Meaningful Human Control, 30 Temple Int'l & Comp. L.J. 53, 54 (2015), available at https://sites.temple.edu/ticlj/files/2017/02/30.1.Crootof-TICLJ.pdf. [Back]

|10| See Merel Ekelhof, Autonomous Weapons: Operationalizing Meaningful Human Control, Int'l Comm. of the Red Cross (Aug. 21, 2018), https://perma.cc/2G2F-PA2P (explaining that abstract concepts about human supervision provide little value if they do not address the reality of military application). [Back]

|11| Crootof, supra note 9, at 54. [Back]

|12| See U.S. Dep't of Def., DoD Law of War Manual § 18.4 (Dec. 2016) [hereinafter Law of War Manual]. [Back]

|13| Id.; see Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, art. 87, June 8, 1977, 1125 U.N.T.S. 3 [hereinafter Additional Protocol I] (discussing commanders' responsibility to ensure their subordinates are aware of their legal obligations and take necessary steps to prevent violations); see generally Law of War Manual, supra note 12, § 19.20.1 (discussing how the United States has not ratified Additional Protocol I, but supports many of its provisions because they comply with longstanding U.S. practice or are based upon customary law principles). [Back]

|14| Paul Scharre, Army of None: Autonomous Weapons and the Future of War 28-30 (2019); Paul Scharre & Michael C. Horowitz, Center for a New Am. Sec., Working Paper: An Introduction to Autonomy in Weapon Systems 6 (2015), https://perma.cc/T4GP-PEBS. [Back]

|15| Telephone Interview with Michael Meier, Professor, Georgetown Univ. L. Ctr. (Oct. 23, 2019) (conveying experiences as a member of the U.S. delegation to the Group of Governmental Experts) [hereinafter Interview]. [Back]

|16| Amitai Etzioni & Oren Etzioni, Pros and Cons of Autonomous Weapon Systems, Mil. Rev., May-Jun. 2017, at 72, 79, https://perma.cc/M9F8-FWZ2. [Back]

|17| Dir. 3000.09, supra note 8, at 13. [Back]

|18| Int'l Comm. of the Red Cross, Views of the International Committee of the Red Cross (ICRC) on Autonomous Weapon System 1 (2016), https://perma.cc/9ZKL-YMGZ. [Back]

|19| Scharre, supra note 14, at 26-34; Scharre & Horowitz, supra note 14, at 8-14. [Back]

|20| Scharre & Horowitz, supra note 14, at 8-12. [Back]

|21| Dir. 3000.09, supra note 8, at 14. [Back]

|22| Scharre, supra note 14, at 44 (describing a semi-autonomous weapon that does not need to ask permission before attacking a target, but the human operator can intervene when necessary); Scharre & Horowitz, supra note 14, at 12-13. [Back]

|23| Scharre, supra note 14, at 44; Scharre & Horowitz, supra note 14, at 12-13. [Back]

|24| John K. Hawley, Patriot Wars: Automation and the Patriot Air and Missile Defense System, Ctr. for a New American Sec. (Jan. 25, 2017), https://perma.cc/K228-G4M2. [Back]

|25| Scharre &  Horowitz, supra note 14, at 13-15. [Back]

|26| See Michael T. Boulet, The Autonomous Systems Tidal Wave, 22 Lincoln Lab'y J., no. 2, 2017, at 18, 19, https://perma.cc/YX5X-UMHT (discussing how artificial intelligence accomplishes great speed by decoupling humans from decisions and leveraging computing capabilities). [Back]

|27| See Courtney Kube, Russia Has Figured Out How to Jam U.S. Drones in Syria, Officials Say, NBC News (Apr. 10, 2018), https://perma.cc/5ZYM-7J4X. [Back]

|28| See Int'l Comm. of the Red Cross, Fundamental Principles of IHL, https://perma.cc/GG8M-SZSC; Law of War Manual, supra note 12, § 2, § 5.2.3. [Back]

|29| U.S. Working Paper, Implementing International Humanitarian Law in the Use of Autonomy in Weapon Systems ¶3, CCW/GGE.1/2019/WP.5 (Mar. 28, 2019). [Back]

|30| Law of War Manual, supra note 12, § 2.2. [Back]

|31| Id.; Hague Convention (IV) Respecting the Laws and Customs of War on Land and its Annex: Regulations Concerning the Laws and Customs of War on Land art. 22, Oct. 18, 1907, 36 Stat. 2277 (declaring that "the right of belligerents to adopt means of injuring the enemy is not unlimited"). [Back]

|32| Law of War Manual, supra note 12, § 2.5; Additional Protocol I, supra note 13, art. 48. [Back]

|33| Int'l Comm. of the Red Cross, Customary IHL Rule 71: Weapons That Are by Nature Indiscriminate, https://perma.cc/P9LJ-MUGS [hereinafter Customary IHL Rule 71]. [Back]

|34| Law of War Manual, supra note 12, § 2.5.2; Additional Protocol I, supra note 13, art. 48. [Back]

|35| Law of War Manual, supra note 12, § 2.2.1. [Back]

|36| Law of War Manual, supra note 12, § 2.4.1.2. [Back]

|37| Additional Protocol I, supra note 13, art. 51(5)(b); Law of War Manual, supra note 12, § 5.12; see Int'l Comm. of the Red Cross, Customary IHL Rule 14: Proportionality in Attack, https://perma.cc/EB49-FH7B; see also U.S. Dep't of Army, Field Manual 6-27, Commander's Handbook on the Law of Land Warfare ¶ 1-46 (Aug. 7, 2019) (discussing how the U.S. Army and Marine Corps explain this principle to military commanders). [Back]

|38| Additional Protocol I, supra note 13, arts. 57-58; Law of War Manual, supra note 12, § 5.11. [Back]

|39| Additional Protocol I, supra note 13, art. 57; Law of War Manual, supra note 12, § 5.11. [Back]

|40| Geoffrey S. Corn, Eric Talbot Jensen, Victor Hansen, M. Christopher Jenks, & Richard Jackson, The Law of Armed Conflict: An Operational Approach 60 (2d ed. 2019). [Back]

|41| Id. at 597. [Back]

|42| See Nat'l Sec. L. Dep't, The Judge Advocate Gen.'s Legal Ctr. & Sch., U.S. Army, Operational Law Handbook, 79-96 (2018) (providing an overview of how commanders use rules of engagement and other controls). [Back]

|43| Corn et al., supra note 40, at 596-97 (describing commanders as the focal point of military discipline and the person who must make sure that his unit conducts military operations in compliance with the law of armed conflict). [Back]

|44| Additional Protocol I, supra note 13. [Back]

|45| Corn et al., supra note 40, at 571-88 (providing an overview of the ways in which a member of the United States military may be prosecuted for violating IHL). [Back]

|46| Int'l Comm. of the Red Cross, Customary IHL Rule 153: Command Responsibility for Failure to Prevent, Repress or Report War Crimes, https://perma.cc/QSB5-HEBH; Additional Protocol I, supra note 13, at arts. 86-87; Statute of the International Tribunal for the Former Yugoslavia art. 7(3), S.C. Res. 827, U.N. Doc. S/RES/827 (May 25, 1993) [hereinafter ICTY Statute]; Statute of the International Criminal Tribunal for Rwanda art. 6(3), S.C. Res. 955, U.N. Doc. S/RES/955 (Nov. 8, 1994) [hereinafter ICTR Statute]; Rome Statute of the International Criminal Court art. 28, July 17, 1998, 2187 U.N.T.S. 3 [hereinafter Rome Statute]. [Back]

|47| Corn et al., supra note 40, at 611. [Back]

|48| Corn et al., supra note 40, at 600-01 (explaining that criminal liability through command responsibility is not defined by level of command, but is derived from a commander's relationship to subordinates); see Rome Statute, supra note 46, art. 28(b) (describing how the command responsibility standard applies to civilian supervisors). [Back]

|49| See U.S. Working Paper, supra note 29, ¶4. [Back]

|50| See id. [Back]

|51| Interview, supra note 15. [Back]

|52| Additional Protocol I, supra note 13, art. 36. [Back]

|53| Law of War Manual, supra note 12, § 6.2.2. [Back]

|54| Int'l Comm. of the Red Cross, Report: A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977 ¶ 1.3.2 (2006). See Dir. 3000.09, supra note 8, at 6 (outlining the U.S. policy on ensuring that new LAWS undergo rigorous testing to ensure the systems "function as anticipated in realistic operational environments against adaptive adversaries and are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system"). [Back]

|55| See Dir. 3000.09, supra note 8, at 9-12 (assigning responsibility to the Under Secretary of Defense for Personnel and Readiness, Secretaries of Military Departments, and Combatant Commanders to plan, implement, and verify training for the use of LAWS). U.S. policy also mandates that these senior leaders ensure that there is adequate training and information on the tactics, techniques, and procedures of the LAWS' use to allow "commanders and operators to exercise appropriate levels of human judgment in the use of force and to employ systems with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable ROE." Id., encl. 3. [Back]

|56| U.S. Working Paper, supra note 29, ¶17 (C). [Back]

|57| Customary IHL Rule 71, supra note 33. [Back]

|58| U.S. Working Paper, supra note 29, ¶8 (C). [Back]

|59| Interview, supra note 15. [Back]

|60| See Sebastien Roblin, Aerial Assassin: Why No Helicopter Can Compare to the AH-64 Apache, Nat'l Interest (Jul. 6, 2019), https://perma.cc/C7ZB-ELUW. [Back]

|61| See Joint Chiefs of Staff, Joint Pub. 3-09, Joint Fire Support, at III-15, GL-4 (Apr. 10, 2019) (describing the tools used to protect certain areas, such as No-Fire-Areas and No-Strike-Lists). [Back]

|62| See Advanced Field Artillery Tactical Data System (AFATDS), U.S. Army (2020), https://perma.cc/83T7-MEDL. [Back]

|63| Interview, supra note 15. [Back]

|64| See Hum. Rts. Watch, Losing Humanity: The Case Against Killer Robots 29 (2012), https://perma.cc/NQ8X-B2BN (discussing how there are doubts that AI will be able to effectively balance the moral and legal aspects of proportionality, even if engineers develop advanced ethical programming). [Back]

|65| Scharre & Horowitz, supra note 14, at 11. [Back]

|66| Interview, supra note 15. [Back]

|67| See Michael R. Gordon & Jeremy Page, China Installed Military Jamming Equipment on Spratly Islands, U.S. Says, WALL ST. J. (Apr. 9, 2019), https://perma.cc/3AXB-P8RE; Kube, supra note 27. [Back]

|68| See Google Developing Kill Switch for AI, BBC News (Jun. 8, 2016), https://perma.cc/3SNE-CFTB (discussing efforts to allow humans to prevent AI from acting outside of the programmers' intended limits). U.S. policy directly addresses this concern for semi-autonomous systems that are intended to use lethal force and requires these systems to be "designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator." Dir. 3000.09, supra note 8, at 3. [Back]

|69| Additional Protocol I, supra note 13, art. 87(1); Rome Statute, supra note 46, art. 28(a)(ii). [Back]

|70| Richard J. Sleesman & Todd C. Huntley, Lethal Autonomous Weapon Systems: An Overview, 1 Army L. 32, 34 (2019) (discussing the possibility that all autonomous weapon incidents will require centralized national-level investigation because of the complexities of artificial intelligence). [Back]

|71| Additional Protocol I, supra note 13, art. 86(2). [Back]

|72| Additional Protocol I, supra note 13, art. 87(1). [Back]

|73| Additional Protocol I, supra note 13, art. 86(1); ICTY Statute, supra note 46, art. 7(3); ICTR Statute, supra note 46, art. 6(3). [Back]

|74| See Rebecca Crootof, War Torts: Accountability for Autonomous Weapons, 164 U. Pa. L. Rev. 1347 (2016), https://perma.cc/Y54Q-D7A6. [Back]

|75| Id. at 1379-81. [Back]

|76| See Precision Weapons, Raytheon (2020), https://perma.cc/XQ8F-9SCD (providing examples of GPS-guided munitions). [Back]

|77| Additional Protocol I, supra note 13, art. 57(2)(a)(ii). [Back]

|78| Additional Protocol I, supra note 13, art. 87(1); Rome Statute, supra note 46, art. 28(a)(ii); See U.S. Dep't of Def., Dir. 2311.01, DoD Law of War Program ¶ 4.2 (July 2, 2020) (describing United States policy that commanders must investigate alleged violations of the Law of War when they are based on credible evidence). [Back]

|79| See Rome Statute, supra note 46, art. 28(a)(ii) (discussing how, even if the commander did not have the capability to properly investigate the technical nature of the bomb, she has an obligation to report it to appropriate authority for investigation). [Back]

|80| Interview, supra note 15. [Back]

|81| See Mike Wheatley, Google's Explainable AI Service Sheds Light on How Machine Learning Models Make Decisions, SILICONANGLE (Nov. 21, 2019, 9:10 PM), https://perma.cc/S6Q9-V3FY. [Back]

