International Governance of Autonomous Military Robots (Page 2)

12 Dec 2012

The advent of robots that are at once lethal and autonomous will have a significant impact on the use of hard power. As a result, multinational dialogue on how to govern the use of such systems should occur before they are widely deployed, or so argues the Autonomous Robotics thrust group.

(Continued from Page 1)

Ethical and Policy Aspects

There are numerous ethical, policy, and legal issues relating to the creation and deployment of lethal autonomous robots (LARs). Although an exhaustive list will not be offered here, a number of the key issues are outlined below. Rather than defending a particular point of view on the technology, the primary aim is to promote awareness of these issues and to encourage lawyers, policymakers, and other relevant stakeholders to consider what legal and regulatory responses to LARs may be appropriate as the technology develops.

Responsibility and Risks

Australian philosopher Robert Sparrow has been a prominent voice in debates about the ethics of lethal autonomous robots. For instance, he examines the complexities of assigning ethical and legal responsibility to someone, or something, if an autonomous robot commits a war crime.[xxiii] He considers several possible answers to this puzzle, including “the programmer,” “the commanding officer,” and “the machine” itself, but concludes that each option faces profound difficulties.[xxiv] Whether or not Sparrow is correct, assigning responsibility for a LAR’s behavior (or misbehavior) is an important matter that warrants further investigation. Along related lines, Peter Asaro doubts whether a robot can be punished in any meaningful way, since it is unlikely to possess any form of moral agency; he observes that traditional notions from criminal law such as “rehabilitation” and “deterrence” do not seem applicable here.[xxv]

One of the principal justifications for relying on autonomous robots is that they would have access to far more information than any human soldier. Yet this advantage raises key questions, including whether a robot should ever be permitted to refuse an order and, if so, under what conditions. Refusing an order on the battlefield is of course a serious matter, but should a robot be given more latitude to do so than a human soldier?

Battlefield units that mix human soldiers and LARs will raise additional ethical issues relating to risk and responsibility. According to Sparrow, “human beings will start to be placed in harm’s way as a result of the operations of robots.”[xxvi] But will soldiers genuinely be aware of the kinds of risks they are exposed to when working alongside robotic counterparts? If not, Sparrow fears that soldiers might place too much trust in a machine, assuming, for example, that it has completed its assigned tasks.[xxvii] And even if soldiers are fully aware of the risks of relying on robotics, how much additional risk exposure is justifiable?

Legal Status of Civilians

In any treatment of LARs, the question of the potential liability of civilians must be considered. Civilians initially responsible for empowering or placing LARs may not be absolved of legal responsibility should the LAR produce unintended consequences. For example, it is entirely possible that a failure to recognize and honor an attempt to surrender could constitute a violation of the Law of Armed Conflict (“LOAC”). If a civilian software writer failed to write code that recognized the right to surrender through a flag of truce or other means, a “reach back” to civilian liability for the breach might be possible. Likewise, if the civilian software writer failed to program the system to identify legally protected structures such as places of worship, hospitals, or civilian schools, the code writer may be subject to potential liability in those scenarios as well.
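To make concrete what such a programming obligation might involve, consider the following minimal sketch of a pre-engagement legal check, written in Python. Everything in it is hypothetical and invented for illustration: the perception outputs (is_surrendering, object_class), the category list, and the engagement_permitted function are assumptions, not descriptions of any fielded system.

    # Hypothetical pre-engagement legal check (illustrative only).
    # Assumed perception outputs: target.is_surrendering, target.object_class.
    from collections import namedtuple

    Target = namedtuple("Target", ["is_surrendering", "object_class"])

    # Illustrative, non-exhaustive list of legally protected structures.
    PROTECTED_CLASSES = {"place_of_worship", "hospital", "civilian_school"}

    def engagement_permitted(target) -> bool:
        """Return False whenever the Law of Armed Conflict plainly forbids fire."""
        if target.is_surrendering:  # e.g., a white flag was detected
            return False
        if target.object_class in PROTECTED_CLASSES:  # protected structure
            return False
        return True

    # Example: a detected attempt to surrender blocks engagement.
    assert not engagement_permitted(Target(is_surrendering=True, object_class="person"))

The point of the sketch is less its logic than its omissions: if such a check is absent, incomplete, or wrong, the liability described above may follow the code writer long after the software ships.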

The use of smaller yet lethal robots is gaining acceptance in battlefield operations. It is entirely possible that a small autonomous robot, three feet tall or less and equipped with lethal gun-firing capabilities, might be thrown or “launched” into an open building or through a window, spinning much like a whirling dervish.[xxviii] Should this whirling dervish either not be programmed to accept, or fail to recognize, internationally established symbols of surrender such as a white flag, would the forces responsible for the robot, probably including civilian software code writers, escape legal liability for these omissions?

Similarly, well-intentioned and seemingly complete software programming might go astray in other ways. Under the Law of War, and specifically Hague Convention VIII, it is forbidden to lay unanchored automatic contact mines unless they become harmless within at most one hour after they are no longer under the control of the person (or nation) who laid them.[xxix] Analogous duties attach to unmanned vehicles, particularly unmanned underwater vehicles launched from a manned “mother ship.” If the mother ship becomes lost or disabled, the unmanned “robotic” vehicle must be able, on its own, to comply with international navigational regulations and responsibilities. No matter the degree of robotic autonomy, legal responsibility is likely to exist for the actions of the untethered, “loose” unmanned vehicle.
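Read this way, the Hague VIII rule sounds almost like a software specification. The sketch below shows one way a loss-of-control fail-safe might be expressed in code; the class, its method names, and the render_harmless routine are all assumptions invented for illustration, not features of any real system.

    import time

    HAGUE_VIII_LIMIT_SECONDS = 60 * 60  # at most one hour after loss of control

    class LossOfControlFailSafe:
        # Hypothetical fail-safe: render the system harmless once the
        # controlling party has been out of contact past the legal limit.

        def __init__(self, limit_seconds=HAGUE_VIII_LIMIT_SECONDS):
            self.limit_seconds = limit_seconds
            self.last_contact = time.monotonic()
            self.armed = True

        def record_contact(self):
            # Called whenever a valid control-link message arrives.
            self.last_contact = time.monotonic()

        def tick(self):
            # Called periodically by the vehicle's main loop.
            if self.armed and time.monotonic() - self.last_contact > self.limit_seconds:
                self.render_harmless()

        def render_harmless(self):
            # Placeholder: disarm payloads, surface, broadcast position, and so on.
            self.armed = False

Even so, the legal question raised above remains: code of this kind determines when the system disarms, not who answers for it if it fails to.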

Consider the situation described by P.W. Singer in “Wired for War,” in which a robot programmed to perform sentry work misreads a harmless action as a threat.[xxx] In that scenario, a bar patron suffers from the hiccups, and the bartender makes a furtive handgun gesture in a well-intentioned attempt to scare the hiccups out of him. A robot trained to act as a sentry, and authorized to use lethal force, could plausibly fail to recognize the gesture as mimicry; if it mistakenly shot the bartender, “thinking” a lethal threat existed, it is conceivable that the writer of the software code might be responsible for the mistaken action.

The above examples illustrate how seemingly complete autonomous robotic systems may still impose legal liability on the civilians initially responsible for their use within battlespace operations. In such scenarios, the civilian software code writer’s work, and the responsibilities that attach to it, may enjoy a much longer, and largely unanticipated, legal life.


[xxiii] Robert Sparrow, Killer Robots, 24 J. Applied Phil. 62 (2007) [hereinafter Sparrow, Killer Robots].

[xxiv] Id. at 69-73.

[xxv] Peter Asaro, Robots and Responsibility from a Legal Perspective, in Proceedings of the IEEE 2007 International Conference on Robotics and Automation, Workshop on RoboEthics (Rome, Italy, Apr. 14, 2007).

[xxvi] Sparrow, Building a Better WarBot, supra note 9, at 172.

[xxvii] Id. at 172-73.

[xxviii] For example, the TALON robot has been cited by its manufacturer for its extensive use in military operations in Afghanistan and Iraq. It is small and is capable of a variety of uses, including the ability to deliver weapons fire. The manufacturer’s web site specifically states that “TALON’s multi-mission family of robots includes one specifically equipped for tactical scenarios frequently encountered by police SWAT units and MPs in all branches of the military. TALON SWAT/MP is a tactical robot that can be configured with a loudspeaker and audio receiver, night vision and thermal cameras and a choice of weapons for a lethal or less-than-lethal response.” QinetiQ, TALON Robots – TALON SWAT/MP.

[xxix] The Law of War governing the laying of underwater contact mines is set out in Article 1 of Hague Convention VIII of October 18, 1907, which states: “It is forbidden [] [t]o lay unanchored automatic contact mines, except when they are so constructed as to become harmless one hour at most after the person who laid them ceases to control them; [t]o lay anchored automatic contact mines which do not become harmless as soon as they have broken loose from their moorings; [and] [t]o use torpedoes which do not become harmless when they have missed their mark.” Convention (VIII) Relative to the Laying of Automatic Submarine Contact Mines, Oct. 18, 1907, 36 Stat. 2332, T.S. No. 541.

[xxx] Singer, supra note vi, at 81.
