International Governance of Autonomous Military Robots

12 Dec 2012

The advent of lethal, autonomous robots will have a significant impact on the use of hard power. As a result, multinational dialogue on how to govern the use of such systems should begin before they are widely deployed, or so argues the Autonomous Robotics thrust group.

Editor’s note: Developing and applying new technologies has been a critical component of modern military strategy and preparedness. However, new technologies inevitably require operational, political, legal and ethical adjustments. One of the biggest adjustments is deciding how to govern their use, as members of the Consortium on Emerging Technologies, Military Operations, and National Security (CETMONS) discuss.

Introduction

Military technology is a field driven by change – the constant pursuit to be better, faster, stronger. Certain technological achievements, such as guns and planes, have occurred in full public view and have revolutionized warfare as we know it. Yet many technological changes have occurred under the radar, in military labs and private test fields, with most citizens unaware of the leaps and bounds of progress. Robotics is one such modern military technology that has largely escaped public attention to date. Combining the most advanced electronic, computer, surveillance, and weapons technologies, the robots of today have extraordinary capabilities and are quickly changing the landscape of battle and the dynamics of war. One of the most important achievements has been the creation of robots with autonomous decision-making capability[i]. In particular, the development of autonomous robots capable of exerting lethal force, known as lethal autonomous robots (“LARs”), has significant implications for the military and society.

These systems create complex legal, ethical, and political issues that were never before anticipated – issues in need of prompt attention and action. There have recently been growing calls for the potential risks and impacts of LARs to be considered and addressed in an anticipatory and preemptive manner. For example, in October 2010, a United Nations human-rights investigator recommended in a report that “[t]he international community urgently needs to address the legal, political, ethical and moral implications of the development of lethal robotic technologies[ii].” In September 2010, a workshop of experts on unmanned military systems held in Berlin issued a statement (supported by a majority, but not all, of the participants) calling upon “the international community to commence a discussion about the pressing dangers that these systems pose to peace and international security and to civilians[iii].” While there is much room for debate about what substantive policies and restrictions (if any) should apply to LARs, there is broad agreement that now is the time to discuss those issues. The recent controversy over unmanned aerial vehicles (“UAVs”) that remain under human control (often referred to as “drones”) demonstrates the importance of anticipating and proactively addressing concerns about the next generation of such weapons – autonomous, lethal robotics[iv].

This article seeks to provide background on some of these issues and to start the much-needed legal and ethical dialogue on the use of lethal autonomous robotic technologies in the military context. The next part (Part II) of this article provides a brief history and illustrations of autonomous robots in the military, including the pending development of LARs. Part III sets forth a number of important ethical and policy considerations regarding the use of robots in military endeavors. Part IV reviews the current patchwork of guidelines and policies that apply to the use of military robots. Part V considers the role that international treaties and agreements might play in the governance of LARs, while Part VI investigates the potential role of soft-law governance mechanisms such as codes of conduct.

Background on Autonomous Military Robotics

In the United States there has been a long tradition of applying innovative technology on the battlefield, which has often translated into military success[v]. The Department of Defense (“DOD”) naturally extended this approach to robotics. Primary motivators for the use of intelligent robotic or unmanned systems on the battlefield include:

Force multiplication – with robots, fewer soldiers are needed for a given mission, and an individual soldier can now do a job that previously required many.

Expanding the battle-space – robots allow combat to be conducted over larger areas than was previously possible.

Extending the warfighter’s reach – robotics enables an individual soldier to act deeper into the battle-space by, for example, seeing farther or striking farther.

Casualty reduction – robots permit removing soldiers from the most dangerous and life-threatening missions.

The initial generation of military robots generally operates under direct human control, such as the “drone” unmanned aerial vehicles used by the U.S. military for unmanned air attacks in Pakistan, Afghanistan, and other theaters[vi]. However, as robotics technology continues to advance, a number of factors are pushing many robotic military systems toward increased autonomy. One factor is that, as robotic systems perform a larger and more central role in military operations, they will need to continue to function, just as a human soldier would, if communication channels are disrupted. In addition, as the complexity and speed of these systems increase, interjecting relatively slow human decision-making into the process will increasingly limit performance. As one commentator recently put it, “military systems (including weapons) now on the horizon will be too fast, too small, too numerous, and will create an environment too complex for humans to direct.”[vii]
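To make the communications-disruption factor concrete, the minimal Python sketch below illustrates one way a platform might degrade from remote operation to a constrained autonomous fallback when its command link times out. It is purely illustrative: the names and parameters (e.g., UnmannedPlatform, LINK_TIMEOUT_S) are invented for this example and do not describe any fielded system.

```python
import time
from enum import Enum, auto

class ControlMode(Enum):
    REMOTE = auto()      # operator in the loop
    AUTONOMOUS = auto()  # constrained fallback behavior

# Hypothetical timeout; a real system would derive this from doctrine
# and rules of engagement rather than a hard-coded constant.
LINK_TIMEOUT_S = 2.0

class UnmannedPlatform:
    def __init__(self) -> None:
        self.mode = ControlMode.REMOTE
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self) -> None:
        """Record a command-link packet and restore remote control."""
        self.last_heartbeat = time.monotonic()
        self.mode = ControlMode.REMOTE

    def step(self) -> str:
        """One control-loop tick: fall back to autonomy only on link loss."""
        if time.monotonic() - self.last_heartbeat > LINK_TIMEOUT_S:
            self.mode = ControlMode.AUTONOMOUS
        if self.mode is ControlMode.REMOTE:
            return "execute operator command"
        # Fallback keeps acting within pre-authorized constraints:
        # here, withhold lethal force and try to regain the link.
        return "hold fire; continue surveillance; re-establish link"

platform = UnmannedPlatform()
print(platform.step())   # link fresh -> remote control
time.sleep(LINK_TIMEOUT_S + 0.1)
print(platform.step())   # heartbeat stale -> autonomous fallback
```

The design choice worth noting in this sketch is that autonomy is a degraded, constrained mode rather than a license to act freely – one conceivable way to keep a human accountable for lethal decisions even through link outages.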

Based on these trends, many experts believe that autonomous, and in particular lethal autonomous, robots are an inevitable and relatively imminent development[viii].

Indeed, several military robotic-automation systems already operate at a level where the human is still in charge of, and responsible for, the deployment of lethal force, but not in a directly supervisory manner. Examples include: (i) the Phalanx system for Aegis-class cruisers in the Navy, “capable of autonomously performing its own search, detect, evaluation, track, engage and kill assessment functions[ix]”; (ii) the MK-60 encapsulated torpedo (CAPTOR) sea mine system – one of the Navy’s primary anti-submarine weapons, capable of autonomously firing a torpedo; (iii) cruise missiles; (iv) the Patriot anti-aircraft missile batteries; (v) “fire and forget” missile systems generally; and (vi) anti-personnel mines or, alternatively, other more discriminating classes of mines (e.g., anti-tank)[x]. These devices can each be considered robotic by some definitions, as they are all capable of sensing their environment and actuating, in these cases through the application of lethal force.

In 2001, Congress mandated that by 2010 one-third of all U.S. deep-strike aircraft should be unmanned, and that by 2015 one-third of all ground vehicles should likewise be unmanned[xi]. More recently, in December 2007, the DOD issued an Unmanned Systems Roadmap spanning twenty-five years, through 2032, that likewise anticipated and projected a major shift toward greater reliance on unmanned vehicles in U.S. military operations[xii].

As early as the end of World War I, precursors of autonomous unmanned weapons appeared in a project on unpiloted aircraft conducted by the U.S. Navy and the Sperry Gyroscope Company[xiii]. Multiple unmanned robotic systems that employ lethal force are already being developed or are in use, such as the ARV (Armed Robotic Vehicle), a component of the Future Combat System (“FCS”); Predator UAVs (unmanned aerial vehicles) equipped with Hellfire missiles, which have already been used in combat but under direct human supervision; and an armed platform under development for use in the Korean Demilitarized Zone, to name a few[xiv].

The TALON SWORDS platform developed by Foster-Miller/QinetiQ has already been put to the test in Iraq and Afghanistan and is capable of carrying lethal weaponry (M240 or M249 machine guns, or a Barrett .50 caliber rifle). Three of these platforms served for over a year in Iraq and, as of April 2008, were still in the field, contrary to some unfounded rumors[xv].

A newer version, referred to as MAARS (Modular Advanced Armed Robotic System), is ready to replace the earlier SWORDS platforms in the field. The newer robot can carry a 40mm grenade launcher or an M240B machine gun in addition to various non-lethal weapons. The President of QinetiQ stated that the purpose of the robot is to “enhance the warfighter’s capability and lethality, extend his situational awareness and provide all these capabilities across the spectrum of combat.”[xvi]

It is interesting to note that soldiers have already surrendered to UAVs, even when the aircraft was unarmed. The first documented instance occurred during the 1991 Gulf War. An RQ-2A Pioneer UAV, used for battle-damage assessment of shelling from the U.S.S. Wisconsin, was flying toward Faylaka Island when several Iraqis hoisted makeshift white flags to surrender, thus avoiding another shelling from the battleship.[xvii] Anecdotally, most UAV units during this conflict experienced variations of attempts to surrender to the Pioneer. A logical assumption is that this trend will only increase as UAVs’ direct-response ability and firepower increase.

The development of autonomous, lethal robotics raises questions regarding whether and how these systems can adhere to the existing Laws of War as well as, or better than, human soldiers. This is no simple task, however. In the fog of war it is hard enough for a human to effectively discriminate whether or not a target is legitimate. Nevertheless, despite the current state of the art, it may be anticipated that autonomous robots will eventually be able to perform better than humans under these conditions, for the following reasons[xviii]:

- The ability to act conservatively: i.e., they do not need to protect themselves in cases of low certainty of target identification. Autonomous, armed robotic vehicles do not need to have self-preservation as a foremost drive, if at all. If needed and appropriate, they can be used in a self-sacrificing manner without reservation by a commanding officer.

- The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess.

- They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events. In addition, “[f]ear and hysteria are always latent in combat, often real, and they press us toward fearful measures.”[xix] Autonomous agents need not suffer similarly.

- Avoidance of the human psychological problem of “scenario fulfillment” is possible, a factor believed to have partly contributed to the downing of an Iranian airliner by the U.S.S. Vincennes in 1988.[xx] This phenomenon leads to distortion or neglect of contradictory information in stressful situations, where humans use new incoming information in ways that fit only their preexisting belief patterns, a form of premature cognitive closure. Robots can be developed so that they are not vulnerable to such patterns of behavior.

- Robots can integrate more information from more sources, far faster, before responding with lethal force than a human possibly could in real time. These data can arise from multiple remote sensors and intelligence sources (including human ones), as part of the Army’s network-centric warfare concept and the concurrent development of the Global Information Grid.[xxi] “[M]ilitary systems (including weapons) now on the horizon will be too fast, too small, too numerous and will create environments too complex for humans to direct.”[xxii] (A toy sketch of such bias-free, multi-source evidence integration appears after this list.)

- When working as an organic asset in a team with human soldiers, autonomous systems have the potential capability of independently and objectively monitoring ethical behavior on the battlefield by all parties and reporting any infractions observed. This presence alone might lead to a reduction in human ethical infractions.
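As a toy illustration of the evidence-integration point above, and of how “scenario fulfillment” might be avoided, the Python sketch below applies a standard Bayesian update to a few hypothetical, independent sensor reports about a contact. The report names, likelihood ratios, and engagement threshold are all invented for illustration; they are not drawn from any cited system.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update P(hostile) given one report.

    likelihood_ratio = P(report | hostile) / P(report | not hostile).
    """
    odds = prior / (1.0 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical likelihood ratios for independent sensor reports; real
# values would come from validated sensor models, not this toy.
reports = [
    ("radar: descending attack profile", 4.0),   # suggests hostility
    ("IFF transponder: civilian code",   0.05),  # strongly contradicts
    ("comms: no response to warnings",   2.0),
]

p_hostile = 0.5  # neutral prior
for name, lr in reports:
    p_hostile = bayes_update(p_hostile, lr)
    print(f"{name:35s} -> P(hostile) = {p_hostile:.3f}")

ENGAGE_THRESHOLD = 0.95  # illustrative; rules of engagement would govern
print("engage" if p_hostile >= ENGAGE_THRESHOLD else "hold fire")
```

Under “scenario fulfillment,” a stressed human who already expects an attack might discount the civilian transponder report; the mechanical update above cannot, which is the sense in which such systems could be made less vulnerable to premature cognitive closure.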

The trend is clear: Warfare will continue and autonomous robots will ultimately be deployed in the conduct of warfare. The ethical and policy implications of this imminent development are discussed next, followed by a discussion of governance options.


[i] See generally Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots (2009).

[ii] Patrick Worsnip, U.N. Official Calls for Study of Ethics, Legality of Unmanned Weapons, Wash. Post, Oct. 24, 2010.

[iii] Statement of the 2010 Expert Workshop on Limiting Armed Tele-Operated and Autonomous Systems, Berlin, Sept. 22, 2010.

[iv] P.W. Singer, Military Robots and the Laws of War, New Atlantis, Winter 2009, at 25, 43.

[v] Material from this section is derived with permission from Arkin, supra note i.

[vi] See generally Peter W. Singer, Wired for War (2009); Peter Bergen & Katherine Tiedemann, Revenge of the Drones: An Analysis of Drone Strikes in Pakistan, New America Foundation, Oct. 19, 2009, available at http://www.newamerica.net/publications/policy/revenge_of_the_drones (last visited Nov. 14, 2010).

[vii] Thomas K. Adams, Future Warfare and the Decline of Human Decisionmaking, Parameters, U.S. Army War College Quarterly, Winter 2001-02, at 57-58.

[viii] Arkin, supra note i, at 7-10; see generally George Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control (2005); Robert Sparrow, Building a Better WarBot: Ethical Issues in the Design of Unmanned Systems for Military Applications, 15 Sci. Eng. Ethics 169, 173-74 (2009) [hereinafter Sparrow, Building a Better WarBot].

[ix] U.S. Navy, Phalanx Close-In Weapon System, United States Navy Fact File.

[x] Antipersonnel mines have been banned by the Ottawa Treaty on antipersonnel mines, although the U.S., China, Russia, and thirty-four other nations are currently not party to that agreement. Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction (Ottawa Treaty), Sept. 18, 1997, 2056 U.N.T.S. 211. Recent developments, however, indicate that the U.S. is evaluating whether to become a party to the Ottawa Treaty. See Mark Landler, White House Is Being Pressed to Reverse Course and Join Land Mine Ban, N.Y. Times, May 7, 2010, at A9.

[xi] Adams, supra note vii, at 57-58.

[xii] U.S. Department of Defense, DOD Unmanned Systems Roadmap: 2007-2032 (2007).

[xiii] Adams, supra note vii, at 57.

[xiv] See Arkin, supra note i, at 10.

[xv] Foster-Miller Inc., Products & Service: TALON Military Robots, EOD, SWORDS, and Hazmat Robots (2008).

[xvi] QinetiQ, Press Release: QinetiQ North America Ships First MAARS Robot, June 5, 2008.

[xvii] Rebecca Maksel, Predators and Dragons, Air & Space Magazine, July 1, 2008.

[xviii] Arkin, supra note i, at 29-30.

[xix] Michael Walzer, Just and Unjust Wars 251 (4th ed., 1977).

[xx] Scott D. Sagan, Rules of Engagement, in Avoiding War: Problems of Crisis Management 443, 459-61 (Alexander L. George ed., 1991).

[xxi] DARPA (Defense Advanced Research Projects Agency), Broad Agency Announcement 07-52, Scalable Network Monitoring, Strategic Technology Office, Aug. 2007.

[xxii] Adams, supra note vii, at 58.
