International Governance of Autonomous Military Robots (Page 3)

12 Dec 2012

The appearance of lethal and yet autonomous robots will have a significant impact on the use of hard power. As a result, multinational dialogue on how to govern the use of such systems should occur before they are widely applied, or so argues the Autonomous Robotics thrust group.

(Continued from Page 2)

Complexity and Unpredictability

Unfortunately, a full awareness of the risks from autonomous robots may be impossible. Wallach and Allen discuss how predicting the relevant dangers can be fraught with uncertainty.[xxxi]

For example, a semi-autonomous anti-aircraft gun malfunctioned and killed nine South African soldiers, wounding fourteen others.[xxxii] Roger Clarke pointed out years ago that, “[c]omplex systems are prone to component failures and malfunctions, and to intermodule inconsistencies and misunderstandings.”[xxxiii] Blay Whitby echoes this concern, arguing that computer programs often do not behave as predictably as their programmers would hope.[xxxiv] Experts from computing, robotics, and other relevant communities need to continue weighing in on the matter so the reliability of LARs can be more thoroughly assessed.

Perhaps robot ethics has not received the attention it needs, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways. Even straightforward, simple rules such as Asimov’s Laws of Robotics can create unexpected dilemmas.[xxxv] Furthermore, increasing complexity may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity.[xxxvi]

Major research efforts are also being devoted to enabling robots to learn from experience. Learning may enable a robot to respond to novel situations, an apparent blessing given that designers cannot practically anticipate every eventuality. But this capability raises the question of whether what the robot will learn can be predicted with reasonable certainty. Arguably, if a robot’s behavior could be adequately predicted, the robot could simply be programmed to behave in those ways in the first place, with no need for learning. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments rather than the carefully structured domain of a factory or test laboratory.
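To make the unpredictability point concrete, consider the following sketch. It is purely illustrative: the toy corridor environment, the reward values, the hyperparameters, and the `train` function are all invented here and do not describe any fielded robotic system. It trains a simple reinforcement-learning agent several times with different random seeds; because the learned behavior depends on the experience the agent happens to gather, identical code can settle on different behaviors across runs.

```python
# Illustrative sketch only: a tabular Q-learning agent in a toy corridor.
# Nothing here models a real robot; it demonstrates one point from the text:
# what a learning system ends up doing depends on its (random) experience,
# so the same program can yield different behaviors across training runs.
import random

N_STATES = 5               # corridor cells 0..4; the agent starts in the middle (2)
ACTIONS = (-1, +1)         # step left or step right
GOALS = (0, N_STATES - 1)  # both ends are terminal and equally rewarded

def train(seed, episodes=50, alpha=0.5, gamma=0.9, epsilon=0.3):
    """Train one agent and return the direction it prefers from the start state."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 2
        while s not in GOALS:
            # epsilon-greedy exploration: sometimes act randomly
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = s + a
            r = 1.0 if s2 in GOALS else 0.0
            best_next = 0.0 if s2 in GOALS else max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return "left" if max(ACTIONS, key=lambda a: q[(2, a)]) == -1 else "right"

# Identical code, different random experience, possibly different learned behavior.
for seed in range(5):
    print(f"seed {seed}: the agent learned to move {train(seed)}")
```

Across seeds the printed decision can differ even though every run executes exactly the same code; scaled up to systems operating in unstructured environments, this is the predictability problem described above.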

Just-War Theory

An overarching concern is whether the use of LARs is consistent with time-honored principles, rules, and codes that guide military operations, including just-war theory, the Law of Armed Conflict (“LOAC”), and the Rules of Engagement. Scholars are already starting to analyze whether LARs will be capable of fulfilling the requirements of just-war theory. Noel Sharkey, for example, doubts that robots will be capable of upholding the principle of discrimination, which requires distinguishing between combatants and illegitimate targets such as civilians and surrendering soldiers.[xxxvii]

While discussing military robots and the principle of proportionality, Sparrow argues that “decisions about what constitutes a level of force proportionate to the threat posed by enemy forces are extremely complex and context dependent and it is seemingly unlikely that machines will be able to make these decisions reliably for the foreseeable future.”[xxxviii] However, at this point, it remains an open question whether the differences between LARs and existing military technology are significant enough to bar the former’s use.

Further, Asaro examines whether military robots may actually encourage wars by altering pre-conflict proportionality calculations and last resort efforts.[xxxix] A fundamental impediment to war is the loss of human life, especially the lives of fellow citizens; casualties are a significant reason why wars are not more common. Sending an army of machines to war—rather than friends and relatives—may not exact the same physical and emotional toll on a population.[xl] Assuming the existence of a just cause, one could celebrate this new calculus, which more readily permits legitimate self-defense. However, this reduced cost may, in turn, reduce the rigor with which non-violent alternatives are pursued and thus encourage unnecessary—and therefore unjust—wars. While this possible moral hazard obviously does not require us to maximize war costs, it does require efforts to inform and monitor national security decision-makers.

Finally, Singer suggests that LARs could undermine counterinsurgency efforts, where indigenous respect and trust are crucial to creating a reasonable chance of success.[xli] Unmanned weapons may be perceived as indicative of flawed character and/or tepid commitment, and they are incapable of developing the necessary personal relationships with local citizens. And even if remote-controlled or autonomous weapons are more discriminate than soldiers, they are commonly perceived as less so.

Civilian Applications

Technology developed for military purposes frequently has civilian applications and vice versa. For instance, Singer notes how the REMUS (Remote Environmental Monitoring Unit), originally used by oceanographers, was later deployed in a modified form in Iraq.[xlii] Further, many federal and state agencies have sought permission to use military technology such as UAVs.[xliii] Consequently, once LARs are developed for military use, what might the implications be down the road for their use in civilian contexts?[xliv]

Broader Ethical and Social Considerations

Any decisions and policies regarding military development and use of LARs will impact, and be impacted by, broader technological and social considerations.[xlv] The failure to acknowledge these considerations up front and to include them in the analysis can lead to dysfunctional results and to the unnecessary failure of legal and policy initiatives. Accordingly, we highlight some relevant considerations and their potential implications.

Consider first what an LAR actually is: one of many potential functions built on a generic technology platform, which itself may be highly variable. Thus, a “lethal” function, such as a repeating kinetic weapon mounted on a robotic system, is one that is intentionally programmed to identify, verify, and eliminate a human target. At other times, the same basic robotic system might be fitted for carrying cargo, for surveillance, or for many other functions, in either a military or a civilian capacity. “Autonomous” means that the platform is capable of making the necessary decisions on its own, without intervention from a human. This may or may not involve the lethality function: one might, for example, tell a cargo robot to find the best way to a particular destination and simply let it go.[xlvi]

Similarly, “robot” may sound obvious, but it is not. Tracked machines such as the Talon or PackBot, or UAVs such as the Predator or Raven, are fairly obvious robotic technologies, but what about “smart” land mines that are inert until they sense the proper triggering conditions and are capable of jumping, or exploding, or doing whatever else they are built to do?[xlvii] What about bombs that, once deployed, glide above the battlefield until they find an enemy target, which they then attack while sparing cars, buses, and homes?[xlviii] And what about a grid of surveillance/attack cybersects (insect-sized robots, or cyborgs consisting of biological insects with robotic functions integrated into them)? Each cybersect taken alone may be too insignificant and dumb to be considered a robot, but the cybersect grid as a whole may be quite intelligent.[xlix] Going one step further, what should we call a weapons platform that is wirelessly connected directly to a remote human brain? (In one recent experiment, a monkey at Duke University with a chip implanted in its brain was wirelessly connected to a robot in Japan and kept that robot running by thought alone, so that the robot was in essence an extension of the monkey’s own body.)[l] Even now, the Aegis computer fire-control system deployed on Navy vessels comes with four settings: “semiautomatic,” where humans retain control over the firing decision; “automatic special,” where humans set the priorities but Aegis determines how to carry them out; “automatic,” where humans are kept in the loop but the system works without their input; and “casualty,” where the system does what it thinks necessary to save the ship.[li]
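These settings suggest that the degree of autonomy can be a configuration choice in software rather than a visible property of the hardware. The sketch below is hypothetical: it is not based on the actual Aegis software, and the mode names, function names, and gate logic are simplified inventions used only to show how the same engagement routine can sit behind very different authorization gates depending on a single mode setting.

```python
# Hypothetical sketch only: NOT the Aegis fire-control software. It illustrates
# how a single "doctrine" setting can move a human into, onto, or out of the
# decision loop for the same underlying engagement routine.
from enum import Enum

class Mode(Enum):
    SEMIAUTOMATIC = 1      # a human makes the firing decision
    AUTOMATIC_SPECIAL = 2  # humans set priorities; the system executes them
    AUTOMATIC = 3          # humans are informed but not required to act
    CASUALTY = 4           # the system acts on its own to save the ship

def authorize_engagement(mode, threat_priority, human_approval, human_veto):
    """Return True if this hypothetical system may engage a detected threat.

    threat_priority: True if the threat matches operator-set priorities.
    human_approval:  True if an operator has explicitly approved firing.
    human_veto:      True if an operator has explicitly overridden the system.
    """
    if mode is Mode.SEMIAUTOMATIC:
        return human_approval                      # human in the loop
    if mode is Mode.AUTOMATIC_SPECIAL:
        return threat_priority and not human_veto  # human sets the policy
    if mode is Mode.AUTOMATIC:
        return not human_veto                      # human on the loop
    if mode is Mode.CASUALTY:
        return True                                # human out of the loop
    return False

# Example: the same detected threat, four different answers about who decides.
for mode in Mode:
    print(mode.name, authorize_engagement(mode, threat_priority=True,
                                           human_approval=False, human_veto=False))
```

The point is not the particular gate logic, which is made up, but that where human judgment enters, and thus where legal and ethical responsibility attaches, can hinge on a mode flag rather than on any visible difference in the machine itself.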

This brief digression raises serious questions about what an LAR actually is. Certainly, it has a technology component, but in some ways this is almost trivial compared to its social, political, ethical, and cultural dimensions. In fact, one might well argue that in many important ways an LAR is more of a cultural construct than a technology, with its meaningful dimension being intent rather than the technology system.[lii]

This is an important point, for it suggests that any legal or regulatory approach that focuses on technology may be misplaced; conversely, it means that the underlying technologies that come together in an LAR will continue to evolve independently of any direct controls on LARs, including the functionality of the physical hardware, the sophistication of the software, and the integrated technological capability we call “autonomy.”

It is because the technologies are separable from their use that the discussion of LARs is frequently confused: LARs are often discussed as if they were a single “military technology,” when in fact they are a set of technologies that can be integrated in ways that are effective and desirable under current military conditions. Let us begin by identifying two levels at which technologies function: Level I, or the shop-floor level; and Level II, or the institutional and social system level.[liii]

Thus, for example, if one gives a vaccine to a child to prevent her from getting a particular disease, one is dealing with a Level I technology system: the desired goal, preventing disease, is inherent in the use of the technology itself. On the other hand, if one starts a vaccine program in a developing country in order to encourage economic growth through better health, it is a Level II system: use of vaccines may contribute to such a goal, but many intervening systems, pressures, policies, and institutions stand between the technology and that outcome.

To return to LARs, then, one might begin by asking why such a technology should be deployed in any form. Here there is serious coupling between Level I and Level II issues. The immediate Level I response is that deployment of LARs would save soldiers’ lives on the side that deployed them; many explosive devices in Iraq and Afghanistan that might otherwise have killed and maimed soldiers have been identified, and eliminated, by robots. But this answer in some ways only raises a more serious Level II question. In World War I, for example, generals thought little of killing 100,000 men at a go by sending them into the teeth of concentrated machine gun fire. Simply avoiding casualties, then, is by itself an inadequate explanation. That world, however, has changed, especially for the U.S. military, which faces a particularly stark dilemma. It is charged by its citizens with being able to project force anywhere around the world, under virtually any conditions. But, for a number of reasons, American civilians have become increasingly averse to any casualties. So the U.S. military finds itself in the dilemma of being required to project its power without American soldiers dying. Additionally, the long-term demographics are not favorable: the United States, like other developed countries, faces an aging population, with the immediate implication that there are fewer young people to fill boots on the ground.[liv]

The institutional and social context of military operations for the United States is increasingly one in which military productivity becomes paramount, with productivity measured as mission accomplishment per soldier lost. And robots can potentially contribute significantly to achieving such productivity. It’s not just about saving soldiers’ lives, a Level I consideration. It’s also about building the capability to continue to project power with fewer casualties, and to do so because culture and society are changing to make fatalities, whether of soldiers or civilians, less acceptable, which are Level II trends.

In sum, LARs raise a broad range of complex ethical and social issues, which we have only begun to address here. Suffice it to say, though, that any attempt to regulate or govern such technology systems must address these issues in addition to the more concrete technological and legal issues. The various models available to attempt this task are discussed in the following sections.


[xxxi] Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right from Wrong 189-214 (2009).

[xxxii] Noah Shachtman, Robot Cannon Kills 9, Wounds 14, Wired.com, Oct. 18, 2007.

[xxxiii] Roger Clarke, Asimov's Laws of Robotics: Implications for Information Technology-Part II, 27 Computer 57, 65 (1994).

[xxxiv] Blay Whitby, Computing Machinery and Morality, 22 AI & Society 551, 551-563 (2008).

[xxxv] Isaac Asimov, I, Robot (1950).

[xxxvi] See, e.g., Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (1999); Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology (2005).

[xxxvii] Noel Sharkey, Cassandra or False Prophet of Doom: AI Robots and War, 23(4) IEEE Intelligent Systems 14, 16-17 (2008); Noel Sharkey, The Ethical Frontiers of Robotics, 322 Science 1800, 1800-01 (2008) (“[N]o computational system can discriminate between combatants and innocents in a close-contact encounter”).

[xxxviii] Sparrow, Building a Better WarBot, supra note viii, at 178.

[xxxix] Peter Asaro, How Just Could a Robot War Be?, in Adam Briggle, Katinka Waelbers, and Philip Brey (eds.), Current Issues in Computing and Philosophy 1, 7-9 (2008).

[xl] Robert Sparrow, Predators or Plowshares? Arms Control of Robotic Weapons, IEEE Tech. Soc’y Magazine, Spring 2009, at 25, 26 (hereinafter “Sparrow, Predators or Plowshares?”).

[xli] Singer, supra note vi, at 299.

[xlii] Id. at 37-38.

[xliii] Anne Broache, Police Agencies Push for Drone Sky Patrols, CNET News, Aug. 9, 2007.

[xliv] See generally Gary Marchant and Lyn Gulley, National Security Neuroscience and the Reverse Dual-Use Dilemma, 1 Am. J. Bioethics Neuroscience 20, 20-22 (2010).

[xlv] This consideration led the Lincoln Center for Applied Ethics at Arizona State University, the Inamori International Center for Ethics and Excellence at Case Western Reserve University, and the U.S. Naval Academy Stockdale Center for Ethical Leadership to found the Consortium for Emerging Technology, Military Operations, and National Security, or CETMONS. See CETMONS. See generally also Max Boot, War Made New (2006); John Keegan, A History of Warfare (1993).

[xlvi] See, e.g., DARPA, detailing the progress in autonomous vehicles as a result of DARPA’s Grand Challenge initiative.

[xlvii] See, e.g., Dynamic Networking and Smart Sensing Enable Next-Generation Landmines.

[xlviii] See, e.g., The Calibration of Destruction, The Economist, Jan. 30, 2010, at 87, 87-88.

[xlix] Gary Kitchener, Pentagon Plans Cyber-Insect Army, BBC News, Mar. 16, 2006.

[l] See, e.g., K. C. Jones, Monkey Brains in U.S. Make Robot Walk in Japan, InformationWeek, Jan. 16, 2008.

[li] See, e.g., Singer, supra note vi, at 124-25.

[lii] Similarly, commercial jets were understood to be transportation technologies until reconceptualized by Al-Qaeda terrorists into a weapon. Many individuals, not just engineers, underestimate the social and cultural dimensions of technology. See, e.g., Wiebe Bijker, Thomas P. Hughes & Trevor Pinch (eds.), The Social Construction of Technological Systems (1997).

[liii] The analysis of technology systems also includes Level III, the earth systems level; see Braden R. Allenby, Earth Systems Engineering and Management: A Manifesto, 41 Envtl. Sci. & Tech. 7960, 7960-66 (2007).

[liv] See Noah Shachtman, Army Looks to Keep Troops Forever Young, Wired Danger Room, Apr. 24, 2009.
