Autonomous Weapon Systems: The Military's Smartest Toys?
"We are standing at the cusp of a momentous upheaval in the character of warfare, brought about by the large-scale infusion of robotics into the armed forces."
Difficulties also arise when no overt windows of opportunity exist, but one or several actors begin to employ AWS in their crisis operations, be it in support of crisis management efforts or as part of their preparations for war. In doing so, the states in question would be introducing into the crisis equation an element that is beyond their immediate control, but that nonetheless interacts with the human opponent’s strategic psychology. In effect, the artificial intelligence (AI) that governs the behavior of autonomous systems during their operational employment would become an additional actor participating in the crisis, though one that is tightly constrained by a set of algorithms and mission objectives. This may raise doubts in the human participant’s mind as to whether these systems pose a danger not just as an instrument governed by the opponent’s intentions, but independently of them. (To be fair, one could say the same about a powerful military organization like the Cold War–era Strategic Air Command, which also acted according to a logic of its own, and sometimes without effective supervision from its political masters.)
Additionally, because no loss of human life is involved, the threshold for the use of force against AWS may be lower than it is against manned systems, and the attacker may believe that he can get away with destroying them, thus triggering conflict in an act of miscalculated escalation. On the other hand, the fact that autonomous weapons aren’t fully under the control of a human agent can also be seen as introducing what Thomas Schelling called “a threat that leaves something to chance,” which could induce both sides to behave more responsibly for fear of losing control over a tense situation.
Dogs of War, Unchained
If a crisis does result in the use of force, escalation theory reminds us that much depends on how the initial stages of the conflict unfold. If escalation is immediate and dramatic, the crisis becomes irrecoverable and the conflict is unlikely to be contained. To avoid such an outcome, offensive weapons systems should be recallable and postured such that they need not be launched immediately. Moreover, at all stages of the conflict, they should be employed in ways that avoid inadvertent escalation, which occurs when conventional military actions unintentionally undermine the opponent’s nuclear deterrent. This requires accurate target discrimination, as well as a doctrine that avoids “patterns of damage or threat” to an actor’s strategic forces.
In all of these areas, AWS raise difficult questions. Recallability and loss of control, clearly, are major concerns. While strike systems along the lines of the X-47B UCAS could initially be employed under close human supervision, it is difficult to see how they could realize their full potential in those scenarios where they offer by far the greatest value added: intelligence, surveillance and reconnaissance (ISR) and strike missions deep inside well-defended territory, where communications will likely be degraded and the electronic emissions produced by keeping a human constantly in the loop could be a dead giveaway. For now, the Navy is skirting this and other bureaucratic-cultural issues by downgrading UCAS into a system that cannot really fulfill the role for which it was originally envisioned. The Air Force’s “optionally manned” long-range strike bomber (LRS-B) will face a similar dilemma in its uninhabited configuration.
While AWS would be inherently more recallable than ballistic missiles and, in fact, no less recallable than manned aircraft while they are in permissive airspace, the equation would change once they infiltrated denied zones. These systems would be among the first to cross into enemy territory, as few other assets would be survivable inside the envelope of a full-blown anti-access defense. Under an operational construct like the Joint Operational Access Concept, which currently represents the state of the art in access warfare and which envisions “striking enemy antiaccess/area-denial capabilities in depth,” the most survivable strike assets would have to penetrate deep into the defended zone and persist in enemy airspace for extended periods of time.
While executing their missions, they would be subjected to cyber and other nonkinetic attacks, and could be at least intermittently out of contact with their human supervisors over periods of twelve hours or more, depending on their unrefueled endurance. During these stints inside the defended zone, AWS might not be fully recallable or reprogrammable, even if the political situation changes, which presents a risk of undesirable escalation and could undermine political initiatives. (It should be noted that similar risks are routinely presented by submarine operations, with the potentially significant difference that these do not take place over the enemy’s home territory. The sinking of the Argentine cruiser ARA General Belgrano by HMS Conqueror during the 1982 Falklands War is a case in point.)
While they are scouting, disrupting and destroying key nodes in the enemy’s defenses, autonomous strike systems would also be presented with target discrimination challenges of some magnitude. In correctly identifying legitimate targets for attack, AWS would have to rely on some kind of pre-mission input against which to compare their sensor data, and on algorithms that ensure positive identification and limit collateral damage. Knowing that this is the case, opponents would have every incentive to complicate the targeting process by employing cover, concealment and especially deception. This could include, inter alia, relocating important assets to busy urban settings or next to inadmissible targets, such as hydroelectric dams or nuclear-power stations; altering the appearance of weapons and installations to simulate illegitimate targets, and perhaps even altering illegitimate targets to simulate legitimate ones; large-scale use of dummies and obscurants; and the full panoply of electronic deception measures. Even in the absence of such measures, discrimination can be a daunting task; to give just one example, the Chinese DF-21 medium-range ballistic missile exists in nuclear, conventional land-attack and anti-ship versions—all of which are deployed within the same organizational framework. In such cases, only reliable and up-to-date order-of-battle analysis might be able to ensure discrimination, and even then, fateful mistakes and threatening “patterns of damage” that lead to inadvertent escalation will remain a possibility.
Even If the Skies Fall Not: A Realist Case for Caution
The political decision for or against autonomous weapons will ultimately turn not on legal or moral issues, but on the answer to a very practical question: Are the advantages of removing human supervision at the point of attack so overriding that they justify taking the resultant risks? As is usually the case with novel and powerful military instruments, AWS promise to provide those who embrace them with a set of capabilities that, from a narrowly military-operational point of view, are not to be frowned upon. But as is also usually the case, these advantages will come at a strategic cost. In the case of AWS, this is likely to include serious modernization pressures, which could prove destabilizing in some instances. While they need not provide any clear-cut first-move advantages in a crisis, it is also likely that they will touch upon issues of crisis management, perhaps in ways that are not well understood at present. Finally, there are scenarios in which the introduction of autonomous strike systems could result in temporary loss of high-level control over operations, and unwanted escalation (conventional or nuclear).
None of these dangers is new or unique to AWS, and they will probably be present in future strategic rivalries, crises and conflicts, even if fully autonomous weapons and platforms never leave the drawing board. Moreover, advocates may argue that these systems can even increase stability by ensuring access, strengthening deterrence and reducing critical vulnerabilities.
That said, policy makers should not let their hands be forced by these advocates, or by the “righteous indignation” of activists in search of the next class of weapons to ban. They should exercise prudence and caution in weighing the implications of autonomous weapons for military stability against the potential benefits of introducing these systems into an equation that is highly complex as it stands. This requires that, especially where nuclear weapons come into play, the burden of proof lie with the proponents of AWS, not with their critics. It also requires that mutual restraint be explored as a serious option. At the very least, it would appear that the strategic risks presented by these systems need to be studied much more thoroughly before current demonstrator programs are allowed to mature into operational systems. Those who are driving the competition would do well also to invest in the knowledge base that is required to understand the full implications of an AWS revolution—or, indeed, to avert it, if it is found to be an unattractive prospect after all.
Michael Carl Haas is a researcher with the Global Security team at the Center for Security Studies, ETH Zurich. His areas of interest include air and missile power, military innovation, and the proliferation of advanced conventional weapons.
Image: Flickr/The U.S. Army/CC by 2.0