Autonomous Weapons: The Ultimate Military Game Changer?

Know this: if autonomous weapons are developed and introduced into the world’s arsenals, then they are unlikely to immediately revolutionize warfare.

The use of unmanned systems by the United States and other militaries has by now become widespread. These systems offer a number of advantages over their manned counterparts: they have longer range and endurance, they are stealthier, and they can be operated without putting military personnel in harm’s way. While strategists in the armed forces are still trying to determine how best to incorporate robotics into future concepts of warfare, there is little doubt that these systems will prove to be an enduring feature of human conflict. In recent years, international-security experts have begun to turn their attention toward the possibility of unmanned weapon systems that are largely or entirely autonomous.

There is no consensus yet as to what constitutes an autonomous weapon, a fact that has hindered discussion on this topic. The official definition adopted by the Pentagon refers to “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.” Very few weapons in service today fall under this category. While armed drones have proliferated, virtually all of them require direct human involvement in order to carry out their mission. However, recent advances in artificial intelligence (AI) have raised the possibility that self-directed weapons of varying complexity could emerge and eventually play a significant role in how future wars are waged.

There are three main advantages to be gained by employing autonomous weapons. The first is speed. In this context, it is appropriate to consider the concept of the OODA loop, a four-step decisionmaking process in which an individual observes, orients, decides, and then acts. In combat, there is a premium on completing one’s own decision loop as quickly as possible while at the same time disrupting, or at least delaying, that of one’s opponent. This is known as getting “inside” an enemy’s OODA cycle.

Modern technology permits the collection of tremendous amounts of data, but humans generally lack the ability to process and interpret it quickly enough to be militarily useful. As a result, military operators are increasingly relying on advanced software to provide them with information about the battle space that is both accurate and relevant. As research into AI continues to advance, computerized combat systems are likely to be able to quickly analyze a situation and then provide a recommended course of action to a military commander. If so, the human element would represent the slowest part of the decision loop, and it would be just a small step from there to eliminating direct human involvement altogether and replacing it with a system that is fully autonomous.

Another potential advantage offered by autonomous weapons lies in the realms of electronic warfare and cybersecurity. Current unmanned systems, used primarily against technologically unsophisticated insurgents, require nearly constant communication with their human operators. In a future conflict between states, however, the electromagnetic environment is likely to be highly contested. A remote-controlled weapon whose communications are severed by enemy jamming or cyber attacks would be useless. A sufficiently autonomous system, on the other hand, would be able to execute its mission even if its data links were compromised.

Lastly, autonomous weapons and platforms have the potential to dramatically reduce military personnel costs. This is probably the least sexy aspect of this topic, but it may be the most crucial. In order to maintain a modern, professional military, a nation must be able to recruit, train, and retain a large pool of highly skilled manpower. Doing so, however, is becoming increasingly expensive. In the U.S. armed forces, personnel costs rose by 46 percent between 2000 and 2014, even though the size of the American military remained essentially unchanged over that period. Unlike human beings, autonomous systems do not need to be trained, fed, housed, or paid, nor do they require medical care or retirement pay. If introduced on a large scale, they have the potential to greatly reduce manpower requirements and the associated costs.

Despite these incentives to develop and deploy autonomous weapons systems, political and military leaders may be reluctant to do so. A fundamental concern lies in the fact that autonomy deprives military commanders of operational control. The greater the weapon system’s autonomy, the less human control there is.

Advanced artificial intelligence relies on neural networks that allow a system to learn and reason on its own from large quantities of data. In the civilian world, AI of this kind has demonstrated impressive capabilities in speech and image recognition in recent years, but it also sometimes produces deeply erroneous conclusions. Troublingly, because of the complexity of such deep-learning systems, computer programmers and engineers are often unable to predict how a system will behave in unforeseen circumstances or even to explain its decisions after the fact. Pairing this kind of advanced AI with lethal weaponry would introduce the risk of spectacular system failures with far-reaching impact. If something goes wrong, the consequences could include large-scale fratricide, civilian casualties, or unintended escalation during a crisis.

Current U.S. policy on this matter states that “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” The appropriate level of human judgment is at this point unclear, but the policy goes on to expressly prohibit the employment of semi-autonomous weapons capable of selecting targets on their own if communications with human operators are severed.

Still, the Pentagon may eventually conclude that forgoing autonomous weapons systems will leave the United States at a significant military disadvantage in the future, particularly if potential adversaries do not share U.S. reluctance to field them. It is worth noting that the current U.S. military arsenal already includes several weapons systems that are merely automated or that feature only limited autonomy, most of which are defensive in nature.

There is no consensus as to when the technology needed for fully autonomous weapons systems will become available. Air Force Gen. Paul Selva, the vice chairman of the Joint Chiefs of Staff, has said that he believes it is still about a decade away. Some observers think it will arrive sooner; others contend it will never be feasible. In the United States, most research into artificial intelligence is occurring in the commercial sector rather than in the defense industry. In the future—perhaps the relatively near future—advanced AI will likely find many applications in the civilian world, from self-driving cars to medical robots that can perform surgery better than human doctors. Such technology could easily be adapted for military use. Once it becomes commercially available, it will be difficult to keep it out of the hands of malevolent actors.

If autonomous weapons are developed and introduced into the world’s arsenals, they are unlikely to immediately revolutionize warfare. If past military innovations are any guide, autonomous weapons will probably be fielded in small numbers at first. After all, early models may be expensive, and there will be uncertainty about how best to utilize their capabilities while minimizing potential risks. It will also take time for militaries to develop operational concepts that incorporate autonomous weapons into a broader framework for waging war. The side that fields them first may not necessarily be the side that figures out how best to exploit their potential.

All of that being said, autonomous weapons may well prove to be a game-changing military innovation on par with the inventions of the machine gun and radar. And they may not be the only one. The continued development of enabling AI technology is likely to occur alongside advances in other areas such as nanotechnology and miniaturization, robotics, hypersonic propulsion, and 3D printing, each of which has potential military applications. The impact these technologies will have on the way wars are fought is uncertain, but there is a growing consensus among international-security experts that the coming decades will see transformational changes in the nature of modern warfare. Autonomous systems will almost certainly play a major role in that process.

Richard Purcell is an independent national-security analyst and freelance writer in Washington, DC. He holds a master’s degree from the Johns Hopkins School of Advanced International Studies and previously worked as a legislative staffer for Senator Richard Durbin (D-IL). You can follow him on Twitter at @SecurityDilems.

Image: Flickr / US Air Force