America Can't Afford to Lose the Artificial Intelligence War

The United States must rededicate itself to being the first in the field of AI.

Today, the question of artificial intelligence (AI) and its role in future warfare is more salient and dramatic than ever before. Rapid progress on driverless cars in the civilian economy has helped us all see what may become possible in the realm of conflict. All of a sudden, it seems, terminators are no longer the stuff of exotic and entertaining science-fiction movies, but a real possibility in the minds of some. The innovator Elon Musk warns that we need to start thinking about how to regulate AI before it destroys most human jobs and raises the risk of war.

It is good that we are starting to think this way. Policy schools need to make AI a central part of their curricula; ethicists and others need to debate the pros and cons of various hypothetical inventions before the hypothetical becomes real; military establishments need to develop innovation strategies that wrestle with the subject. However, we do not believe that AI can or should be stopped dead in its tracks now; for the next stage of progress, at least, the United States must rededicate itself to being the first in this field.

First, a bit of perspective. AI is of course not entirely new. Remotely piloted vehicles may not really qualify—after all, they are humanly, if remotely, piloted. But cruise missiles already fly to an aimpoint and detonate their warheads automatically. So would nuclear warheads on ballistic missiles, if, God forbid, nuclear-tipped ICBMs or SLBMs were ever launched in combat. Semi-autonomous systems are already in use on the battlefield, such as the U.S. Navy Phalanx Close-In Weapon System, which is “capable of autonomously performing its own search, detect, evaluation, track, engage, and kill assessment functions,” according to the official Defense Department description, along with various other fire-and-forget missile systems.

But what is coming are technologies that can learn on the job—not simply follow prepared plans or detailed algorithms for detecting targets, but develop their own information and their own guidelines for action based on conditions they encounter that could not be specifically foreseen.

A case in point is what our colleague at Brookings, retired Gen. John Allen, calls “hyperwar.” He develops the idea in a new article in the journal Proceedings, coauthored with Amir Husain. They imagine swarms of self-propelled munitions that, in attacking a given target, deduce patterns of behavior of the target’s defenses and find ways to circumvent them, aware all along of the capabilities and coordinates of their teammates in the attack (the other self-propelled munitions). This is about the point where the word “robotics” no longer does justice to what is happening, since that term implies a largely prescripted process or series of actions. What happens in hyperwar is not only fundamentally adaptive, but also so fast that it far supersedes what could be accomplished by any weapons system with humans in the loop. Other authors, such as former Brookings scholar Peter Singer, have written about related technologies, partly in fictional form. Now, Allen and Husain are not just seeing into the future, but laying out a near-term agenda for defense innovation.

The United States needs to move expeditiously down this path. People have reasons to fear fully autonomous weaponry, but if a Terminator-like entity is what they are thinking of, their worries are premature. That software technology, along with the required hardware, is still at least decades away. What will be available much sooner is technology that can decide what or who is a target—based on specific rules laid out by the programmer of the software, which could be highly conservative and restrictive—and fire upon that target without any human input.

To see why outright bans on AI activities would not make sense, consider a simple analogy. The Nuclear Non-Proliferation Treaty bars all but a handful of signatories from acquiring nuclear weapons, yet it did not prevent North Korea, which withdrew from the treaty in 2003, from building a nuclear arsenal. But at least we have our own nuclear arsenal with which we can attempt to deter such countries, an approach that has been generally successful to date. A preemptive ban on AI development would not be in the United States’ best interest, because non-state actors and noncompliant states could still develop it, leaving the United States and its allies behind. Such a ban would not be verifiable, and it could therefore amount to unilateral disarmament. If Western countries banned fully autonomous weaponry and a country like North Korea fielded it in battle, the result would be a highly fraught and dangerous situation.

To be sure, we need the debate about AI’s longer-term future, and we need it now. But we also need the next generation of autonomous systems—and America has a strong interest in getting them first.

Michael O'Hanlon is a senior fellow at the Brookings Institution. Robert Karlen is a student at the University of Washington and an intern in the Center for Twenty-First Century Security and Intelligence at the Brookings Institution.
