Sitting Out of the Artificial Intelligence Arms Race Is Not an Option

The race to build autonomous weapons will have as much impact on military affairs in the twenty-first century as aircraft did on land and naval warfare in the twentieth century.

Surveying the dangerous advances in military technology, from Nazi V-weapons to hydrogen bombs, investigative journalist I.F. Stone once described arms races as the inevitable product of there being “no limit to the ingenuity of science and no limit to the deviltry of human beings.” This dark truth about the era of human-controlled “kinetic” weapons of mass destruction that so concerned Stone holds equally for the emerging range of increasingly automated systems, which may now be fusing scientific ingenuity with a silicon-based deviltry all its own.

For most of history, from stones to siege guns, warfare consisted of hurling some amount of mass with enough energy to do serious harm. The general trend has been toward ever greater mass and energy, giving weapons longer range. Yet until the first automated guidance systems came into play during World War II, the “information content” of weaponry was quite small, which limited accuracy. What began with the first ballistic and cruise missiles in 1944 quickened over the following decades, to the point that some missiles carried electronic brains of their own to guide them in flight, like the American Tomahawk, which entered service in 1983. Though the Tomahawk is launched at human command, once underway its “brain” does all of the sensing and maneuvering, over whatever distance, with great precision.

And this increasing information content of weapons isn’t just for long-range use. The stalwart Ukrainian defense that has hammered so hard at Russian tanks and helicopters has been greatly enhanced by smart, short-range anti-tank Javelins and anti-aircraft Stingers. Thus defenders whose weapons have brains of their own have given the far heavier and more numerous invading forces a very hard time.

But this is just a small slice of the widening space into which automated systems are moving. Beyond long-range missile strikes and shorter-range battlefield tactics lies a wide variety of other military applications for artificial intelligence. At sea, for example, the Chinese have more than two dozen types of mines, some with significant autonomous capabilities for sensing the type of enemy vessel and then rising from the seafloor to attack it. Needless to say, U.S. Navy Ford-class carriers, costing upwards of $10 billion apiece, can be mortally threatened by these small, smart, cheap weapons. As for the Russians, their advances in naval robotics have produced an autonomous “U-bot” that can dive deep and locate undersea fiber-optic cables, either tapping into or severing them. More than 95 percent of international communications move through the roughly 400 such cables around the world. So this bot, even produced in very small numbers, has great potential as a global “weapon of mass disruption.”

There are other ways in which silicon-based intelligence is helping to transform war in the twenty-first century. In cyberspace, with its botnets and spiders, everything from economy-draining “strategic crime” to broad infrastructure attacks is greatly empowered by increasingly intelligent autonomous systems. In outer space, the Chinese now have a robot smart enough to sidle up to a satellite and place a small shaped charge (less than eight pounds) in its exhaust nozzle; when the charge goes off, the satellite’s innards are destroyed without creating external debris. Mass disruption is coming to both the virtual and orbital realms.

The foregoing prompts the question of what the United States and its friends and allies are doing in response to these troubling advances in the military use of artificial intelligence. The answer is as troubling as the question: too little. Back in 2018, then-Under Secretary of Defense for Research and Engineering Michael Griffin acknowledged that “There might be an artificial intelligence arms race, but we’re not in it yet.” There was a glimmer of hope that the Americans might be lacing up their running shoes and getting into the AI arms race when Eric Lander became President Joe Biden’s science advisor in January 2021, as he had publicly stated that “China is making breathtaking progress” in robotics and that the United States needed to get going. But Lander apparently didn’t play well with others and resigned in February 2022. And given that NATO and other friends tend to move in tandem with the Americans, all have been slow off the mark.

Beyond personnel issues, the United States and other liberal, free-market societies are having trouble ramping up to compete in the robot arms race for three other reasons. The first is conceptual: many in military, political, and academic circles take the view that advances in artificial intelligence do not fit classical notions and patterns of weapons-based arms races. It is hard to make the case for urgency, for the need to “race,” when there doesn’t even seem to be a race underway.

Next, at the structural level, the United States and other free-market societies leave most robotics research to the private sector. The Pentagon currently spends about 1 percent of its budget (just over $7 billion) on advancing artificial intelligence, and much American private-sector AI research is focused on improving business practices and increasing consumer comfort. In China, by contrast, about 85 percent of robotics research is state-funded and military-related. The Russians follow a kind of hybrid system, with the Putin government funding research in “strategic robotics” at some 400 companies. As Putin has said in a number of his speeches, the leader in artificial intelligence “will become master of the world.” So the structure of market societies makes it a bit harder to compete with authoritarians who can, with the stroke of a pen, set their countries’ direction in the robot arms race and provide all necessary funding.

The final impediment to getting wholeheartedly into the robot arms race is ethical. Throughout the free world, there is considerable concern about the idea of giving “kill decisions” in battle over to autonomous machines. Indeed, there is so much resistance to this possibility that a major initiative at the United Nations has sought to outlaw “lethal autonomous weapon systems” (LAWS). Civil society NGOs have supported this proposed ban and drawn celebrity adherents like Steve Wozniak and Elon Musk to the cause. Pope Francis has joined this movement, too.

One of the main concerns of these objectors is the possibility that robots will unwittingly kill innocent non-combatants. Of course, human soldiers have always caused civilian casualties, and still do. Given the human penchant for cognitive lapses arising from fatigue, anger, desire for revenge, or simply the “fog of war,” a serious discussion needs to be had about whether robotic warriors are likely to cause more collateral damage than human soldiers do, or possibly less.

So far, the United States, Britain, and a few other democracies have resisted adopting a ban on weaponized robotics; but even in these lands, the increasingly heated discourse about “killer robots” has slowed development and use. Needless to say, neither China nor Russia has shown the slightest hesitation about developing military robots, giving them an edge in this arms race.

It is clear that the ideal first expressed eighty years ago in the opening clause of Isaac Asimov’s First Law of Robotics, “A robot may not injure a human being,” is being widely disregarded. And those who choose to live by the First Law, or whose organizational structures impede swift progress in military robotics, are doomed to fall fatally behind in an arms race now well underway. It is a race to build autonomous weapons that will have as much impact on military affairs in the twenty-first century as aircraft did on land and naval warfare in the twentieth century. Simply put, sitting out this arms race is not an option.

John Arquilla is Distinguished Professor Emeritus at the United States Naval Postgraduate School and author, most recently, of Bitskrieg: The New Challenge of Cyberwarfare. The views expressed are his alone.
