Will Artificial Intelligence Lead to War?


The impact of generative AI on Asian deterrence is not well understood and may create greater risks of conflict.


A deterrence strategy depends on adversaries’ perceptions of both capabilities and intentions. Today, large language models and other fast-evolving forms of generative AI could change those perceptions in ways we can scarcely anticipate. Machine learning is becoming routine in predictive maintenance, logistics, personnel systems, route calculation, and even weapon targeting. But generative AI’s influence on strategic thinking could have a far broader effect on global stability.

Consider the Indo-Pacific region. American defense officials are fixated on preserving deterrence there, striving to ensure that revisionist powers like China and North Korea understand that U.S. security guarantees are “ironclad” and that aggression would carry dire costs. But our words, and even our signaling actions, could be discounted if defense planners in Beijing or Pyongyang believe they can use AI tools to predict what we will do. Chinese and North Korean experimentation with large language models could reshape their perceptions of how and when the United States will use force to defend its allies and partners, such as Taiwan or South Korea. Deterrence rests on credible threats. Bluffing becomes difficult if an adversary, after distilling vast amounts of data, believes it has perfect insight into your thinking and actions and simply does not believe what you say.


It will all depend on the datasets. Generative AI may create accurate perceptions of the danger of war and thus reinforce deterrence. Or it may just as easily fabricate misperceptions and increase the risk of conflict erupting. We simply do not know. And that uncertainty is itself destabilizing, a wild card that raises the risk of war.

Consider some examples. China wants to use AI as part of a “smart deterrence” policy to keep the United States from intervening in a cross-strait contingency. But the unfortunate implication is that the technology, and the blind confidence it instills, could be a catalyst for war rather than a deterrent. As the PLA races to achieve the ability to impose a blockade on Taiwan or even invade it, the role predictive AI could play in Beijing’s calculations is an important and unknown emerging factor. Put differently, just because PLA officials think AI has found a “peaceful” way to seize more control over the island democracy does not mean any U.S. leader will behave according to that model. Prediction is hard enough with a person in the loop; over-reliance on AI and machine learning in strategic decision-making could add distortion, breed false confidence, and even tip the balance from confrontation into lethal conflict.

Predictive models could be aimed at outpacing and outwitting human decision-makers during conflict. Fighting wars at machine speed and with machine cognition is exactly what the PLA plans to do, according to a paper by a PLA Engineering University research team published in December 2023 in the Chinese academic journal Command Control & Simulation. The study depicts the PLA Strategic Support Force, a military command established in the last decade to integrate emerging technology for strategic effect, experimenting with commercial large language model applications similar to ChatGPT to outwit opposing militaries.

While Baidu’s Ernie Bot and iFlytek’s Spark hardly promise a surefire path to military victory, and despite an understandable desire by tech firms to distance themselves from PLA military activities, they mark the early phase of a greater reliance on machine learning, not just in specific engagements but also in strategic planning. At least Chinese and American defense officials have resumed defense talks at all levels and are beginning to search for common ground on AI and its military implications.

But take a closer look at what Kim Jong Un’s regime has been doing. Hyuk Kim, a research fellow at the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies, has stressed how North Korea has been quietly developing AI and machine learning for both civilian and military purposes. Initial findings from his comprehensive review of open-source information highlight the use of machine learning to support wargaming of tactical artillery fire. He infers that Pyongyang will next apply AI and machine learning to strategic planning.

That seems a safe assumption, given the Kim regime’s priority on military power. Since the demise of diplomacy with the United States in 2019, North Korea has embarked on a strategic reorientation. Mr. Kim announced a blueprint for a five-year sprint to build up strategic arms in January 2021. Within the last year, he has forged a strong defense partnership with Russia and abandoned the goal of peaceful unification with the South. He has also adopted a new nuclear doctrine that provides for the delegation of launch authority. That shift, together with the successful launch of a military reconnaissance satellite, a drone that penetrated the no-fly zone around South Korea’s presidential offices, a solid-fuel intermediate-range ballistic missile with a hypersonic warhead, a solid-fuel ICBM that can reach all parts of the United States, and an “underwater nuclear attack drone,” suggests a focus on speed, autonomy, and decapitation. AI-assisted speed could make deterrence irrelevant, especially because there is no military-to-military dialogue with North Korea.

At the strategic level, there is increasing evidence that our potential adversaries could use AI tools to test deterrence, level the battlefield, and gain offensive advantage. Revisionist powers such as China and North Korea could see AI as a magical breakout solution to end a stalemate, grab neighboring territory, or test a wobbly rules-based order.

To offset this risk, three areas require renewed policy scrutiny and investment.

First, U.S. policymakers should reinforce military-to-military channels with China and seek to establish similar discussions with other potential adversaries, including North Korea, Russia, and Iran. Such cooperation may be unlikely in the short term, but we can gain insights simply by trying to understand how our potential adversaries think about these technologies. An indirect approach can also be useful, which is another reason to renew some version of the U.S.-China Science and Technology Cooperation Agreement. After all, with respect to China, the goal should be de-risking, not decoupling. Discussing the strategic role of AI tools would reduce risk.


Second, the United States should engage in more and better wargaming that tests the role of generative AI at the level of strategic planning. Lower-level machine learning and AI tools have proven useful. But the impact of predictive language models on human perceptions and strategic decision-making is overdue for analysis, before a major power makes a major mistake.

Third, the United States needs to place a higher priority on educating and training the next generation of American professionals who must grapple with security and technology. If we are to navigate what National Security Adviser Jake Sullivan has dubbed an “age of disruptive change,” we need leaders and workers in the public, private, and civil sectors to better understand the complexities, risks, and opportunities of new technologies.

Defense diplomacy, wargaming, and education alone will not prevent tomorrow’s wars. But a complacent and dilatory approach to anticipating how prospective adversaries will make strategic decisions only compounds tomorrow’s risks.

About the Authors 

Dr. Patrick M. Cronin is the Asia-Pacific security chair at the Hudson Institute. Follow him on X @PMCroninHudson.

Dr. Audrey Kurth Cronin is the director of the Carnegie Mellon Institute for Strategy and Technology (CMIST) and Trustees Professor of Security and Technology at Carnegie Mellon University. Follow her on X @AKCronin.

Image: M1 Abrams tank (U.S. Marines / Flickr).