How to Slow the Spread of Lethal AI
Today, it is far too easy for reckless and malicious actors to get their hands on the most advanced and potentially lethal machine-learning algorithms.
Technology reporter Paul Mozur sounded shocked as he described his firsthand experience with First Person View (FPV) drones in Ukraine during a recent podcast appearance on The Daily. Mozur recounts being taken to a park just outside Kyiv by a group of young men who had started an autonomous drone company. He describes the company’s CEO getting on a motorcycle and speeding off down a dirt road, one of the firm’s AI-powered autonomous drones in hot pursuit, guided by its onboard tracking algorithm. After allowing a brief chase, one of his teammates turns off the autopilot, and the drone ascends into the air to the young men’s laughter. “It’s a funny moment,” Mozur recalls, clearly alarmed, “but the darker reality of it is that if this was an armed drone with a shell, and if they hadn’t hit that button at the end, their CEO would have been a goner.”
Many Ukraine watchers and longtime AI observers would not have shared Mozur’s shock. Both Ukraine and Russia have heavily relied on FPV drones—small, unmanned aerial vehicles guided by a pilot watching a video feed through goggles or on a screen—for reconnaissance, anti-personnel, anti-armor, and anti-artillery operations during the current war. They have played a role in the destruction of thousands of pieces of Russian equipment. It was only a matter of time before AI entered the picture.
AI-enabled autonomous weapons threaten to destabilize the international system. Their significant cost advantages, the widespread availability of the algorithms that power them, and the tactical problems they solve will incentivize their adoption by all manner of actors. These weapons have the potential to save soldiers’ lives. Still, their spread will also empower rogue states, criminal networks, terrorists, and even private corporations and citizens long locked out of the market for precision weaponry. The United States must do what it can to slow their spread.
The decentralization and democratization of warfare that FPVs enable are already starting to play out in Ukraine, including in Kyiv’s ongoing Kursk offensive. First, FPVs spare militaries the massive cost of acquiring and maintaining a highly trained surveillance and targeting bureaucracy. Second, FPV drones themselves are significantly cheaper than traditional artillery. Unguided artillery shells cost between $800 and $9,000, GPS-guided shells cost around $100,000, and Javelin anti-tank missiles can exceed $200,000, while the typical FPV costs around $400. Given that FPVs effectively serve as the ultimate guided artillery shell, this cost differential is substantial.
However, the explosive payloads carried by FPVs are much smaller than those of a typical round of heavy artillery, which can deliver ten kilograms or so of explosive ordnance with a blast radius of approximately 50 meters. In their current state, FPVs can match that destructive power only when dozens of them strike the same target, each guided by its own pilot. For this reason, FPVs have not completely supplanted artillery, though they have increasingly complemented it. For instance, FPVs have finished off enemy troops escaping fortified positions partially destroyed by artillery fire. Over time, FPVs’ substantial cost advantage will offset their disadvantage in destructive power, especially when “swarm” technology reaches maturity.
Throughout the war, the Ukrainians have had to operate with a severe shortage of artillery shells. In March 2024, Ukraine was firing 2,000 shells per day, roughly a fifth of the Russian rate. FPVs, many of them made by volunteers or soldiers using simple electronics and commercial components, have helped close this significant firepower gap. FPVs’ low cost compensates for their relatively low success rate of 50 to 80 percent in destroying targets; Javelin anti-tank missiles, by comparison, succeed roughly 90 percent of the time. Since an FPV’s price tag is a minuscule fraction of the cost of the average Javelin, militaries can afford to use more of them, netting the same number of successful strikes at a fraction of the cost.
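The arithmetic behind this claim is straightforward: dividing unit cost by success rate gives the expected spend per successful strike. The short Python sketch below works through the comparison using the rough figures cited above; the prices and success rates are the article's approximate estimates, not official procurement data.

```python
# Back-of-the-envelope comparison of expected cost per successful strike,
# using the approximate figures cited in this article (not official data).

def cost_per_successful_strike(unit_cost: float, success_rate: float) -> float:
    """Average spend required to achieve one successful strike."""
    return unit_cost / success_rate

# FPV: roughly $400 per drone, 50-80 percent success rate.
fpv_best = cost_per_successful_strike(400, 0.80)       # ~$500
fpv_worst = cost_per_successful_strike(400, 0.50)      # ~$800

# Javelin: roughly $200,000 per missile, ~90 percent success rate.
javelin = cost_per_successful_strike(200_000, 0.90)    # ~$222,000

print(f"FPV:     ${fpv_best:,.0f}-${fpv_worst:,.0f} per successful strike")
print(f"Javelin: ${javelin:,.0f} per successful strike")
```

Even at its least reliable, an FPV delivers a successful strike for a few hundred dollars, hundreds of times less than a Javelin.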
Compounding this cost advantage is an increase in tactical flexibility: FPVs’ high degree of maneuverability enables forces to harass and destroy targets that are beyond the reach of traditional artillery. For example, Ukrainian FPVs have been used to dive into tunnels to attack enemy tanks, chase down and destroy speeding vehicles, infiltrate buildings through doorways, attack enemy trenches that are sheltered from vertical bombing, and pursue and eliminate enemy troops.
Finally, FPVs have significantly compressed kill chains. The detection, selection, engagement, and elimination of targets are all carried out by a single operator in a relatively short timeframe. This capability is especially valuable in hostile environments. In Ukraine, neither side has gained uncontested control of the air, dramatically dampening the effectiveness of traditional airstrikes. Contrary to analysts’ early expectations, Ukrainian integrated air defenses, bolstered by Western aid, have been able to deter Russian near-border and cross-border aircraft attacks. Even so, continuing air and missile strikes have discouraged troops on both sides from holding fixed positions for too long. FPVs have helped the clashing armies adapt by pursuing and destroying moving units, vehicles, and personnel.
These documented cases all involve human pilots, but AI is beginning to play a more prominent role. When developers perfect an algorithm that can effectively control swarms rather than individual drones, massively increasing FPVs’ destructive potential, AI’s role will become larger still. The recent successful test of swarm technology by Swarmer, a software company based in Wilmington, Delaware, is a significant step in this direction. Swarms are superior to individually operated drones because of drone-to-drone communication, which allows sensory information gathered by one drone to be shared instantly with the entire group, letting the swarm adjust its behavior without further input from the operator or commanders. The Defense Advanced Research Projects Agency (DARPA) is heavily invested in developing swarm technology to perform reconnaissance and protect ground troops in hostile urban environments. Its research agenda specifically envisions autonomous swarms, suggesting that AI will play a crucial role.
FPVs are not the only class of autonomous weapons powered by AI. Developers in Ukraine have also begun testing automated machine guns, which can identify and aim at targets automatically, requiring only that a soldier press a button on a console to take the shot. Notably, the soldier in question can sit in a bunker at some distance from the gun and thus be protected from counterfire. Israel has used similar weapons platforms at the Gaza border and West Bank checkpoints for several years. Another example is the Collaborative Combat Aircraft (CCA), a class of autonomous fighter aircraft currently in development by the U.S. Department of Defense. These AI-powered, uncrewed aircraft will operate alongside crewed fifth- and sixth-generation fighters, receiving and implementing orders for a variety of missions, including electronic warfare, reconnaissance, and dogfighting. The direction of travel is clear: the future of warfare is increasingly automated, and AI will be front and center in this new world.
As the Ukrainian case makes clear, AI-powered autonomous weapons have the potential to significantly lower the barriers to acquiring high levels of precision firepower. Whereas traditional, high-yield, GPS-guided munitions will likely remain prohibitively expensive for all but the wealthiest states to acquire and operate in large quantities, AI-powered drones will not. Though the United States and its allies are surely grateful for this fact in the Ukrainian case, they may be less so when their adversaries begin to follow Kyiv’s example. Russia’s and Iran’s use of Shahed-136 drones to wrest back momentum on the frontlines in Ukraine, devastate Ukrainian civilian infrastructure, and terrorize Israeli civilians at long range attests to this. As both the Ukraine war and the metastasizing conflagration in the Middle East highlight, the lethal AI revolution is quickly overturning long-established military power relations. These are precisely the volatile geopolitical conditions under which disastrous miscalculations can occur.
Even more concerningly, in the age of lethal AI, disastrous escalation does not need to involve any geopolitical calculation whatsoever. Ukrainian military officials have highlighted their efforts to immunize their drones against Russian electronic warfare by handing over the targeting and engagement process to onboard AI modules, then severing telecommunications once the operator has locked onto a selected target. Yet researchers have expressed serious doubts about manufacturer assurances of their algorithms’ ability to discriminate between friendly and enemy, or military and civilian, targets. Given these technical deficiencies, the risk of unintended escalation seems unacceptably high.
The escalation problem posed by uncontrolled lethal AI systems extends beyond drone targeting. Secretary of the Air Force Frank Kendall has admitted that the decisive advantage of fully autonomous systems over those with a “human in the loop” is likely too great for military powers to ignore completely. Despite almost a year of negotiation and one of Henry Kissinger’s dying wishes, the world’s AI superpowers, the United States and China, have still not agreed to ban autonomous command-and-control of nuclear weapons. Though this lack of public commitment may simply be posturing, the threat is too grave to dismiss out of hand. It is risky enough for the United States and China each to integrate AI into military command-and-control systems across domains. It would be a recipe for disaster if scores of nation-states, and eventually non-state actors, fielded an array of weapons systems launched, piloted, and fed intelligence in split seconds by integrated AI systems without an opportunity for meaningful human deliberation.
Following the signing of the 2023 Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy by over sixty countries (spearheaded by the United States), negotiators at the UN and civil society organizations are making painstaking progress toward a treaty on lethal AI systems. Nevertheless, these efforts outline few mechanisms to enforce their rules or to prevent the “spoiler problem,” whereby unprincipled new entrants, rogue states, and non-state actors exploit the self-restraint of more scrupulous companies and polities. Without enforcement mechanisms in place soon—likely before any treaty is even finalized, and certainly before the most advanced lethal AI algorithms widely proliferate—the measures under consideration may be dead on arrival.
Currently, it is far too easy for reckless and malicious actors to get their hands on the most advanced machine-learning algorithms, including foundation models whose applications are growing more general by the day. The status quo of open-source sharing of advanced algorithms and the exclusion of those algorithms from export-control regimes is untenable from the perspective of U.S. national security. The United States must update its export-control policy on advanced software, implement a sanctions regime on irresponsible actors—including those within allied countries—and establish policies on AI Dual-Use Research of Concern (DURC), akin to longstanding and emerging biosecurity practices, to reflect the destabilizing risks of weaponized AI.
The United States should establish an interagency task force to define AI DURC precisely and set policies for its responsible handling and operationalization. Encryption, meaningful human oversight by design, robustness to misalignment, and supply-chain auditing are among the key practices on which the task force should weigh in. Given the slippery nature of algorithmic code and research, the United States will need to apply more coercive measures to enforce AI DURC guidelines than it currently applies to biosecurity DURC guidelines.
Among the goals of these policies would be to make it much harder for developers to open-source advanced algorithms with potential combat applications. The president can immediately make progress on this front by issuing an executive order barring federally funded researchers, contractors, and grantees from publishing or sharing code for such algorithms without a waiver from the Bureau of Industry and Security (BIS). The most important targets would be swarm-intelligence algorithms trained on multimedia inputs and capable of exercising analytical judgment—the key characteristics of AI algorithms that can automate the entire kill chain.
Given the current state of technological competition with foreign adversaries, the United States must remain a world leader in AI innovation, and the framework proposed here should not hamper that goal. The National Science Foundation (NSF) is currently running a pilot project to connect U.S. AI researchers to educational resources with the goal of facilitating discovery and innovation. This infrastructure should be expanded to create a platform through which registered developers can share ideas on DURC topics. The forum would be closed to foreign adversaries while facilitating collaboration among American researchers. The United States should exempt other countries from some of the more stringent regulations proposed here and grant them access to the NSF exchange if they can credibly demonstrate enforcement of mutually acceptable AI DURC safety standards. Taken together, these new regulations and the NSF exchange would balance the urgent national security imperative outlined above with the need to continue leading the world in AI innovation.
A vital benefit of this approach is that it would enable the president to immediately leverage American economic and technological prowess to build momentum toward international agreements on AI safety. It would signal sincerity to key players in those negotiations and ignite an iterative process of developing enforcement mechanisms for international AI safety standards, making negotiated treaties more plausible. Even without a grand international treaty, full implementation of these proposed policies would bring some order and due diligence to the American AI research space, help protect it from foreign exploitation, and preserve a competitive edge for American innovation and power. At the very least, this approach would slow down the adoption of autonomous weapons by adversaries of the American-led global order, giving the United States and allied militaries time to adapt.
America and its allies have an overwhelming interest in securing dominance in military AI and protecting international stability. The status quo puts both objectives in dire peril. The policies outlined above would not, on their own, safeguard that dominance and stability, but they would represent a massive step in the right direction.
Anthony De Luca-Baratta is a Google Public Policy Fellow at the Aspen Institute, where his work focuses on AI governance. He worked as an intern at the Center for the National Interest during the summer of 2024, where his research centered on technology and defense policy. He is an MA International Relations student and Public Service Fellow at the Johns Hopkins School of Advanced International Studies (SAIS), where he focuses on AI governance, national security, and American grand strategy.
Josh Curtis is a member of Foreign Policy for America's NextGen Initiative and a Public Service Fellow and Master's candidate at the Johns Hopkins University School of Advanced International Studies, whose research focuses on science and tech diplomacy, AI safety, and international security. Before coming to SAIS, he served as Special Assistant for Policy & Strategy at the National Endowment for Democracy, where he helped shape the organization's emerging tech and digital democracy grantmaking strategies.
Image: Parilov / Shutterstock.com.