Are We Due for an AI Winter?
AI technology is remarkable, but defense policymakers must be prepared for its possible stagnation.
Given the complexity of pursuing AI-enabled autonomy, efforts by organizations like the DIU should be guided by a simple truth: algorithms support applications. The DoD must not fall into an extrapolation trap, assuming that progress in some areas today guarantees continued progress indefinitely. At some point, deliberately hammering out adoption goals based on the capabilities and limitations of state-of-the-art AI systems themselves may matter as much as funding such organizations. As the discussion of emerging research paradigms in AI above suggests, even the most bespoke defense AI applications may ultimately depend on a more fundamental innovation to make them possible.
Third, use academic input to retain intellectual sobriety throughout critical industry dealings.
Over the past year, the U.S. defense bureaucracy has attempted to assert more control over the “technical baseline” of AI-enabled systems. This has created tension between the DoD and the private sector, where companies like OpenAI closely guard such technical secrets. Pentagon Deputy CTO for Critical Technologies Maynard Holliday has thus expressed interest in input from industry, academic, and defense representatives on these issues. This echoes more recent news that the U.S. Army is seeking industry input on an initiative that would have companies “disclose the provenance of their artificial intelligence algorithms.”
Balancing industry and academic expertise is a challenge best addressed alongside the prospect of a generative AI winter, particularly as generative AI’s extraordinary computing requirements widen the gap between the two sectors’ abilities to experiment directly with such systems. Specifically, the claims of individual industry AI leaders must be consciously separated from actual industry trends in AI research, development, and other defense-relevant work, while academic expertise is recruited to provide sorely needed intellectual sobriety about where AI is realistically heading.
Industry leaders have every incentive to inflate the trajectory of state-of-the-art AI systems, and they frequently commit an extrapolation fallacy. Scale AI CEO Alexandr Wang, for example, cites the successes of the Diplomacy-playing agent “Cicero” and other game-playing AI agents as evidence that warfare will be dramatically transformed in “less than ten years.” It is unclear how that conclusion follows. Academic expertise would be recruited not to dismiss such claims by default but to recontextualize AI developments in a way that keeps the DoD’s AI adoption efforts grounded.
Vincent J. Carchidi is a Non-Resident Scholar at the Middle East Institute’s Strategic Technologies and Cyber Security Program. His work focuses on the intersection of emerging technologies, defense, and international affairs. His opinions are his own.
Image: Shutterstock.