Can AI Be Used Responsibly? Lloyd Austin Seems to Think So
The Pentagon maintains that it doesn't have to follow China's path on AI.
Here's What You Need to Remember: Austin was clear that the Pentagon will continue its adherence to what he called defining principles of “Responsible AI.”
While laying out a new set of core principles for “Responsible AI,” Secretary of Defense Lloyd Austin expressed grave concern that China is pursuing a vastly different, and far less restrained, approach to the technology.
Speaking at the Global Emerging Technology Summit of the National Security Commission on Artificial Intelligence, Austin warned that China hopes to dominate global AI by 2030. He was also clear that Chinese leaders view the development and application of AI in a far more aggressive, and arguably unethical, way.
“Beijing already talks about using AI for a range of missions, from surveillance to cyberattacks to autonomous weapons. In the AI realm as in many others, we understand that China is our pacing challenge,” Austin said at the event, according to a Pentagon transcript.
The most immediate concern is Austin’s reference to AI-enabled autonomous weapons, given that China most likely does not adhere to the ethical guidelines that are fundamental to U.S. defense policy. Despite rapid technological progress that increasingly makes it possible for platforms to find, track, and destroy enemy targets without human intervention, the Pentagon is holding firm to its existing doctrine that any decision to use lethal force must be made by a human.
The technical capability, however, already runs well ahead of that doctrine: AI-empowered sensors can autonomously find targets, route otherwise disparate pools of data to a central database, and make instant determinations about target specifics. Extending this cycle, armed platforms are increasingly able to take the process one step further and fire upon or destroy a target without any human intervention at all.
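To make that policy boundary concrete, here is a minimal, purely illustrative sketch in Python of a human-in-the-loop gate sitting between autonomous target identification and any engagement decision. Every name, data structure, and threshold below is hypothetical; nothing here is drawn from actual Pentagon software. It only shows where, in such a pipeline, the mandatory human decision would sit.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    APPROVE = auto()
    REJECT = auto()

@dataclass
class TargetTrack:
    """Fused sensor data on one candidate target (hypothetical structure)."""
    track_id: str
    confidence: float               # autonomous classifier's confidence, 0.0-1.0
    location: tuple[float, float]   # latitude, longitude

def request_human_authorization(track: TargetTrack) -> Decision:
    """Placeholder for the mandatory human review step.

    Under the doctrine described above, the lethal-force decision is made
    by a person, so this step blocks on an operator rather than a model.
    """
    answer = input(f"Engage track {track.track_id} "
                   f"(confidence {track.confidence:.0%})? [y/N] ")
    return Decision.APPROVE if answer.strip().lower() == "y" else Decision.REJECT

def engagement_loop(tracks: list[TargetTrack]) -> None:
    for track in tracks:
        # Sensing, data fusion, and identification can run autonomously...
        if track.confidence < 0.9:
            continue  # below threshold: keep tracking, never escalate
        # ...but the fire decision itself is gated on a human operator.
        if request_human_authorization(track) is Decision.APPROVE:
            print(f"Operator approved engagement of {track.track_id}")
        else:
            print(f"Engagement of {track.track_id} withheld")
```

The design point is that autonomy in sensing and fusion is separable from autonomy in the fire decision; the doctrinal question the article raises is whether an adversary would keep that final gate at all.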
U.S. weapons developers are understandably concerned that Chinese military and political leaders will not constrain their use of AI within any ethical parameters, a scenario that significantly increases the risk to U.S. forces and other U.S. assets.
Nonetheless, Austin was clear that the Pentagon will continue its adherence to what he called defining principles of “Responsible AI.”
“Our development, deployment, and use of AI must always be responsible, equitable, traceable, reliable, and governable,” Austin said. “We’re going to use AI for clearly defined purposes. We’re not going to put up with unintended bias from AI. We’re going to watch out for unintended consequences.”
Austin added that AI-oriented weapons developers will keep a close eye on how the technology is evolving, maturing, and being applied.
“We’re going to immediately adjust, improve, or even disable AI systems that aren’t behaving the way that we intend,” Austin said.
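Austin’s “governable” requirement maps loosely onto a familiar engineering pattern: a runtime monitor that can shut off a model whose outputs drift outside expected bounds. The sketch below is purely illustrative, with hypothetical names and thresholds, and is not drawn from any DoD system; it simply shows one way “disable AI systems that aren’t behaving the way that we intend” could be expressed in code.

```python
class GovernedModel:
    """Wraps a predictor with a simple behavioral circuit breaker.

    Illustrative only: the anomaly threshold and the notion of
    "in bounds" are hypothetical stand-ins for real monitoring criteria.
    """

    def __init__(self, predict_fn, max_anomalies=3):
        self._predict = predict_fn
        self._max_anomalies = max_anomalies
        self._anomaly_count = 0
        self.disabled = False

    def predict(self, x, in_bounds):
        """Return a prediction, or None once the model has been disabled.

        `in_bounds` is a caller-supplied check on acceptable outputs.
        """
        if self.disabled:
            return None
        y = self._predict(x)
        if not in_bounds(y):
            self._anomaly_count += 1
            if self._anomaly_count >= self._max_anomalies:
                self.disabled = True  # disable a system not behaving as intended
                return None
        return y
```

For example, wrapping a classifier as `GovernedModel(model.predict)` and calling `predict(x, in_bounds=lambda y: 0.0 <= y <= 1.0)` would halt the system after three out-of-range outputs, forcing a human to adjust or improve it before it runs again.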
When it comes to nonlethal applications of AI, however, there are ongoing discussions about whether greater autonomy might be acceptable for purely defensive purposes, such as interceptor missiles or drone defenses.
Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also holds a master's degree in comparative literature from Columbia University.
This article is being republished due to reader interest.
Image: Reuters