How Does the Army Plan to Keep AI Weapons Safe and With Humans Still in Control?

July 23, 2020 Topic: Security Region: Americas Blog Brand: The Buzz Tags: AI, Robots, Future War, AI Weapons, Humans, U.S. Army Futures Command


Here's what AI weapons can (and cannot) do.


What if an artificial intelligence-enabled combat system goes too far, expediting and completing a decision-making cycle beyond what a human operator intended? Can specific boundaries or safeguards be engineered into a system to define and set the parameters of possible computerized decisions based on algorithmic calculations?

U.S. Army weapons developers are carefully considering these nuances as the service quickly carves out the best warfare applications for fast-increasing levels of AI-empowered autonomous decision-making. Perhaps a human, operating in a crucial command-and-control capacity, will need to ensure the machines adhere to the necessary safeguards and standards.


“If I direct a machine to do something, what happens when it starts making decisions on its own? How have I properly told it what I want it to do and what is acceptable and what isn’t?... A lot of time people who are desirous of AI behavior patterns have not fully understood that there are a lot of boundaries you have to make sure you pay attention to,” Dr. Bruce Jette, Assistant Secretary of the Army for Acquisition, Logistics and Technology, told TNI in a recent interview.

Of course, Pentagon doctrine specifies that any use of lethal offensive force must be authorized by a human decision-maker. Yet short of pre-programmed limits, such as preventing a weapon from actually firing, are there certain decision-making processes that should not be left solely to a computer?
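To make that idea concrete, here is a minimal, purely hypothetical sketch in Python of how an engineered decision boundary and a human-authorization gate might be expressed in software. The class names, decision categories, and structure are illustrative assumptions for the sake of discussion, not an actual Army or Pentagon implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    TRACK = auto()       # autonomous: sensor fusion and target tracking
    RECOMMEND = auto()   # autonomous: propose an engagement to the operator
    ENGAGE = auto()      # never autonomous: requires human authorization


@dataclass
class EngagementRequest:
    target_id: str
    confidence: float          # algorithmic confidence in the target classification
    human_authorized: bool = False


# Engineered boundary: the only decisions the software may complete on its own.
AUTONOMOUS_DECISIONS = {Decision.TRACK, Decision.RECOMMEND}


def execute(decision: Decision, request: EngagementRequest) -> str:
    """Complete a decision only if it falls inside the pre-set boundaries."""
    if decision in AUTONOMOUS_DECISIONS:
        return f"{decision.name} completed autonomously for {request.target_id}"
    # Hard stop: lethal action waits for an explicit human decision-maker.
    if not request.human_authorized:
        return f"{decision.name} blocked: awaiting human authorization"
    return f"{decision.name} executed with human authorization"


if __name__ == "__main__":
    req = EngagementRequest(target_id="T-042", confidence=0.97)
    print(execute(Decision.RECOMMEND, req))  # allowed autonomously
    print(execute(Decision.ENGAGE, req))     # blocked until a human signs off
```

The point of a sketch like this is that the boundary is structural rather than probabilistic: no algorithmic confidence score, however high, can move a decision out of the human-authorization category.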

“We are trying to develop an interactive approach to AI. How do I get soldiers involved and how do I understand what the implications are? How do I link that into a development program when it comes to aiming a gun?” he said. 

Some of these nuances explain precisely why Army leaders and senior weapons developers say the best use of AI involves a human-machine interface, a kind of collaborative teaming between the two. While computer automation and advanced, AI-specific algorithms can perform certain crucial combat functions exponentially faster and more efficiently than humans, there are still many key decisions that need to be made by a person.

Human cognition is itself an extremely complex, unique and unprecedented phenomenon, not easily mirrored or replicated by even the most advanced machines. Procedural functions, such as data gathering, data organization and essential analytical processing, can be performed exponentially faster by machines than by humans, vastly improving situational awareness and enabling potentially life-saving calculations across a host of otherwise too-complicated, interwoven variables.
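As a rough illustration of that division of labor, the sketch below (again hypothetical, with invented sensor fields and an invented scoring heuristic) shows the machine rapidly aggregating and ranking sensor tracks while the final judgment call stays with a person.

```python
from dataclasses import dataclass


@dataclass
class SensorTrack:
    track_id: str
    range_km: float
    closing_speed_mps: float
    classification_confidence: float


def prioritize(tracks: list[SensorTrack]) -> list[SensorTrack]:
    """Machine's job: fuse and rank large numbers of tracks in milliseconds.

    The score below is an invented heuristic: closer, faster, more
    confidently classified tracks rank higher.
    """
    def threat_score(t: SensorTrack) -> float:
        return (t.closing_speed_mps / max(t.range_km, 0.1)) * t.classification_confidence

    return sorted(tracks, key=threat_score, reverse=True)


tracks = [
    SensorTrack("A1", range_km=12.0, closing_speed_mps=300.0, classification_confidence=0.9),
    SensorTrack("B2", range_km=3.5, closing_speed_mps=250.0, classification_confidence=0.7),
    SensorTrack("C3", range_km=40.0, closing_speed_mps=900.0, classification_confidence=0.95),
]

# Human's job: review the machine-ranked picture and make the judgment call.
for track in prioritize(tracks):
    print(f"Review {track.track_id}: operator decides whether to act")
```

In this framing, the machine compresses the time spent organizing data, while the intuition-dependent decision about what to do with the ranked picture remains human.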

While Jette and other senior weapons developers are clear that proper applications of this kind of technology, drawing upon AI-enabled real-time analytics, bring new, paradigm-shifting dimensions to warfare, there is also consensus that there are, without question, faculties of human intuition and problem-solving that simply cannot be replicated in any kind of machine. At least… not yet.

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army (Acquisition, Logistics & Technology). Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master's degree in comparative literature from Columbia University.

Image: Reuters