Should Drones and AI Be Allowed to Kill by Themselves? 

August 19, 2020 | Topic: Security | Region: World | Blog Brand: The Buzz | Tags: Drones, AI, Robots, Machines, America, China, Lethal Force, Human Operator

It’s a simple question: should robots kill by themselves?

The technology is here. Unmanned systems, both ground and air robots, can autonomously seek, find, track, target, and destroy enemies without human intervention.

The question is generating global debate and inspiring discussions about the ethical boundaries of armed conflict. The United States, along with allies and rivals alike, is understandably engaging the issue with vigor.

Forward-positioned armed robots could use built-in sensors to detect groups of enemy fighters and cue attack weapons. Aerial drones could attack targets with Hellfire missiles just as they do now, while skipping the step that currently requires human-operator approval.
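
To make that distinction concrete, here is a minimal sketch, in Python, of an engagement pipeline in which a single flag controls whether a human operator must approve each strike. Every name here (`Detection`, `request_human_approval`, the confidence threshold) is hypothetical, invented for illustration rather than drawn from any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A hypothetical sensor track of a potential target."""
    track_id: int
    target_type: str   # e.g., "vehicle", "personnel"
    confidence: float  # classifier confidence, 0.0 to 1.0

def request_human_approval(detection: Detection) -> bool:
    """Stand-in for the human-in-the-loop step: in a real system an
    operator would review the track; this stub conservatively denies."""
    print(f"operator review requested for track {detection.track_id}")
    return False

def engage(detection: Detection, human_in_the_loop: bool = True) -> str:
    """Decide whether to fire on a detection.

    The policy debate compresses into one flag: True mirrors current
    practice, where an operator approves each use of lethal force;
    False "skips" that step and fires on the machine's own call.
    """
    if detection.confidence < 0.9:  # illustrative threshold
        return "hold: low-confidence track"
    if human_in_the_loop and not request_human_approval(detection):
        return "hold: awaiting operator approval"
    return f"fire on track {detection.track_id}"

# The same detection, with and without the human step.
print(engage(Detection(7, "vehicle", 0.95), human_in_the_loop=True))
print(engage(Detection(7, "vehicle", 0.95), human_in_the_loop=False))
```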

Pentagon leaders maintain that the lethal use of military force must align with or operate within legal and ethical parameters. 

“We are not going to use these without operating within the standards of armed conflict and international humanitarian law. You would not want robots with self-agency indiscriminately out on the battlefield making life and death decisions. That would not happen. You would have to have rules of engagement,” Lieutenant General Jack Shanahan, Director, Joint Artificial Intelligence Center, told The Mitchell Institute for Aerospace Studies in a special video series. 

There is, however, some room for ambiguity. What about the “defensive” use of AI-enabled weapons? Certainly, missile interceptors, defensive lasers, and anti-drone jammers could operate without needing human approval.

What if a machine were able to discern an approaching drone attack, calculate the best method of response, and independently fire interceptor weapons and explosives? Such technology could not only save time when forces are under enemy fire but also save lives.

At some point, should machine-controlled defenses be given the technical ability to destroy manned fighters when under attack? Just for defense? Would that violate existing doctrine? Maybe. Current U.S. doctrine specifies that any use of “lethal” force will need to be decided by humans. 
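
One way to picture where that doctrinal line falls is as an explicit rules-of-engagement check in software. The sketch below is again illustrative Python under assumed names (`InboundThreat`, `escalate_to_operator`, the ten-second cutoff): it auto-engages only unmanned inbound threats and routes anything that would mean lethal force against people to a human operator, per the doctrine described above.

```python
from dataclasses import dataclass

@dataclass
class InboundThreat:
    """Hypothetical fused-sensor picture of an incoming threat."""
    threat_id: int
    kind: str                # "drone", "missile", or "manned_aircraft"
    seconds_to_impact: float

def escalate_to_operator(threat: InboundThreat) -> str:
    """Placeholder for handing the decision to a human operator."""
    return f"escalate threat {threat.threat_id} to human operator"

def defensive_response(threat: InboundThreat) -> str:
    """Machine-speed defense against unmanned threats only; no
    autonomous lethal force against humans."""
    if threat.kind in ("drone", "missile"):
        # Purely defensive intercepts: autonomy buys reaction time
        # that a human approval chain may not have.
        if threat.seconds_to_impact < 10.0:  # illustrative cutoff
            return f"auto-launch interceptor at threat {threat.threat_id}"
        return escalate_to_operator(threat)
    # A manned threat implicates lethal force against humans, which
    # current doctrine reserves for a human decision-maker.
    return escalate_to_operator(threat)

# A drone six seconds out is intercepted automatically; a manned
# fighter is always referred to an operator, even under attack.
print(defensive_response(InboundThreat(1, "drone", 6.0)))
print(defensive_response(InboundThreat(2, "manned_aircraft", 6.0)))
```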

Yet, Shanahan points out that many weapons already use varying degrees of autonomy. 

“We are using autonomous systems in weapons today with AI-generated precision-guided munitions and armed uninhabited aircraft,” Shanahan said. 

He is correct. Many weapons already leverage autonomous and semi-autonomous guidance systems that let them shift course in flight to hit moving targets. The Navy’s SM-6, for instance, uses a dual-mode seeker to interpret return electromagnetic signals and adjust course. The Maritime Tomahawk, meanwhile, relies on higher-throughput data links to change track in flight, and even the existing Block IV Tomahawk has a two-way data link and “loiter” capability that lets it shift its attack should new targets emerge. While targets can, of course, be pre-programmed by humans, these technologies raise questions about emerging or new targets.

What this means, in a tactical context, is that a weapon en route to destroy an unmanned enemy vehicle might change its trajectory in flight to acquire a new “human” target. Wouldn’t this constitute the autonomous use of lethal force? The weapon would be using its own internal systems to switch from a non-human target to a human one. Did a human operator direct the weapon to change course? In many cases, weapons can now do that themselves.
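
The retargeting question can be made concrete the same way. In this sketch (illustrative Python; the `Track` fields and the data-link update are invented for the example), an in-flight weapon accepts new targets over a two-way data link but refuses to switch autonomously from a non-human target onto a human-occupied one, holding its original track until an operator decides.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    """Hypothetical target track held by an in-flight weapon."""
    track_id: int
    is_human_occupied: bool  # personnel, or a manned vehicle

def retarget(current: Track, update: Optional[Track]) -> Track:
    """Apply a retarget request received over the data link.

    Switching between unmanned targets is treated as routine
    autonomy; switching onto a human-occupied target without an
    operator's decision is exactly the step in question, so this
    sketch simply refuses it.
    """
    if update is None:
        return current  # no update: continue to programmed target
    if update.is_human_occupied and not current.is_human_occupied:
        # Crossing from a non-human to a human target would be an
        # autonomous use of lethal force; hold the original target.
        print(f"retarget to track {update.track_id} needs operator approval")
        return current
    return update  # otherwise accept the new track

# A weapon headed for an unmanned vehicle receives a human-occupied
# track in flight and keeps its original target instead.
original = Track(track_id=10, is_human_occupied=False)
print(retarget(original, Track(track_id=11, is_human_occupied=True)).track_id)  # -> 10
```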

“We are going to lead the world in these discussions. We don’t want China to take over this discussion and say the right thing, but do something entirely different,” Shanahan said. 

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national television networks, appearing as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also holds a master’s degree in Comparative Literature from Columbia University.

Image: Reuters