Morality Poses the Biggest Risk to Military Integration of Artificial Intelligence

February 5, 2021 | Topic: Military | Region: Americas | Blog Brand: The Buzz | Tags: Artificial Intelligence, Morality, OODA Loop, Autonomous Weapons

Waiting to act on AI integration into our weapons systems puts us behind the technological curve required to effectively compete with our foes.


Finding an effective balance between humans and artificial intelligence (AI) in defense systems will be the sticking point for any policy that moves away from “humans in the loop.” Within this balance, we must accept some deviations when considering concepts such as the kill chain. What would a progression of policy look like within a defense application? Addressing the political, technological, and legal boundaries of AI integration would allow the benefits of AI, notably speed, to be incorporated into the kill chain. Former Secretary of Defense Ash Carter recently stated, “We all accept that bad things can happen with machinery. What we don’t accept is when it happens amorally.” Certainly, humans will retain override authority and accountability without exception. Leaders will be forever bound by the actions of AI-guided weapon systems, perhaps no differently than they are responsible for the actions of a service member in combat, upholding ethical standards that AI has yet to grasp.

The future of weapon systems will include AI guiding the selection of targets, gathering and processing information, and, ultimately, delivering force as necessary. Domination on the battlefield will not come through traditional means; rather, conflicts will be dominated by competing AI algorithms. The normal human-dominated decisionmaking process does make allowances for AI, but not in a meaningful way: at no point does artificial intelligence play a significant role in the actual decisions that determine lethal action. Clearly, the capability and technology supporting integration have far surpassed the tolerance of our elected officials. We must build confidence with them and with the general public through a couple of fundamental steps.


First, information gathering and processing can be handled primarily by the AI with little to no friction from officials. This integration, although not significant in added capability from a research and development (R&D) perspective, will build confidence and can be completed quickly. Developing elementary protocols for the AI to follow on individual systems such as turrets, easy at first and then slowly increasing in difficulty, would allow the technology to progress from an R&D standpoint while incrementally building confidence and trust. Adding recognition software to the weapon system would allow specific targets, be they civilians or terrorists, to be identified, prioritized, and presented to the commander for action. Once a system functions confidently within a set of defined parameters, the number of systems can be increased for overlapping coverage. A human can sit at the intersection of all the data in a command center, supervising these systems through a battle management system: effectively a human “on” the loop, able to stop any engagement as required or to limit the AI’s role based on individual or mission tolerance.
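To make that supervisory arrangement concrete, the sketch below shows, in Python, how prioritized recognition output might be held for a human veto before release to a commander. It is a minimal illustration under stated assumptions, not any real battle-management API: the Track fields, the 0.9 confidence floor, the five-second veto window, and the helper names (prioritize, supervise, await_override) are all hypothetical.

```python
# Hypothetical human-on-the-loop supervision sketch; none of these names
# correspond to a real battle-management system.
import heapq
import time
from dataclasses import dataclass, field
from enum import Enum


class Classification(Enum):
    CIVILIAN = "civilian"
    UNKNOWN = "unknown"
    HOSTILE = "hostile"


@dataclass(order=True)
class Track:
    priority: float                           # lower value sorts (engages) first
    track_id: str = field(compare=False)
    classification: Classification = field(compare=False)
    confidence: float = field(compare=False)  # 0.0-1.0, from recognition software


def prioritize(tracks, confidence_floor=0.9):
    """AI side: keep only confidently hostile tracks, ranked by priority."""
    queue = [t for t in tracks
             if t.classification is Classification.HOSTILE
             and t.confidence >= confidence_floor]
    heapq.heapify(queue)
    return queue


def await_override(track, window_s):
    """Stand-in for a supervisor console prompt; here it times out with no veto."""
    time.sleep(window_s)
    return "release"


def supervise(queue, veto_window_s=5.0, halt_all=lambda: False):
    """Human side: hold each recommendation for a veto window before release.

    The supervisor can halt everything (halt_all) or veto a single track;
    absent intervention, the recommendation goes to the commander for action.
    """
    while queue and not halt_all():
        candidate = heapq.heappop(queue)
        print(f"RECOMMEND {candidate.track_id} "
              f"(confidence {candidate.confidence:.2f}), "
              f"veto window {veto_window_s:.0f}s")
        if await_override(candidate, veto_window_s) == "veto":
            print(f"VETOED {candidate.track_id}; track dropped")
        else:
            print(f"RELEASED {candidate.track_id} to commander")


if __name__ == "__main__":
    tracks = [
        Track(priority=2.0, track_id="T-101",
              classification=Classification.HOSTILE, confidence=0.95),
        Track(priority=1.0, track_id="T-102",
              classification=Classification.CIVILIAN, confidence=0.99),
        Track(priority=1.5, track_id="T-103",
              classification=Classification.HOSTILE, confidence=0.92),
    ]
    supervise(prioritize(tracks), veto_window_s=1.0)
```

The telling design choice is the default on timeout: release, not hold. That default is what makes the human “on” rather than “in” the loop; requiring explicit approval before release would flip the same structure back to human-in-the-loop, which is exactly the tolerance knob described above.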

This process must not be encapsulated solely within an R&D environment. Rather, it must be transparent to the public and elected officials alike, who must know of it and accept it. Yes, these steps seem elementary; however, they are not being taken. Focus has concentrated on capability development without similar concern for the associated policy development, when both must progress together. Small, concrete steps with sound policy and oversight are crucial. Without such an understanding, decisionmakers cannot in good conscience approve, and will instead default to the safe and easy answer: “no.” Waiting to act on AI integration into our weapons systems puts us behind the technological curve required to effectively compete with our foes. It would be foolish to believe that our adversaries and their R&D programs are being held up on AI integration by moral and public-support requirements; the Chinese call it “intelligentized” war and have invested heavily. Having humans “on” the loop during successful testing and fielding will be the bridge to the additional AI authorities and public support necessary for the United States to continue developing these technologies as future warfare will demand.

John Austerman is an experienced advisor to senior military and civilian leaders focusing on armaments policy, primarily within research and development. He has experience with more than fifty countries and the Levant, including hostile-fire areas and war zones.

Image: Reuters.