Joint Chiefs vice chairman: Humans must address AI technology issues

A drone | Contributed photo

Now is the time to discuss how new technologies can alter ethics and warfare, Air Force Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, said late last week at a Brookings Institution event.

Selva said the world must grapple with and prepare for the “Terminator Conundrum,” which is how he describes the ethics of dealing with artificial intelligence that can pose physical danger to humans.

“You and I are on the cusp of being able to own cars that will drive themselves,” Selva said. “The technology exists today. Some of us in this room own cars that park themselves, and they do a substantially better job than we do.”

Despite the advances in self-driving cars, drone warfare and unmanned underwater, aerial and ground vehicles, a human is still at the controls: these remotely piloted vehicles have operators elsewhere. Someday, however, the technology will become autonomous.

“We can actually build autonomous vehicles in every one of those categories, and that brings us to the cusp of questions about whether we are willing to have unmanned, autonomous systems that can launch on an enemy,” Selva said.

That question demands ethical consideration beyond anything the laws of war have yet had to address, which is the heart of the “Terminator Conundrum,” Selva said.

“What happens when that ‘thing’ can inflict mortal harm and is powered by artificial intelligence?” Selva said.

Control is still in the hands of humans, for now, but many of these systems can “learn.” For example, Selva noted that certain sea mines are programmed to detonate when they detect a particular acoustic signature, but humans still write that code.

Coding that helps machines learn in this way falls under what Selva called the Deep Learning Concept, which the Defense Department is now examining.

“If we can build a set of algorithms that allows a machine to learn what’s going on in that space and then highlight what is different, it could change the way we predict the weather, it could change the way we plant crops, and it could most certainly change the way we do change-detection in a lethal battle space,” Selva said. “What has changed? What is different? What do we need to address?”
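In software terms, the change-detection Selva describes amounts to comparing two snapshots of a scene and flagging what differs. Real systems learn those comparisons from data; the minimal hand-written sketch below, in which the grids, threshold, and function name are all illustrative assumptions rather than anything Selva or the Defense Department described, only shows the basic idea:

```python
# Toy sketch of change detection: compare two snapshots of a scene
# (represented here as simple numeric grids) and flag the cells whose
# values differ beyond a threshold. All names and values are
# hypothetical illustrations, not an actual defense system.

def detect_changes(before, after, threshold=0.5):
    """Return (row, col) coordinates where the scene changed."""
    changes = []
    for r, (row_before, row_after) in enumerate(zip(before, after)):
        for c, (a, b) in enumerate(zip(row_before, row_after)):
            if abs(a - b) > threshold:
                changes.append((r, c))
    return changes

before = [
    [0.1, 0.2, 0.1],
    [0.0, 0.9, 0.1],
]
after = [
    [0.1, 0.2, 0.8],   # something new appears at (0, 2)
    [0.0, 0.1, 0.1],   # the object at (1, 1) disappears
]

print(detect_changes(before, after))  # → [(0, 2), (1, 1)]
```

A learned system would replace the fixed threshold with a model trained on what “normal” looks like, so it could answer Selva’s questions: what has changed, and what needs to be addressed.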

Selva said these learning algorithms are the wave of the future and will forever alter technology and ethics.

“The Deep Learning Concept of teaching coherent machines … to advise humans and make them our partners has huge consequences,” Selva said.
