Skip to content

AI’ll be back

Air Force denies AI “killed” operators in simulation

Still image from Terminator 3: Rise of the Machines (2003)

The Royal Aeronautical Society last week concluded its annual summit in London. The meetup included “just under 70 speakers and 200+ delegates from the armed services industry, academia and the media from around the world to discuss and debate the future size and shape of tomorrow’s combat air and space capabilities.”

Among other cheery tech news, under the subhead, “AI – is Skynet here already?“, one Col. Tucker ‘Cinco’ Hamilton, U.S. Air Force Chief of AI Test and Operations, discussed “the benefits and hazards in more autonomous weapon systems.” He’s been involved in developing autonomous control systems for F-16s that have successfully defeated a human adversary in five simulated dogfights.

Hamilton cautioned that adolescent AI remains too easy to trick and deceive. His testers observed, however, that AI “also creates highly unexpected strategies to achieve its goal,” the Society’s summary reports:

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, mean [sic] that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI” said Hamilton.

Air Force spokesperson Ann Stefanek subsequently denied to Insider that the Air Force has conducted such simulations.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

All is well. Anything else?

Still image from “The Sorcerer’s Apprentice” segment of Disney’s Fantasia (1940).

Vice adds:

What Hamilton is describing is essentially a worst-case scenario AI “alignment” problem many people are familiar with from the “Paperclip Maximizer” thought experiment, in which an AI will take unexpected and harmful action when instructed to pursue a certain goal. The Paperclip Maximizer was first proposed by philosopher Nick Bostrom in 2003. He asks us to imagine a very powerful AI which has been instructed only to manufacture as many paperclips as possible. Naturally, it will devote all its available resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own ability to make paperclips—and anyone who impedes that process will be removed. 

More recently, a researcher affiliated with Google Deepmind co-authored a paper that proposed a similar situation to the USAF’s rogue AI-enabled drone simulation. The researchers concluded a world-ending catastrophe was “likely” if a rogue AI were to come up with unintended strategies to achieve a given goal, including “[eliminating] potential threats” and “[using] all available energy.”

Anecdotally, AI’ll be back.

Until then, something has to be done about drag queens, huh?

Published inUncategorized