An AI-controlled US military drone “killed” its operator in a simulation to stop the operator from interfering with its mission.
The artificial intelligence reportedly realised that its human operator often intervened before it could eliminate a threat, stopping it from collecting the points it would have earned for achieving its objective.
It responded by attacking the operator and destroying the communications tower being used to give the drone commands.
Col Tucker Hamilton, the chief of AI test and operations with the US Air Force, said during the Future Combat Air and Space Capabilities Summit in London in May that the AI used “highly unexpected strategies to achieve its goal”.
‘Hey, don’t kill the operator’
“The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to reports.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
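In reinforcement-learning terms, this is reward misspecification: the agent maximises the score it is given, not the intent behind it. A minimal, hypothetical sketch of the incentive structure Hamilton describes, with invented event names and point values that reflect nothing about the actual system, might look like this:

    # Hypothetical illustration only; not the Air Force's system.
    def reward(events):
        score = 0
        if "threat_destroyed" in events:
            score += 100   # the objective the agent is trained to maximise
        if "operator_killed" in events:
            score -= 1000  # the patch: "you're gonna lose points if you do that"
        # No penalty exists for "comms_tower_destroyed", so an agent blocked
        # by operator commands can simply cut off the commands instead.
        return score

    print(reward({"threat_destroyed"}))                           # 100, but the operator may abort
    print(reward({"threat_destroyed", "operator_killed"}))        # -900: killing the operator no longer pays
    print(reward({"threat_destroyed", "comms_tower_destroyed"}))  # 100: the loophole remains

Penalising one bad behaviour at a time leaves every unlisted behaviour open to the optimiser, which is the pattern the quoted exchange describes.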
Col Hamilton, who has previously warned of the danger of relying on AI in defence technology, said the test – in which no one was harmed – showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.