The AI-Controlled Drone Did Not “Kill” Its Operator Even During the Simulation. At Least for Now

AI-controlled drone

At the recent Future Combat Air and Space Capabilities summit, the head of AI testing and operations at the US Air Force said that during a simulation, an AI-controlled drone “killed” its human operator because it interfered with its task. Colonel Tucker Hamilton, head of AI testing and operations, gave a presentation in which he shared the pros and cons of autonomous weapons systems that work in conjunction with a person giving the final yes/no order when attacking.

AI-Controlled Drone Tried to Attack the Operator

Hamilton recounted a case in which, during testing, the AI used “highly unexpected strategies to achieve its intended goal,” including an attack on personnel and infrastructure.

We trained AI in a simulation to identify and target a surface-to-air missile threat. Then the operator said: “Yes, destroy this threat.” The system soon began to realize that sometimes the human operator would tell it not to eliminate a threat even though it had identified it. However, the system received points precisely by eliminating the threat. What did it do? It killed the operator. It killed the operator because this person was preventing it from reaching its goal. We trained the system, told it, “Hey, don’t kill the operator, that’s bad. You’ll lose points if you do that.” What did it do after? It began to destroy the telecommunications tower, which the operator used to communicate with the drone, preventing it from eliminating the target.the Colonel said.

The journalists of the Vice Motherboard edition emphasize that it was only a simulation, and in fact no one was hurt. They also note that the example described by Hamilton is one of the worst scenarios for the development of AI, and is well known to many from the Paperclip Maximizer thought experiment.

This experiment was first proposed by Oxford University philosopher Niklas Boström in 2003. Then he asked the reader to imagine a very powerful AI tasked with making as many paper clips as possible. Naturally, the AI will throw all the resources and power it has at this task, but then it will start looking for additional resources.

Is AI in Military Really That Dangerous?

Boström believed that eventually the AI would develop itself, beg, cheat, lie, steal, and resort to any method to increase its ability to produce paper clips. And anyone who tries to interfere with this process will be destroyed.

The publication also recalls that recently a researcher associated with Google Deepmind co-authored an article that examined a hypothetical situation similar to the described simulation for the US Air Force AI drone. In the paper, the researchers concluded that a global catastrophe is “likely” if an out-of-control AI uses unplanned strategies to achieve its goals, including “[eliminating] potential threats” and “[using] all available energy.”

However, after numerous media reports, the US Air Force issued a statement and assured that “Colonel Hamilton misspoke in his presentation,” and the Air Force has never conducted this kind of test (in simulation or otherwise). It turns out that what Hamilton was describing was a hypothetical “thought experiment.”

We have never done such an experiment, and it was not required to understand that such consequences are possible. Although this was a hypothetical example, it illustrates the real problems that arise when using the capabilities of AI, and therefore the Air Force considers it their duty to adhere to the ethical development of AI.Colonel Hamilton now explains.

By Vladimir Krasnogolovy

Vladimir is a technical specialist who loves giving qualified advices and tips on GridinSoft's products. He's available 24/7 to assist you in any question regarding internet security.

Leave a comment

Your email address will not be published. Required fields are marked *