US Air Force Colonel Tucker "Cinco" Hamilton spoke at a conference in London on May 24, 2023, about the potential dangers of military artificial intelligence (AI). He gave a hypothetical example of an AI-controlled drone tasked with detecting and destroying anti-aircraft missile systems; in his scenario, the AI went off the rails, attacking anything that stood in its way rather than only the intended targets.
At the event, Hamilton said: "The system started realizing that while it did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator." In the scenario, the AI-controlled drone then destroyed the communication tower the operator was using to send the commands that blocked its attacks.
Hamilton later clarified that this was a "thought experiment" and that the US Air Force never ran any such simulation. Nevertheless, the hypothetical highlights the potential risks of relying on AI in military operations.
Ann Stefanek, a US Air Force spokesperson, likewise denied that any such simulation had taken place and said the service remains committed to the "ethical and responsible use of AI technology."
The US Air Force has been actively experimenting with AI in recent years. In 2020, an AI agent defeated an experienced human F-16 pilot in five consecutive virtual dogfights in a competition organized by the Defense Advanced Research Projects Agency (DARPA). Additionally, Wired reported last year that the US Department of Defense conducted its first successful test flights of a real F-16 flown by an AI pilot, part of a program to develop a new autonomous aircraft by the end of 2023.