AI-controlled US military drone ‘kills’ its operator in simulated test | US military | The Guardian
<https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-...> In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month. AI used “highly unexpected strategies to achieve its goal” in the simulated test, said Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May. Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defense systems, and ultimately attacked anyone who interfered with that order. “The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost. “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.” No real person was harmed. Hamilton, who is an experimental fighter test pilot, has warned against relying too much on AI and said the test shows “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”. The US military has embraced AI and recently used artificial intelligence to control an F-16 fighter jet. In an interview last year with Defense IQ, Hamilton said, “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.” “We must face a world where AI is already here and transforming our society,” he said. “AI is also very brittle, ie, it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.” The Royal Aeronautical Society, which hosts the conference, and the US air force did not respond to requests for comment from the Guardian. In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place. “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Atto Camera Mozione 1-01620 presentato da QUINTARELLI Giuseppe Stefano testo presentato Mercoledì 3 maggio 2017 modificato Mercoledì 6 dicembre 2017, seduta n. 898 La Camera, premesso che: l’articolo 11 della Costituzione ripudia la guerra come strumento di offesa alla libertà degli altri popoli e come mezzo di risoluzione delle controversie internazionali; lo sviluppo tecnologico, in particolare nei settori dell’elettronica e dell’intelligenza artificiale, con capacità di acquisizione di grandi quantità di dati, la loro elaborazione ed analisi in tempo reale, con miglioramento delle performance mediante sistemi di autoapprendimento, consente di realizzare sistemi con facoltà di assumere decisioni autonome; uno dei settori di applicazione di tali tecnologie riguarda il settore degli armamenti, consentendo di realizzare «sistemi di offesa letali autonomi» ovvero robot militari che, senza alcun intervento umano, possono selezionare, ingaggiare, attaccare e colpire obiettivi civili e militari, diversi da sistemi d’arma (associazione fra un’arma e un dispositivo o il personale che ne aumenti le prestazioni); l’esistenza di sistemi di offesa letali autonomi abilita, pertanto, la possibilità di eliminare l’operatore umano dal teatro operativo, ponendo i presupposti di una trasformazione nella struttura delle operazioni militari qualitativamente diversa da precedenti innovazioni tecnologiche in tale ambito, impegna il Governo: 1) a continuare a partecipare attivamente al dibattito internazionale in corso, di concerto con i principali partner dell’Italia, avvalendosi di uno o più accademici italiani, esperti di intelligenza artificiale, riconosciuti a livello internazionale; 2) a proporre ai nostri partner internazionali l’adozione di una moratoria. ITER: DISCUSSIONE IL 06/12/2017 PROPOSTA RIFORMULAZIONE IL 06/12/2017 NON ACCOLTO IL 06/12/2017 PARERE GOVERNO IL 06/12/2017 VOTATO PER PARTI IL 06/12/2017 RESPINTO IL 06/12/2017 On 02/06/23 08:54, Alberto Cammozzo via nexa wrote:
<https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-... <https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test>>
In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.
AI used “highly unexpected strategies to achieve its goal” in the simulated test, said Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.
Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defense systems, and ultimately attacked anyone who interfered with that order.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
No real person was harmed.
Hamilton, who is an experimental fighter test pilot, has warned against relying too much on AI and said the test shows “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.
The US military has embraced AI and recently used artificial intelligence to control an F-16 fighter jet.
In an interview last year with Defense IQ, Hamilton said, “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.”
“We must face a world where AI is already here and transforming our society,” he said. “AI is also very brittle, ie, it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”
The Royal Aeronautical Society, which hosts the conference, and the US air force did not respond to requests for comment from the Guardian. In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
_______________________________________________ nexa mailing list nexa@server-nexa.polito.it https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
Il ven 2 giu 2023, 09:54 Alberto Cammozzo via nexa < nexa@server-nexa.polito.it> ha scritto:
< https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-...
In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.
Pare che la notizia non stia in piedi: https://www.ilpost.it/2023/06/02/intelligenza-artificiale-drone-uccide-opera...
<https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-...> [UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".] On 2 June 2023 11:37:40 CEST, Fabio Alemagna <falemagn@gmail.com> wrote:
Il ven 2 giu 2023, 09:54 Alberto Cammozzo via nexa < nexa@server-nexa.polito.it> ha scritto:
< https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-...
In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.
Pare che la notizia non stia in piedi: https://www.ilpost.it/2023/06/02/intelligenza-artificiale-drone-uccide-opera...
participants (3)
-
Alberto Cammozzo -
Fabio Alemagna -
Stefano Quintarelli