Preamble: I'm breaking the thread because it seems the right thing to do :-)

Giovanni Leghissa <giovanni.leghissa@unito.it> writes:
> an interesting reference on the war/AI nexus
> https://warontherocks.com/2020/01/ai-cyberspace-and-nuclear-weapons/
Thank you so much, it's a very valuable reference!

Let me also point out an interesting article from the same series, "AI and National Security" [1], which gets to the heart of the matter, namely: "Human context and agency are integral to decision-making"

"59 percent likely hostile"
https://warontherocks.com/2020/01/59-percent-likely-hostile/

Let me also offer an executive summary (so you can decide whether to tackle the full read; the article is long):

--8<---------------cut here---------------start------------->8---
Military operators should understand — and be trained to engage with — probabilistic outcomes when working with AI-enabled technology. Without an appreciation of how to apply “common sense,” operators will lose confidence in these technologies when the model inevitably appears to get things wrong. Understanding the following three concepts will empower users to best utilize AI applications: probability, cognitive bias, and deconstructing the decisions made by software. [...]

While the debate rages on over what AI applications should and could be developed, there is a desperate need to take a step back and have a conversation about probability: what it is, how it is understood, and how statistical models can be integrated into decision making. [...]

The 2016 presidential election was a perfect example of how probabilities can be misread. [...]

The key difference between these scenarios is the context and the bias that each of us brings to the table. This is something extremely important to understand with applications of statistical models for warfighters. [...]

If a software program suggests a course of action that the operator doesn’t understand or intuitively agree with, the operator will typically either surrender his agency to the computer or dismiss it out of hand. [...]

the Department of Defense must provide technical experts to help bridge the gap between military users and developers of AI technology. [...]
This will ensure that even if applications are not developed in-house, end users will have the ability to fully understand and best utilize them. [...]

Pulling back the curtain on how AI works and on how we understand it will demystify AI, and empower all military operators to use the myriad AI systems proliferating today. [...]

As the Department of Defense looks to embrace AI-powered technologies, it should focus not just on creating the best technology possible, but on making sure its people are equipped to utilize the technology. [...]

Military AI applications, with their inherent risks for moral hazard, will and must retain a human in the loop. Human context and agency are integral to tactical decision-making, and AI/operator teams must work in concert to be most effective. Warfighter buy-in is of the utmost importance, and will only be achieved by a better grounding in probability and cognitive biases.
--8<---------------cut here---------------end--------------->8---

I find it a really fine piece [...]

Regards, Giovanni

[1] https://warontherocks.com/category/special-series/ai-and-national-security/

--
Giovanni Biscuolo

Xelera IT Infrastructures