Artificial Intelligence, Authentic Humanity, and the Future of War
Buongiorno, un interessante punto di vista da una associazione che promuove il «contemporary british military thought». --8<---------------cut here---------------start------------->8--- [...] If the wars of the 21st Century have imparted a singular lesson it is this: having our troops mired in ambiguous situations, unable to see into the next compound, incapable of loitering without exposing themselves to mortal risk is increasingly unacceptable to the generals, politicians, and citizens who—ultimately—put them there. Enter technology: uninhabited, quasi-autonomous weapons systems designed to undertake these kinds of prolonged and dangerous measures are perfect solutions for the kinds of stability operations we repeatedly and incessantly find ourselves in. Algorithms already create search patterns, power facial and signature recognition systems, and remain at their posts indefinitely. [...] This obsession with technology is an extension of two impulses that have driven military development: to overcome human weaknesses and to outsource as much of the brutal aspects of war to others [...] We constantly strive for “the externalization of the burden of warfare”, whereby we seek out “technological and human surrogates [that] enable the state to manage the risks of post-modern conflict remotely.”6 We are aware now that those risks are not just corporeal; contemporary warfare not only ends lives and removes limbs. Combat has the power to wound mentally those who take part in it, causing critical stress that can lead to secondary damage in the form of haunting nightmares, chemical dependency, anti-social behaviour, and, all too often, suicide. 
Even drone pilots, often thousands of kilometers away from their targets, are not wholly liberated from this trauma.7 In that way, AI represents nothing more than the latest attempt to put distance—literal and figurative—between humans and the injury, both physical and moral, caused by war.8 However, AI does promise something revolutionary in this regard: it would not just displace humans from the battlefield, it could replace them altogether.

[...] We feel more comfortable with a human ‘in the loop’ making the final decision about when or if to pull the trigger. AI might be able to do the sums rapidly, but do we trust it with matters of life or death? The kind of future we want “should be a society where people feel empowered, not threatened by AI.”10 That just makes sense…right?

[...] It is clear that the slowest portion of the AI/human loop will forever be the human. There will be little sense in building faster AI that can perform 99% of the find, fix, fire cycle in a fraction of a second, only to need to wait for a ponderous human to give the thumbs up. This has already led to a retreat from a posture of positive control (human in the loop) to one of monitoring (human on the loop), whereby a human keeps watch on what is going on, ready to intervene at the first sign of trouble. The reality is, though, that this position is untenable and illusory. Human monitors will be overwhelmed by the speed, scale, and scope of the systems they are charged with; indeed, they may wish to create AI systems solely for aiding in the monitoring process, which of course would lead to a cycle of infinite recursion. Even if rapidity and size could be tamed, our human monitors would still be faced with a deeper problem: “No one really knows how the most advanced algorithms do what they do.”13 We have created tools that we do not fully understand, and may not be able to control. [...]
War, as we know it, cannot be strictly viewed as logical: there are too many ambiguities—what rational actor theorists, like economists, might call ‘externalities’. If war is disembedded from its surrounding social and political context—its human context15—what is to prevent it from reaching the end point of absolute war? War’s nature, as we know it now, is founded on human particularities and bounded by our failings. If those limits were to be removed through the application of Artificial Intelligence, then the nature of war—long held to be unchanging—would melt away, transformed into something non-human, potentially inhuman, probably inhumane. When waging war, authentic humanity is our safety net; alarmingly, artificial intelligence has already begun to pick at its all too gossamer threads.

[...] What must be done, though, is to match the efforts at weaponizing artificial intelligence with a commitment to maintaining our authentic humanity. We need doubt, pity, and hesitation if we are to survive. [...]

--8<---------------cut here---------------end--------------->8---

Continues at https://web.archive.org/save/https://wavellroom.com/2020/06/30/melancholic-a...

IMHO the last part (What is to be done?) loses some of its analytical lucidity, perhaps because the author does not have sufficient knowledge of the principles of AI, but that takes nothing away from the rest.

Regards, Giovanni

--
Giovanni Biscuolo