Don’t worry about AI going bad – the minds behind it are the danger | John Naughton | Opinion | The Guardian
<https://www.theguardian.com/commentisfree/2018/feb/25/artificial-intelligenc...>

If one were a conspiracy theorist, one might ask if our obsession with a highly speculative future has been deliberately orchestrated to divert attention from the fact – pace Mr Gibson – that lower-level but exceedingly powerful AI is already here and playing an ever-expanding role in shaping our economies, societies and politics. This technology is a combination of machine learning and big data and it’s everywhere, controlled and deployed by a handful of powerful corporations, with occasional walk-on parts assigned to national security agencies.

These corporations regard this version of “weak” AI as the biggest thing since sliced bread. The CEO of Google burbles about “AI everywhere” in his company’s offerings. Same goes for the other digital giants. In the face of this hype onslaught, it takes a certain amount of courage to stand up and ask awkward questions. If this stuff is so powerful, then surely we ought to be looking at how it is being used, asking whether it’s legal, ethical and good for society – and thinking about what will happen when it gets into the hands of people who are even worse than the folks who run the big tech corporations. Because it will.

Fortunately there are scholars who have started to ask these awkward questions. There are, for example, the researchers who work at AI Now, a research institute at New York University focused on the social implications of AI. Their 2017 report makes interesting reading. Last week saw the publication of more in the same vein – a new critique of the technology by 26 experts from six major universities, plus a number of independent thinktanks and NGOs.

Its title – The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation – says it all. The report fills a serious gap in our thinking about this stuff. We’ve heard the hype, corporate and governmental, about the wonderful things AI can supposedly do and we’ve begun to pay attention to the unintentional downsides of legitimate applications of the technology.
Wow! This will be exactly the topic of my talk at the Università Milano Bicocca conference “MYTHS AND REALITY OF ARTIFICIAL INTELLIGENCE: Theoretical issues and practical developments” on 2 March.

The confusion or incompetence of the authorities is truly dangerous. For example, some time ago I read of a scenario being put forward in which car insurers would no longer cover motorists who drove their own car personally, on the grounds that artificial intelligence would drive better. It is rather unlikely that this will happen.

What must instead be clear is that car manufacturers will be liable, under civil and criminal law, for deaths that occur during self-driving. The car manufacturers, that is, their owners (and perhaps their shareholders too), NOT the statistical simulation software at the wheel.

Giacomo
On 2018-02-26 01:10, Giacomo Tesio wrote:
Wow! This will be exactly the topic of my talk at the Università Milano Bicocca conference “MYTHS AND REALITY OF ARTIFICIAL INTELLIGENCE: Theoretical issues and practical developments” on 2 March.

The confusion or incompetence of the authorities is truly dangerous. For example, some time ago I read of a scenario being put forward in which car insurers would no longer cover motorists who drove their own car personally, on the grounds that artificial intelligence would drive better.
At first sight, I didn't see what this has to do with the topic of Naughton's article. Then, rereading it, I noticed that he too talks about autonomous driving. And it seems to me that the article does not make sufficiently clear how different what he calls "political security" is from the other two domains. I will have to check whether the full report, which the article reviews, does.

Problems of "political security" arise first of all from WHO controls the AI being deployed. That holds both in politics and in the workplace. If Amazon's famous wristbands were not connected to the central computer, they would not be a problem. Zeynep Tufekci summed this up very well by saying "don't worry about AI. Worry about what **power** can do with AI".

Problems of digital and physical security, on the other hand, very often stem from much baser causes, from stupidity to needs manufactured in order to sell useless stuff. Autonomous cars, for example, do not have to be connected to the Internet at all. Their driving system certainly does not need to be. Remembering that is enough to make the attack described in the article vanish. It is the same as with smart fridges: it is the very idea that a fridge or any other appliance **must** be connected to the Internet that is idiotic, and it would remain idiotic even if all the governments and companies in the world were saints come down to earth.

Marco
participants (3)
- Alberto Cammozzo
- Giacomo Tesio
- M. Fioretti