The philosophical approach is sound, and Descartes' test is in fact still valid today, but perhaps a look at neuroscience helps to understand the point made below about Watson. Today the most sophisticated AI models reach about 16 million neurons, whereas each of us has around 16 billion in the cerebral cortex alone, with another 70-80 billion scattered elsewhere in the skull, so there is no comparison. A frog has 16 million neurons, like the best AI, which is therefore rather dim compared to us.
Watson has no trouble recognizing fractures, moles, or other data where the anomaly can be derived as a deviation from a norm, because with dataset classification you more or less get by. But that same Watson risks prescribing thoracic surgery for something you would fix with a Maalox, because it cannot remotely "coordinate" the many variables that must be weighed during a diagnosis, which, beyond classification, is also a recursive process involving strong interaction with the patient (not by chance, being a veterinarian or a pediatrician is considerably harder). On the diagnostic process:
https://www.ncbi.nlm.nih.gov/books/NBK338593/
From: nexa <nexa-bounces@server-nexa.polito.it> On Behalf Of
Federico Guerrini
Sent: Tuesday, June 30, 2020 2:27 AM
To: Center Nexa <nexa@server-nexa.polito.it>
Subject: [nexa] Why general artificial intelligence will not be realized
Hi,
let me point you to this interesting paper:
which takes a philosophical approach to the question and tries
to refute the idea, popularized among others by Harari in Homo Deus,
that free will is merely a matter of biochemical stimuli.
Computers are not in our world. I have earlier said that neural networks need not be programmed, and therefore can handle tacit knowledge. However, it is simply not true, as some of the advocates of Big Data argue, that the data "speak for themselves". Normally, the data used are related to one or more models, they are selected by humans, and in the end they consist of numbers. If we think, for example like Harari, that the world is "at the bottom" governed by algorithms, then we will have a tendency to overestimate the power of AI and underestimate human accomplishments. The expression "nothing but" that appears in the quotation from Harari may lead to a serious oversimplification in the description of human and social phenomena. I think this is at least a part of the explanation of the failure of both IBM Watson Health and Alphabet's DeepMind. "IBM has encountered a fundamental mismatch between the way machines learn and the way doctors work" (Strickland, 2019) and DeepMind has discovered that "what works for Go may not work for the challenging problems that DeepMind aspires to solve with AI, like cancer and clean energy" (Marcus, 2019).

The overestimation of the power of AI may also have detrimental effects on science. In their frequently quoted book The Second Machine Age, Erik Brynjolfsson and Andrew McAfee argue that digitization can help us to understand the past. They refer to a project that analyzed more than five million books published in English since 1800. Some of the results from the project were that "the number of words in English has increased by more than 70% between 1950 and 2000, that fame now comes to people more quickly than in the past but also fades faster, and that in the 20th century interest in evolution was declining until Watson and Crick discovered the structure of DNA." This allegedly leads to "better understanding and prediction—in other words, of better science—via digitization" (Brynjolfsson and McAfee, 2014, p. 69). In my opinion it is rather an illustration of Karl Popper's insight: "Too many dollars may chase too few ideas" (Popper, 1981, p. 96).

My conclusion is very simple: Hubert Dreyfus' arguments against general AI are still valid.
--