With Deep Learning it happens more and more often that we cannot understand how the AI arrives at certain results. Even in the field of translation it is "amusing" to see how an engine trained to learn one
specific language somehow manages to learn others as well, even ones very different from the target.
There is a good article in the latest MIT Technology Review describing efforts to give AI engines "doubts", and thus a measure of uncertainty.
From: nexa <nexa-bounces@server-nexa.polito.it>
On Behalf Of J.C. DE MARTIN
Sent: Thursday, March 8, 2018 11:21 AM
To: Nexa <nexa@server-nexa.polito.it>
Subject: [nexa] NYT: "Google Researchers Are Learning How Machines Learn"
Google Researchers Are Learning How Machines Learn
By CADE METZ
MARCH 6, 2018
SAN FRANCISCO — Machines are starting to learn tasks on their own. They are identifying faces, recognizing spoken words, reading medical scans and even carrying on their own conversations.
All this is done through so-called neural networks, which are complex computer algorithms that learn tasks by analyzing vast amounts of data. But these neural networks create a problem that scientists are trying to solve: It is not always easy to tell how the machines arrive at their conclusions.
On Tuesday, a team at Google took a small step toward addressing this issue with the unveiling of new research that offers the rough outlines of technology that shows how the machines are arriving at their decisions.
[…]
Continues here:
https://www.nytimes.com/2018/03/06/technology/google-artificial-intelligence.html