I received this from Marco Pagani of the Radiotelevisione Svizzera, and I am very glad to pass along the news: tomorrow at 17.10, the RSI Rete Due programme “Diderot” will cover the Gebru case, with the participation of the anthropologist S. Broadbent.

The episode will probably be published on Monday, since it airs tomorrow evening.
The general link to the programme (where the episode in question will also appear) is this:
https://www.rsi.ch/play/radio/programma/diderot---le-voci-dellattualita-?id=11502374
 
Here is the abstract of the programme:
 

Artificial Discrimination

Timnit Gebru, a researcher on Google's AI ethics team, has been fired. Although Google denies it, many consider her dismissal a consequence of her work to bring to light the discrimination and biases that can be found in the workings of artificial intelligence systems.

We discuss this with the social scientist and anthropologist Stefana Broadbent, a lecturer in digital anthropology who previously worked at NESTA, the UK innovation agency, and directed Swisscom's Observatory of Digital Uses.

Regards

Diego

On 10 Dec 2020, at 12:00, nexa-request@server-nexa.polito.it wrote:

------------------------------

Message: 2
Date: Thu, 10 Dec 2020 00:28:51 +0100
From: Giacomo Tesio <giacomo@tesio.it>
To: nexa@server-nexa.polito.it
Subject: [nexa] Google's Ethics Effort Is Looking Rather Evil - By
Cathy O'Neil
Message-ID: <20201210002851.0000566c@tesio.it>
Content-Type: text/plain; charset=UTF-8

Google used to have a simple motto: Don't be evil. Now, with the firing
of a data scientist whose job was to identify and mitigate the harm
that the company's technology could do, it has yet again demonstrated
how far it has strayed from that laudable goal.

Timnit Gebru was one of Google's brightest stars, part of a group hired
to work on accountability and ethics in artificial intelligence, that
is, to develop fairness guidelines for the algorithms that increasingly
decide everything from who gets credit to who goes to prison. She was a
lead researcher on the Gender Shades project, which demonstrated the
flaws of facial recognition software developed by IBM, Amazon,
Microsoft and others, particularly in identifying dark female faces. As
far as I can ascertain, she was fired for doing her job: specifically,
for critically assessing models that allow computers to converse with
people, an area in which Google is active.

Full disclosure: It's hard for me to untangle my opinion on this from
my personal and professional loyalties. I'm not acquainted with Gebru,
but we have quite a few friends in common and I've admired her work for
some time. I signed a letter supporting her. I also run an independent
company that specializes in auditing algorithms for bias, so I have an
interest in getting big tech firms to use my services rather than do
their vetting in-house.

All that said, I genuinely believe that Gebru's story illustrates a
broader issue: You can't trust companies to check their own work,
particularly where the result might conflict with their financial
interests. My favorite example is Theranos, which insisted that its
research into a novel blood test was so amazing and valuable that it
couldn't be shown to outsiders, until it proved to be a dangerous
fraud. The warning applies no less to tech companies such as Google,
IBM, Microsoft and Facebook, which have created internal ethics groups
and external tools in an effort to display responsibility and keep
their algorithms unregulated.

I'll admit that to some extent, I envy the people who work on the
accountability teams. They have fascinating jobs, with access to tons
of data that they'd never be able to play with in academia. At the same
time, though, they have little or no influence to push their employers
to actually implement the fairness frameworks that they so carefully
develop. Their scientific papers are often heavily edited or even
censored, as I learned when I once tried to co-author one (I quit the
project).

I often wondered about Gebru and others working at Google: How could
they stand the bureaucracy, or express their very real concerns in that
environment? As it turns out, they couldn't.

Gebru, along with co-authors from academia as well as Google, was
trying to get the company's approval to submit a paper on some
unintended consequences of large language models. One problem is that
their energy consumption and carbon footprint have been rapidly
expanding along with their use of computing power. Another is that,
after ingesting a large chunk of the entire history of all written
text, they're troublingly likely to use nasty, racist or otherwise
inappropriate language.

The findings, while perfectly good and interesting, were not
particularly new. Which makes it all the more bizarre that someone
higher up at Google decided, with no explanation, that Gebru had to
back out of publishing the paper. When she demanded to know what the
actual complaints were so she could address them, she was fired (with
her boss informed only after the fact).

Aside from turning the paper viral, the incident offered a shocking
indication of how little Google can tolerate even mild pushback, and
how easily it can shed all pretense of scientific independence. The
fact that Gebru was one of the company's only Black female researchers
makes it a particularly egregious example of punching down in the same
old tired way.

Embarrassing as this episode should be for Google (the company's CEO
has apologized), I'm hoping policy makers grasp the larger lesson. The
artificial intelligence that plays a growing role in our lives requires
outside scrutiny, from people who have the proper incentives to be
independent and the power to compel meaningful reform. Otherwise,
algorithms will be doomed to repeat and amplify the flaws of the humans
who made them.

Taken from
https://www.bloomberg.com/opinion/articles/2020-12-09/google-s-firing-of-timnit-gebru-shows-it-can-t-stop-being-evil

Here is the Google CEO's "apology":
https://www.axios.com/sundar-pichai-memo-timnit-gebru-exit-18b0efb0-5bc3-41e6-ac28-2956732ed78b.html?utm_source=twitter&utm_medium=social&utm_campaign=organic&utm_content=1100


Giacomo




End of nexa Digest, Vol 140, Issue 25
*************************************

Dott. Diego Latella - Senior Researcher CNR-ISTI, Via Moruzzi 1, 56124 Pisa, Italy  (http://www.isti.cnr.it)
FM&&T Lab. (http://fmt.isti.cnr.it)
http://www1.isti.cnr.it/~latella/ - ph: +390506212982, mob: +39 348 8283101, fax: +390506212040
===================
I don't quite know whether it is especially computer science or its subdiscipline Artificial Intelligence that has such an enormous affection for euphemism. We speak so spectacularly and so readily of computer systems that understand, that see, decide, make judgments, and so on, without ourselves recognizing our own superficiality and immeasurable naivete with respect to these concepts. And, in the process of so speaking, we anesthetise our ability to evaluate the quality of our work and, what is more important, to identify and become conscious of its end use.  […] One can't escape this state without asking, again and again: "What do I actually do? What is the final application and use of the products of my work?" and ultimately, "am I content or ashamed to have contributed to this use?"
-- Prof. Joseph Weizenbaum ["Not without us", ACM SIGCAS 16(2-3) 2–7, Aug. 1986]