Re: [nexa] Testo di Meredith Whittaker
This is the reason why Europe would need a CERN for AI. Even the national AI infrastructure proposed by Eric Schmidt through the National Security Commission on Artificial Intelligence, with the stated aim of "democratizing AI", leaves some doubts:

In practice, then, these proposals to "democratize" access to AI research infrastructures amount to calls to subsidize tech giants further by licensing familiar infrastructure from these firms in ways that allow them to continue defining the terms and conditions of AI and AI research.

— Beppe
On 30 Nov 2021, at 12:00, nexa-request@server-nexa.polito.it wrote:
Date: Mon, 29 Nov 2021 17:31:00 +0100 (CET)
From: Antonio Casilli <antonio.casilli@telecom-paris.fr>
To: nexa <nexa@server-nexa.polito.it>
Subject: [nexa] Testo di Meredith Whittaker
Message-ID: <764268093.2939040.1638203460934.JavaMail.zimbra@enst.fr>
Content-Type: text/plain; charset=utf-8
I am sharing this article by Meredith Whittaker on the industrial capture of AI research, published in the ACM magazine Interactions.
The steep cost of capture https://interactions.acm.org/archive/view/november-december-2021/the-steep-c...
"Big tech’s control over AI resources made universities and other institutions dependent on these companies, creating a web of conflicted relationships that threaten academic freedom and our ability to understand and regulate these corporate technologies."
(...)
"In addition to punishing dissent and denigrating research they find threatening, tech companies are working to co-opt and neutralize critique. They do this in part by funding and elevating their weakest critics, often institutions and coalitions that focus on so-called AI ethics, and frame issues of tech power and dominance as abstract governance questions that take the tech industry's current form as a given and AI's proliferation as inevitable. In parallel, tech firms also champion technocratic remedies such as "AI bias bounties" and fairness fixits that stage tech-enabled discrimination as a problem of bad code and "buggy" engineering [15]. Such approaches make great PR. They also serve to cast elite engineers as the arbiters of "bias," while structurally excluding scholars and advocates who don't have computer science training, but whose focus on the racialized power asymmetries and political economy of AI are essential for understanding and addressing AI harms."
(...)
"To begin, scholars, advocates, and policymakers who produce and rely on tech-critical work must confront and name the dynamic of tech capture, co-optation, and compromise head-on, and soon. This means incorporating reflexive critiques of the conditions and of knowledge creation, and the compromises and trade-offs faced by knowledge workers over whom interested institutions have power. Given the politics of collegial proximity that inform academic prestige networks while working to blur the lines between academic and industry workers, this is certain to be uncomfortable. But naming these dynamics is the only way to address them and to stage questions that allow us to envision and demand alternative futures."
--
Antonio A. Casilli
Professor, Telecom Paris-Institut Polytechnique de Paris
Member, Interdisciplinary Institute for Innovation (CNRS)
Associate Member, LACI-IIAC (EHESS)
Associate researcher, Weizenbaum-Institut, Berlin
Member, Scholarly council, UCLA Center for Critical Internet Inquiry (C2i2)
Faculty Fellow, Nexa Center for Internet & Society
*We respect your right to disconnect. This email's send time is due to my own workflow. You are under no obligation to take action or reply to it outside your office hours.*
In this article, at p. 8, there are figures that describe this "capture" well: https://arxiv.org/pdf/2106.15590.pdf : 58% of the author affiliations of the most-cited papers at two prestigious ML conferences come from big tech, to which a further 28% from other companies must be added, for a total of 86%, which is indicative of who sets the direction.

Further confirmation comes from the book "Redesigning AI" (various authors, MIT Press, 2021): "A handful of tech giants, all focused on algorithmic automation—Google (Alphabet), Facebook, Amazon, Microsoft, Netflix, Ali Baba, and Baidu—account for the majority of money spent on AI research. (According to a recent McKinsey report, they are responsible for about $20 to $30 billion of the $26 to $39 billion in total private AI investment expenditures worldwide.)"

antonio
Good morning,

apologies if I am perhaps widening the field of discussion a bit too much beyond "the digital" and AI, but it is not useful to fixate only on an /accidental/ phenomenon.

"Vetro' Antonio" <antonio.vetro@polito.it> writes:

> In this article, at p. 8, there are figures that describe this "capture" well: https://arxiv.org/pdf/2106.15590.pdf : 58% of the author affiliations of the most-cited papers at two prestigious ML conferences come from big tech, to which a further 28% from other companies must be added, for a total of 86%, which is indicative of who sets the direction.
The phenomenon of corporate capture (aka lobbying) involves not only law but also, and above all, (the manipulation of) science, as amply documented by the work of https://corporateeurope.org/en

It is not a new phenomenon, but today it is more evident, thanks also to the work of some philosophers of science and of law (meta-scientists and meta-jurists, including those who do not call themselves such) who highlight the state of permanent crisis.

This article, forthcoming in issue 135 (January 2022) of the journal Futures, analyses some of these cases: https://www.sciencedirect.com/science/article/pii/S0016328721001695

«Science, the endless frontier of regulatory capture» (CC-BY 4.0 licence)
by Andrea Saltelli, Dorothy J. Dankel, Monica Di Fiore, Nina Holland, Martin Pigeon

--8<---------------cut here---------------start------------->8---
Highlights

• Five recent cases of regulatory capture in Europe are investigated.
• Important forms of corporate penetration are based on a strategic use of the image and legitimacy of science.
• Lobbyists present themselves as upholders of science and of evidence-based policy to advance their agenda.
• The strategy follows an 'epistemic ladder', from questioning the evidence to questioning its legitimacy, to acting as to create a worldview.

Abstract

In this paper we explore five recent cases of regulatory capture in Europe and zoom in on a form of corporate penetration which is based on a strategic use of the image and legitimacy of science. We examine cases in which lobbyists present themselves as upholders of science and of evidence-based policy, intervene directly in the methodological and ethical aspects of science for policy-making, thus imprinting their own agenda on the societal functions of science. We propose the existence of a process whereby private interests ascend an ideal 'epistemic ladder'. In this vision, lobbying intervention moves from questioning the evidence to questioning its legitimacy, all the way to acting as to create a worldview where not only the evidence, but the very idea of regulation, become irrelevant or undesirable, other than as a vehicle for the pursuit of private interest. Caught in this project, science and its future appear vulnerable.
--8<---------------cut here---------------end--------------->8---

In short, science is legitimated, and therefore also funded, /above all/ when it serves as a vehicle for the pursuit of private interests.

[...]

Best wishes!
380°

--
380° (Giovanni Biscuolo public alter ego)

«We, incompetent as we are, have no title to suggest anything»

Disinformation flourishes because many people care deeply about injustice but very few check the facts. Ask me about <https://stallmansupport.org>.
Hi Giovanni, Antonio, and Nexa,

On December 2, 2021 10:13:40 AM UTC, "380°" wrote:

> In short, science is legitimated, and therefore also funded, /above all/ when it serves as a vehicle for the pursuit of private interests.
"Computer science, however, is not limited to the creation of electro-mechanically reproducible automatisms: through them it studies information, how it can be transferred, preserved, represented, interpreted and transformed, as well as the techniques that apply this knowledge." [1]

It should therefore not surprise us that whoever dominates contemporary computing also dominates science (and philosophy, and politics, and...), because they know how and why to control information.

Since 2013 the FSFE [2] has received between 10 and 20 percent of its budget directly from Google, and probably more than 20% directly or indirectly from the US BigTech [3]:

https://fsfe.org/donate/thankgnus-2013.it.html
https://fsfe.org/donate/thankgnus-2014.it.html
https://fsfe.org/donate/thankgnus-2015.it.html
https://fsfe.org/donate/thankgnus-2016.it.html
https://fsfe.org/donate/thankgnus-2017.it.html
https://fsfe.org/donate/thankgnus-2018.it.html
https://fsfe.org/donate/thankgnus-2019.it.html
https://fsfe.org/donate/thankgnus-2020.it.html
https://fsfe.org/donate/thankgnus-2021.it.html

This long and fruitful collaboration between the Free Software Foundation EUROPE and the US BigTech culminated some time ago in the publication of this call for the resignation of the Board of the FSF (the real one, of course):

https://fsfe.org/news/2021/news-20210324-01.en.html

You will certainly remember the moment: the attack on RMS and the FSF [4] that we discussed at length here as well [5].

Mozilla too (which receives hundreds of millions of dollars from Google every year) was among the first to sign:

https://github.com/rms-open-letter/rms-open-letter.github.io/commit/260b260a...

together with Tor [6] and the X.org Foundation [7]: look for Google among their funders and you will not be disappointed [8].

In short... whoever controls computing does not control only science. They control what each of us sees, reads, says, does and thinks.

And if you think I am exaggerating, ask yourself _why_ you think so. Cui prodest?
Giacomo

[1] http://www.tesio.it/2020/10/02/la_lotta_informatica_per_la_democrazia_cibern...
[2] not to be confused with the real FSF
[3] which essentially control the Linux Foundation, and include IBM/RedHat, Amazon etc...
[4] You will remember https://rms-open-letter.github.io/
[5] https://server-nexa.polito.it/pipermail/nexa/2021-March/020750.html
[6] https://www.torproject.org/about/sponsors/
[7] https://www.freedesktop.org/wiki/#sponsors
[8] as for "Open Source Diversity" https://opensourcediversity.org/ I cannot find anything about how they are funded, but if you are interested you can follow this issue: https://github.com/opensourcediversity/opensourcediversity.org/issues/135
participants (4)

- 380°
- Giacomo Tesio
- Giuseppe Attardi
- Vetro' Antonio