Thanks, Daniela, for this reference.
The quotation recalled by Ezra Klein in his interview with Gary Marcus — that something can be fake and yet, apart from authenticity itself, not inferior to the real thing — reminded me of Daniel Dennett's article arguing that generating counterfeits through AI is an immoral act, like producing counterfeit money. https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/
Ciao, Enrico

------ Original message ------
From "Daniela Tafani" <daniela.tafani@unipi.it>
To "J.C. DE MARTIN" <juancarlos.demartin@polito.it>; "Nexa" <nexa@server-nexa.polito.it>
Date 19/06/2024 13:15:55
Subject Re: [nexa] "ChatGPT is bullshit"

Many others — none of whom, it seems to me, is cited in the article — have written about ChatGPT as a "bullshit generator", in the technical sense, with reference to Frankfurt.
 
 
Richard Stallman also uses the expression often:
 
As far as I know (though I have not looked into it), the first was Ezra Klein, in 2023, interviewing Gary Marcus:
 
ChatGPT and systems like it, what they’re going to do right now is they’re going to drive the cost of producing text and images and code and, soon enough, video and audio to basically zero. It’s all going to look and sound and read very, very convincing. That is what these systems are learning how to do. They are learning how to be convincing. They are learning how to sound and seem human.
 
But they have no actual idea what they are saying or doing. It is bullshit. And I don’t mean bullshit as slang. I mean it in the classic philosophical definition by Harry Frankfurt. It is content that has no real relationship to the truth.
 
[...]
 
EZRA KLEIN: Let’s sit on that word truthful for a minute because it gets to, I think, my motivation in the conversation. I’ve been interested — I’m not an A.I. professional the way you are, but I’ve been interested for a long time. I’ve had Sam on the show, had Brian Christian on the show. And I was surprised by my mix of sort of wonder and revulsion when I started using ChatGPT because it is a very, very cool program. And in many ways, I find that its answers are much better than Google for a lot of what I would ask it.
 
But I know enough about how it works to know that, as you were saying, truthfulness is not one of the dimensions of it. It’s synthesizing. It’s sort of copying. It’s pastiching. And I was trying to understand why I was so unnerved by it. And it got me thinking, have you ever read this great philosophy paper by Harry Frankfurt called “On Bullshit”?
 
GARY MARCUS: I know the paper.
 
EZRA KLEIN: So this is a — welcome to the podcast, everybody — this is a philosophy paper about what is bullshit. And he writes, quote, “The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine may not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.”
 
And his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction. He has this great line where he says that people telling the truth and people telling lies are playing the same game but on different teams. But bullshit just has no relationship, really, to the truth.
 
And what unnerved me a bit about ChatGPT was the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all. And I’m curious how you see that concern.
 
GARY MARCUS: It’s exactly right. These systems have no conception of truth. Sometimes they land on it and sometimes they don’t, but they’re all fundamentally bullshitting in the sense that they’re just saying stuff that other people have said and trying to maximize the probability of that. It’s just auto complete, and auto complete just gives you bullshit.
 
And it is a very serious problem. I just wrote an essay called something like “The Jurassic Park Moment for A.I.” And that Jurassic Park moment is exactly that. It’s when the price of bullshit reaches zero and people who want to spread misinformation, either politically or maybe just to make a buck, start doing that so prolifically that we can’t tell the difference anymore in what we see between truth and bullshit.
 
________________________________________
From: nexa <nexa-bounces@server-nexa.polito.it> on behalf of J.C. DE MARTIN <juancarlos.demartin@polito.it>
Sent: Wednesday, 19 June 2024 10:51
To: Nexa
Subject: [nexa] "ChatGPT is bullshit"
 
ChatGPT is bullshit
 
Michael Townsen Hicks · James Humphries · Joe Slater
 
Abstract
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
 
 
Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5
 
* Published 08 June 2024
 
 

-- EN

https://www.hoepli.it/libro/la-rivoluzione-informatica/9788896069516.html
======================================================
Prof. Enrico Nardelli
Past President of "Informatics Europe"
Director of the National Laboratory "Informatica e Scuola" of CINI
Department of Mathematics - Università di Roma "Tor Vergata"
Via della Ricerca Scientifica snc - 00133 Roma
home page: https://www.mat.uniroma2.it/~nardelli
blog: https://link-and-think.blogspot.it/
tel: +39 06 7259.4204 fax: +39 06 7259.4699
mobile: +39 335 590.2331 e-mail: nardelli@mat.uniroma2.it
online meeting: https://blue.meet.garr.it/b/enr-y7f-t0q-ont
======================================================