Send nexa mailing list submissions to
nexa@server-nexa.polito.it
To subscribe or unsubscribe via the World Wide Web, visit
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
or, via email, send a message with subject or body 'help' to
nexa-request@server-nexa.polito.it
You can reach the person managing the list at
nexa-owner@server-nexa.polito.it
When replying, please edit your Subject line so it is more specific
than "Re: Contents of nexa digest..."
Today's Topics:
1. Generative AI is not replacing jobs or hurting wages at all,
economists claim (Daniela Tafani)
2. Re: [Junk released by Allowed List] Qual è il senso...?
(maurizio lana)
----------------------------------------------------------------------
Message: 1
Date: Mon, 19 May 2025 07:37:37 +0000
From: Daniela Tafani <daniela.tafani@unipi.it>
To: "nexa@server-nexa.polito.it" <nexa@server-nexa.polito.it>
Subject: [nexa] Generative AI is not replacing jobs or hurting wages
at all, economists claim
Message-ID: <d722e75b06054ff8a300d0ca77220741@unipi.it>
Content-Type: text/plain; charset="Windows-1252"
Generative AI is not replacing jobs or hurting wages at all, economists claim
'When we look at the outcomes, it really has not moved the needle'
Thomas Claburn
Tue 29 Apr 2025
Instead of depressing wages or taking jobs, generative AI chatbots like ChatGPT, Claude, and Gemini have had almost no significant wage or labor impact so far – a finding that calls into question the huge capital expenditures required to create and run AI models.
In a working paper released earlier this month,
[Large Language Models, Small Labor Market Effects, <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933>]
economists Anders Humlum and Emilie Vestergaard looked at the labor market impact of AI chatbots on 11 occupations, covering 25,000 workers and 7,000 workplaces in Denmark in 2023 and 2024.
Many of these occupations have been described as being vulnerable to AI: accountants, customer support specialists, financial advisors, HR professionals, IT support specialists, journalists, legal professionals, marketing professionals, office clerks, software developers, and teachers.
Yet after Humlum, assistant professor of economics at the Booth School of Business, University of Chicago, and Vestergaard, a PhD student at the University of Copenhagen, analyzed the data, they found the labor and wage impact of chatbots to be minimal.
The report should concern the tech industry, which has hyped AI's economic potential while plowing billions into infrastructure meant to support it. Early this year, OpenAI admitted that it loses money per query even on its most expensive enterprise SKU, while companies like Microsoft and Amazon are starting to pull back on their AI infrastructure spending in light of low business adoption past a few pilots.
The problem isn't that workers are avoiding generative AI chatbots - quite the contrary. It's that their use simply isn't yet translating into actual economic benefits.
The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes.
While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction.
The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves."
In other words, AI is creating new work that cancels out some potential time savings from using AI in the first place.
"One very stark example that's close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum.
He also observed that a lot of workers now say they're spending time reviewing the quality of AI output or writing prompts.
Humlum argues that this can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools have historically tended to generate more demand for workers in other tasks.
"These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.
But overall, the time savings from using AI were less than expected. According to the study, "users report average time savings of just 2.8 percent of work hours" from using AI tools. That's a bit more than one hour per 40-hour work week.
The authors note that this finding differs from other randomized controlled trials that have found productivity benefits on the order of 15 percent. And they explain this discrepancy by saying that other studies have focused on occupations with high AI productivity potential and that real-world workers don't operate under the same conditions.
"So I think there are two key reasons why the real economic gains are lower than [the cited studies]," said Humlum, noting that his study relies on actual tax data.
"First, most tasks do not fall into that category where ChatGPT can just automate everything. And then second, we're in this middle phase where employers are still waking up to the new reality, and we're trying to figure out how to best really realize the potential in these tools. And just at this stage, it's just not been that much of a game changer."
Where there are productivity gains to be had, Humlum and Vestergaard estimate that only a small portion of that benefit – between 3 and 7 percent – gets passed through to workers in the form of higher earnings.
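The study's headline figures quoted above (2.8 percent time savings, 3 to 7 percent pass-through to earnings) can be sanity-checked with a few lines of arithmetic. The 40-hour week is an assumption taken from the article's own illustration, not from the paper itself:

```python
# Quick arithmetic check of the figures reported above. The 40-hour
# week and the percentages come from the article; this is only an
# illustration, not the study's methodology.

WEEKLY_HOURS = 40.0

def hours_saved(share_saved, weekly_hours=WEEKLY_HOURS):
    """Weekly hours saved given a fractional time saving."""
    return share_saved * weekly_hours

def worker_share(gain_hours, passthrough):
    """Portion of a gain that reaches workers as higher earnings."""
    return gain_hours * passthrough

saved = hours_saved(0.028)  # "just 2.8 percent of work hours"
print(f"time saved: {saved:.2f} hours/week")  # 1.12 hours/week

# "between 3 and 7 percent" of the gains passed through to workers
low, high = worker_share(saved, 0.03), worker_share(saved, 0.07)
print(f"reaching workers: {low:.3f}-{high:.3f} hours/week equivalent")
```

The gap between the 1.12 hours saved and the few minutes' worth that shows up in earnings is what the authors mean by low pass-through.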
Humlum said while there are gains and time savings to be had, "there's definitely a question of who they really accrue to. And some of it could be the firms – we cannot directly look at firm profitability. Some of it could also just be that you save some time on existing tasks, but you're not really able to expand your output and therefore earn more.
"So it's like it saves you time writing emails. But if you cannot really take on more work or do something else that is really valuable, then that will put a damper on how much we should actually expect those time savings to affect your earning ability, your total hours, your wages."
Humlum said the impact of using AI chatbots, in the form of productivity, time savings, and work quality, can be improved through company commitment to internal education and evangelism. He pointed in particular to how firm initiatives can reduce the tool-usage gender gap – fewer women use these tools than men.
But doing so at this point doesn't show much promise of payoff.
"In terms of economic outcomes, when we're looking at hard metrics – in the administrative labor market data on earnings, wages – these tools have really not made a difference so far," said Humlum. "So I think that that puts in some sense an upper bound on what return we should expect from these tools, at least in the short run.
"My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
<https://www.theregister.com/2025/04/29/generative_ai_no_effect_jobs_wages/>
------------------------------
Message: 2
Date: Mon, 19 May 2025 10:08:47 +0200
From: maurizio lana <maurizio.lana@uniupo.it>
To: nexa@server-nexa.polito.it
Subject: Re: [nexa] [Junk released by Allowed List] Qual è il
senso...?
Message-ID: <0f1fdf21-1d21-4858-8530-46af7e70050f@uniupo.it>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
just today on the cultureIA blog (https://cultureia.hypotheses.org/) I
saw the announcement of this seminar, held on 28 January of this year:
/Artificial and Post-Artificial
Writing/ (https://cultureia.hypotheses.org/3159)
available on YouTube (https://www.youtube.com/watch?v=wQKBxgiRIdA)
in which the speaker opens with these thoughts:
"How do we read a text when we can no longer be sure that it was not
written by an AI?
With AI permeating writing tools and producing vast amounts of text, it
may then become impossible to distinguish artificial from natural text,
but also impossible to bear this constant uncertainty.
I speculate that we may thus eventually reach a new post-artificial
situation in which the standard expectation of text is replaced by an
agnostic stance in regard to its origins.
literary text which emphasizes a stronger notion of authorship may
resist this transition longer through deliberate linguistic
experimentation or by emphasizing a human maker.
By extrapolating from the current technological situation, the talk
attempts to think through some of the possible consequences of large
language models and related technologies for interpretive situations and
hermeneutic strategies at a time when we can no longer assume that any
reasonable, complex, and coherent text was written by a human."
the attached transcription was produced with (mac)whisper.
Maurizio
On 18/05/25 22:40, maurizio lana wrote:
> hi Daniela,
> the question at stake - distinguishing text produced by a chatbot from
> text produced by a human - has a second one in the background:
> identifying the characteristics that distinguish the writing of one
> human from that of another (usually called "style"). the second one is
> at the heart of authorship attribution studies with quantitative
> methods, which have a long history (the fullest and soundest account
> is Grieve, Jack William. «Quantitative authorship attribution: A
> history and an evaluation of techniques». Thesis, Department of
> Linguistics - Simon Fraser University, 2005.
> http://summit.sfu.ca/item/8840) and some significant successes.
> anyone who has worked on authorship attribution studies is at first
> inclined to think that if it is possible to identify the
> characteristics that distinguish the writing of one human from that of
> another, it must also be possible to identify those that distinguish
> the writing of a human from that of a chatbot.
>
> but in fact your comment shows an impossibility in principle, because
> in a chatbot there is no line of thought, only a mosaic of pieces of
> others. and so the feeble attempts of the software end up identifying
> entirely external/extrinsic features, which a hypothetical textual
> forger can perfectly well and easily modify or conceal.
>
> besides, already seeing large quantities of non-texts in circulation,
> one naturally wonders whether there is a way to sort them out and tell
> them apart without having to spend time reading them - since, frankly,
> one would like to be able to spend one's time better
>
> and so it seems to me that, at least as an understandable even if
> unrealizable desire, the question remains alive:
> on the one hand we must distinguish, on the other we would like to do
> something better than reading non-texts just to be able to say that
> that is what they are.
> Maurizio
>
> On 18/05/25 00:24, Daniela Tafani wrote:
>> the sense seems to me to be that of any swindling of the incapable.
>> Scholars of every sort collaborate in this, paid and unpaid.
>>
>> For now, the FTC is pushing back:
>> <https://www.ftc.gov/news-events/news/press-releases/2025/04/ftc-order-requires-workado-back-artificial-intelligence-detection-claims>
>>
>> To automatically distinguish the output of an extruder of probable text strings from a text written by a human being, the two texts would have to differ in something that software could trace.
>>
>> The main difference between the two texts is that one was thought out by the person who wrote it and the other was not, being a blurred image of other texts (which were, indeed, thought out by someone). I therefore suppose that thought is needed to recognize a text extruded by software that produces language without thinking, as opposed to a text produced by a thinking being. No thinking software exists today, and therefore "AI detectors" do not exist.
>>
>> What do you think?
>>
>> To have an automatically traceable difference, you would have to mark the synthetic text in some way, but, I imagine, it could still be copied as plain text anyway.
>>
>> Thanks.
>> Best regards,
>> Daniela
>> ________________________________________
>> From: nexa<nexa-bounces@server-nexa.polito.it> on behalf of Enrico Nardelli<nardelli@mat.uniroma2.it>
>> Sent: Saturday, 17 May 2025 19:53
>> To: nexa@server-nexa.polito.it
>> Subject: [Junk released by Allowed List] [nexa] Qual è il senso...?
>>
>> ... of building tools to verify whether a text was generated by Artificial Intelligence, given that (besides failing spectacularly on a text of mine, written entirely by me, which was attributed 71% to AI - though the last time I checked I still found myself made of the usual organic matter... 😂) the same company then offers a tool to "humanize" text?
>>
>> I tried GPTZero, just to name names, but I suppose other similar startups offer similar products.
>>
>> It smells to me like a giant three-card monte...
>
------------------------------------------------------------------------
the embarrassing sense of guilt for our own life
which goes on with its moments of joy and leisure
while the lives of others have none
S. Vizio
------------------------------------------------------------------------
Maurizio Lana
Università del Piemonte Orientale
Dipartimento di Studi Umanistici
Piazza Roma 36 - 13100 Vercelli
-------------- next part --------------
Hello everyone, hello, hello, I'm very happy to meet you virtually.
So this seminar is titled "Artificial and Post-Artificial Text: Machine Learning and the Reader's Expectations of Literary and Non-Literary Writing."
And today's presenter is Hannes Bajohr, an assistant professor of German at the University of California, Berkeley.
His research focuses on the history of German philosophy in the 20th century, political theory, and theories of the digital and AI.
Bajohr's academic texts have appeared in Configurations, Poetics Today, and New German Critique, among others.
His most recent work is Digital Literature, published with Simon Roloff in Hamburg in 2024.
I apologize for any wrong pronunciation; I don't speak German.
Bajohr is also active as a writer of digital literature.
One of his most important works is the novel Berlin, Miami, published in Berlin in 2023, which was co-written with a self-trained large language model.
So we are very happy to listen to your presentation.
Thank you again for participating.
And now the floor is yours.
Thank you.
Thank you so much.
Thanks for the invitation.
Thanks for the introduction and for the opportunity to speak.
So, yeah, my presentation is called "On Artificial and Post-Artificial Text."
And I want to talk about the effects, on the reception side, of the synthetic production of text.
So it's been an eventful two years since ChatGPT was released in November 2022.
And although the technology underlying this large language model has been around since 2017, ChatGPT was really the first time an LLM has been easily accessible not only to the privileged few, but to the public at large.
In other words, it became possible for everyone to experience, and get used to, neural-network-based text synthesis that emulates human writing.
No matter how one gauges the perils or merits of large language models, it seems likely that they herald an era in which the texts we encounter may be entirely generated by a machine.
Practitioners of the hermeneutic disciplines are then called to consider what impact the current rapid advances in machine learning research might have on the way in which we understand and interpret texts.
So in this talk, I want to contribute to an answer by rephrasing and thus maybe delimiting the question a little bit.
What will be the impact of artificial writing on the readers' expectations of unknown text?
I therefore turn to the standard reception of generative AI's output, and I shall address two facets of this question.
First, what happens when we are confronted with artificial texts alongside natural ones, those that are written in the traditional way?
How, in other words, do we read a text when we can no longer be sure that it was not written by an AI?
And second, what direction might this increasing doubt as to a text's origin take if at some likely point the distinction between natural and artificial itself becomes obsolete, so that we can no longer even seek to differentiate and instead read post-artificial text?
I will discuss the impact of AI-generated text on the reader's expectations in three steps.
First, I trace its origins back to early computerized text experiments of the 1950s and 60s, when the assumption that humans are the originators of text becomes apparent because for the very first time an alternative to it exists.
I suggest that AI development has implicitly used this standard assumption of a text's human origins by at least in principle aiming to deceive users into thinking they are interacting with humans.
A second phase then is reached as AI text generation advances, producing increasingly naturalistic outputs, and readers move from simply assuming a human behind a text to actively doubting the text's origin.
With AI permeating writing tools and producing vast amounts of text, it may then become impossible to distinguish artificial from natural text, but also impossible to bear this constant uncertainty.
I speculate that we may thus eventually reach a new post-artificial situation in which the standard expectation of text is replaced by an agnostic stance in regard to its origins.
Such text, I argue, would tend to be read as authorless by default.
Finally, I discuss how literary text which emphasizes a stronger notion of authorship may resist this transition longer through deliberate linguistic experimentation or by emphasizing a human maker.
By extrapolating from the current technological situation, the talk attempts to think through some of the possible consequences of large language models and related technologies for interpretive situations and hermeneutic strategies at a time when we can no longer assume that any reasonable, complex, and coherent text was written by a human.
Now, that there is a standard expectation that readers have when confronted with unknown texts at all, namely that they are "natural", written by humans, becomes observable only once there is an alternative to it, when there are also artificial texts.
The distinction between natural and artificial text is not mine.
It was introduced by the German philosopher and author Max Bense in his 1962 essay on natural and artificial poetry.
While arguably too strict (probably no text is purely natural or artificial), it provides a coherent articulation of the difference between human- and computer-written text, and it allows one to see where this difference starts to collapse in the present.
In his essay, Bense considers how non-intentional, computer-generated literature differs from intentional literature written by humans.
He focuses on the mode of creation behind these texts, what happens when an author writes a poetic text.
For Bense, this is clear in the case of natural poetry.
For a text to have meaning, it must also be linked to the world via a personal poetic consciousness, as he writes.
For Bense, language is largely determined by an ego relation and a world aspect.
Speech emanates from a person, no matter what they say, that person is always the one speaking.
At the same time, in their speech, the speaker always refers to the world.
Poetic consciousness then puts being into signs, that is, the world into text, and ultimately guarantees that one is related to the other.
Without this consciousness, Bense holds, the signs and the relationships between them would be meaningless.
For a computer, words are only empty symbols, operative variables that are devoid of intrinsic meaning.
It is precisely this case that Bense's second category, artificial poetry, describes.
By this he means literary texts that are produced through the execution of a rule, an algorithm.
In them, there is no longer any consciousness and no reference to an ego or to the world.
If these texts mean anything, then only accidentally so and only for a human reader.
Instead, such texts have purely material origins.
They can be described only in terms of mathematical properties such as word frequency, distribution, degree of entropy, and so on.
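The statistically measurable properties named above can be illustrated with a short sketch. Word frequencies and Shannon entropy are two of the measures the talk mentions; the toy sample below, echoing the stochastic-text phrases, is an assumption for illustration only:

```python
# A minimal sketch of describing a text purely through measurable
# properties: word frequencies and the Shannon entropy (in bits) of
# the word distribution. Illustrative only; Bense's information
# aesthetics involved more than these two measures.
import math
from collections import Counter

def word_frequencies(text):
    """Count lowercased whitespace-separated words."""
    return Counter(text.lower().split())

def shannon_entropy(freqs):
    """Shannon entropy in bits of a word-frequency distribution."""
    total = sum(freqs.values())
    return -sum((n / total) * math.log2(n / total)
                for n in freqs.values())

sample = "not every castle is old not every day is old"
freqs = word_frequencies(sample)
print(freqs.most_common(3))
print(f"{shannon_entropy(freqs):.3f} bits per word")  # about 2.522
```

Nothing in these numbers refers to a world or an intent; they are exactly the kind of "purely material" description the talk attributes to the computer's view of text.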
The subject of an artificially generated text, then, even if the words should happen to designate things in the world for us, is no longer actually the world, but only that text itself.
It is the measurable, calculable, schematic object of an exact textual science.
If natural poetry, then, originates in the realm of understanding, artificial poetry is a matter of mathematics.
It does not want to and cannot communicate, and it does not speak of a shared human world.
Bense's thrust, however, is not to rescue a romantic idea of an inexplicable human creative power by setting it off against the soulless computer.
On the contrary, the author as genius is dead and buried here.
Instead, Bense wants to know what can still be said aesthetically about a text if one disregards traditional categories such as meaning, connotation, or reference.
The answer he presents in his information aesthetics, which is strictly positivist and in the tradition of Shannon and Weaver's communication theory, considers only statistically measurable textual properties.
Artificial poetry, then, precisely because it is meaningless, is also pure poetry.
It gets by entirely without the assumption of an underlying consciousness and is an independent aesthetic object that can be investigated immanently.
Bense himself was involved in several experiments with artificial poetry.
The most famous of these was certainly the stochastic texts, which his student Theo Lutz produced on the Zuse Z22 mainframe computer at the University of Stuttgart in 1959, and which can be considered the first German-language experiment with digital literature.
These texts are stochastic because they are randomly selected and assembled from a given vocabulary.
The fact that these words are taken from Franz Kafka's "The Castle" hardly makes the output any more substantive.
It includes phrases like "not every castle is old", "not every day is old", or "not every tower is large", or "not every look is free".
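The generation procedure behind such phrases can be sketched in a few lines: draw a subject and a predicate at random and slot them into a logical template. The vocabulary and template below are assumptions modeled on the published examples, not a reconstruction of Lutz's actual Z22 program:

```python
# Toy Lutz-style stochastic text generator: random subjects and
# predicates slotted into a quantified template. The word lists are
# illustrative assumptions drawn from the example phrases above.
import random

SUBJECTS = ["castle", "day", "tower", "look"]
PREDICATES = ["old", "large", "free"]
QUANTIFIERS = ["every", "not every"]

def stochastic_line(rng):
    """Assemble one sentence by weighted-free random choice."""
    return (f"{rng.choice(QUANTIFIERS)} {rng.choice(SUBJECTS)}"
            f" is {rng.choice(PREDICATES)}")

rng = random.Random(1959)  # seeded only for reproducibility
for _ in range(4):
    print(stochastic_line(rng))
```

However many lines the program churns out, no consciousness stands behind them, which is precisely what makes them artificial poetry in Bense's sense.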
Lutz then printed selections in Bense's literary magazine Augenblick in 1959.
The stochastic texts, as I said, were one of the first examples of natural language processing in Germany, and they proved that computers could operate not only on mathematical symbols but also on language.
They were also artificial poetry in Bense's sense.
No matter how many variations the program churns out, there is no ego expressing itself and no consciousness standing behind it all, vouching for the meaning of the words, which are merely concatenated according to weighted random operations.
That the computer itself could actually be the author of this text seemed absurd to both Lutz and Bense, but both knew how it had been produced.
Whether its artificial origin can be recognized is less clear.
The readers of the magazine Augenblick were not compelled to ask this question, and an accompanying essay enlightened them to the details of its creation.
So they knew it was computer-generated.
But when, the following year, Lutz generated a second poem according to the same pattern, titled "And No Angel Is Beautiful" ("Kein Engel ist schön") and using Christmas vocabulary instead of Kafka, and published it in the December issue of the youth magazine Yard 9, there was no explanation to be found.
The poem was placed on page 3 among the miscellanea, just like any other poem.
Only the author's name, Elektronos, might have allowed one to guess who or what was behind the text.
The next issue solved what most readers had not even identified as a riddle, namely that a computer had written the poem.
Clearly Lutz was having fun here, as is evident from the ironic captions under the photo of the author, the Z22, and the second poem in the poet's handwriting, as Lutz put it, that is, a teletype printout.
On the same page he published a series of letters to the editor.
The writers, without knowing how it had come about, were quite divided in their assessment of the poem.
"Perhaps you should reconsider whether you want to open the columns of your paper to such 'modern' poems," complained one, while another was, on the contrary, impressed by the avant-garde stance:
"Finally, something modern."
A third reader was at least open-minded.
"To be honest, I don't understand your Christmas poem, but somehow I like it anyway.
One has the impression that there is something behind it."
Evident in these reactions is what I would call the reader's standard expectation of unknown texts.
The Elektronos poem was indeed artificial poetry in Bense's definition, a computer-generated text without a meaning mediated by an authorial consciousness.
But because its readers were unaware of the conditions of its production, they took it for a natural text and assumed it was written by a human with the aim of communicating meaning.
The standard expectation of unknown texts, then, can be captured as a relationship between two elements, which sometimes is extended by a third.
First, that the text has an originator, namely that a human, or sometimes more than one human, wrote it.
And second, that the text has intentional and semantic content, that a communicative will to meaning, sometimes understood as reference to a world, is expressed in it.
In some cases there is also, third, the text's connection to an author function: that this constellation can and should be subsumed under the name of an author, which organizes the attribution and circulation of some texts, but not all.
But as central as authorship is for literary studies, I believe the first two elements are more important in this context, and I will therefore foreground them.
I take pains to separate these three elements because they are not identical.
The first element, the human, for instance, does not by necessity imply the second, the intent to communicate, say, in the case of Dionysian writing frenzies.
Nor does the second by necessity imply the third, the author, since an author is more than just one intending a text.
Further, both intent and authorship can be uncoupled from the human originator, because historically we know of texts that were read as not having been brought about, or authored, solely by humans, such as holy books that express a divine will, even though the instance recording them is usually human.
I think that Moses's tablets are one of the few exceptions.
And while the intentional element may be internally divided in the debate about the relationship between meaning, reference, and communicative intent, a big topic in AI texts at the moment, most of these debates ignore that the assumed originator is more than just a nondescript subject, but is often, possibly mostly, specifically understood as a human.
One should at least ask whether the assumption of intent alone is enough for readers to designate something like a computer as an author, since it is often precisely the simulation of humanness that acts as the assurance of intent in the first place.
I will come back to this point.
Important for now is that a reader's standard expectation of unknown text is at minimum this, that it was written by a human who wants to say something.
That there is a standard expectation at all, even if it may be limited to modernity or Western modernity, has only become apparent since there has been an alternative to it, and Bense's conceptual distinction and Lutz's practical demonstration have been illustrative here.
As the incensed letters to the editor show, recognizing a text as violating the standard, that is, as artificial, always requires additional information; if it is not provided, a human origin is assumed.
In this regard, Lutz had indeed given his readers the runaround, as one letter to the editor complained, not because a modern poet had written bad but natural poetry, but because a computer had generated meaningless, because artificial, text.
Passing off an artificial text as a natural one was not just the debut of a rather hackneyed joke played by a computer scientist in a provincial youth magazine in the 1960s.
On the contrary, this runaround is the principle of artificial intelligence, and at the same time that which connects language technologies with an imputed humanness as an element of the standard expectation.
Ten years earlier, in an article that became the founding document of artificial intelligence, the computer science pioneer Alan Turing had pondered whether computers could ever be intelligent.
Turing famously rejected this question as wrongly posed.
Intelligence as an intrinsic quality could not be reliably measured.
In good behaviorist fashion, he therefore replaced the question with another.
If we assume that intelligence is a property of humans, then all we need to find out is when humans would consider the computer to be itself human and thus intelligent.
Note that Turing at no point here speaks of intent, a category he finds meaningless.
The experiment setup is well known.
Three participants, a human judge, a human respondent and a computer, communicate solely through text via a teletype printer, with the judge's task being to determine which respondent is the machine based on their answers.
If the judge cannot reliably distinguish between the human and the machine, the machine is considered to have demonstrated human-like intelligence in that context.
Of course, the original setup has a gender component that I'll pass over here.
The point is not that the answers in this conversation have to be correct, but that they sound human.
Lying and bluffing are explicitly allowed.
If one wants to examine the reader's expectations of artificial text, Turing's test is still a helpful starting point.
First, because his setup assumes a teletype conversation and thus equates intelligence with written communication.
And second, because the goal of this communication is to misrepresent signs that are meaningless to the machine as meaningful to humans.
To put it bluntly, the essence of AI is to pass off artificial texts as natural ones.
It is only worthwhile to make this attempt at all, however, because the standard expectation of unknown text is that of human origin.
Artificial intelligence, then, as a project, if not in each of its actual instances, is based on the principle of deception from the start.
For this reason, the media scholar Simone Natale writes, "Deception is as central to AI's functioning as the circuits, software and data that make it run."
The goal of AI research, he says, is "the creation not of intelligent beings, but of technologies that humans perceive as intelligent."
With an eye to Turing, one might add, as close as possible to being human.
I would like to call this position strong deception.
Yet because this conception of AI is essentially deceitful, we can ask whether the expectations of a text could ever change under these conditions.
I think not.
Strong deception insists that artificial and natural texts remain neatly separated so that one can be mistaken for the other.
If it is suddenly revealed that a natural text is in fact an artificial one, its readers will feel cheated, and not without reason.
Deception here turns into disappointment.
We don't know how Theo Lutz's readers reacted to the revelation that the computer had written the poem, but one can guess if one considers recent cases in which the artist subsequently turned out to be a machine.
This prominently happened in June 2022 at a rather peripheral art prize.
When a participant admitted that he had not painted his entry himself, but that it had been generated by the text-to-image AI DALL-E, a torrent of indignation followed, and he was accused of fraud.
Even though this was a prize for digital art, "digital" apparently referred only to the tools.
The artist himself was still supposed to be human.
Again, it is the originator, not the presumed intent, that was at issue here.
A somewhat similar case occurred in Japan, where the author Rie Kudan, winner of the 2024 Akutagawa Prize for Literature, disclosed in an interview that she had generated portions of her prize-winning novel using ChatGPT.
Again, public scorn followed, and she was accused of fraud.
There are other such examples, and although these disappointed expectations are usually exaggerated in the press, they reveal what was actually expected, namely natural, not artificial, text.
However, neither the neat separation of Bense's ideal types, nor this expectation itself, can remain intact as soon as the exception becomes the rule, that is, as soon as we are surrounded by text whose origin is unclear.
At first glance, such examples seem to suggest that the reader's expectations of unknown text have actually not changed since Lutz's time.
We assume human origins and communicative intent, which is why deception can be a useful strategy in AI design in the first place.
But in fact, I believe that expectations are nevertheless already in the process of shifting, and this has become even clearer in the last two years.
Because the amount of computer-generated text is constantly increasing, and because we ourselves are writing ever more with, alongside and through language technologies, we are on the way to a new expectation, or rather a new doubt.
The more artificial text there is, the more the standard dissolves and the question of the origins must arise, even when we normally would not think about it at all.
That there nevertheless seems to have been no shift in these cases can be explained by the fact that the examples of text I have considered so far are special ones: they are literary texts, texts that are marked as exceptional in our cultural tradition.
They appear to be intended and worked through to the smallest detail, and they, more than other texts, are read as having an author who is human, who wants to say something.
And yes, this is still true even after 60 years of talk of the death of the author.
This also has been a lesson of the last few years.
It is worth taking a look at the other side of the spectrum first, however, at those rather unmarked automated texts that remain in the background, that are merely functional and that do not assert themselves as products of a human intent or a strong notion of authorship.
For them, the Turing test is simply a false description of reality, and the standard expectation is already in the process of fraying.
For there are forms of human-machine interaction other than strong deception and other text types than the artificial-natural partition would suggest.
Especially when engaging with interfaces, we are likely to find ourselves in an intermediate stage between natural and artificial.
Here already, we can experience a looming shift in the standard expectation.
It is quite possible to know that something has been produced by a non-intelligent machine while at the same time treating it as if it were conscious communication.
In fact, this is quite normal.
Natale has proposed the term "banal deception" for this phenomenon.
In contrast to what I have called "strong deception", here users are aware that they are being deceived.
We understand that Siri is not human and does not have an inner life, but smooth communication with her works only if we treat her, at least to some extent, as if she had one.
Knowing this is not a contradiction that suddenly and unexpectedly destroys an illusion, as in the examples of competitions in which an AI participates surreptitiously.
Instead, a banal deception becomes a condition of functionality.
If I do not play along, Siri will just not do what I want.
The situation is very similar with written text.
It starts with a dialog box on the computer screen like this.
After all, the question "Do you want to save your changes?" enables an interaction that is basically similar to the one with a human being.
The answer "yes" has a different effect than the answer "no", and both lie on a continuum of meaning that connects natural language with data processing, without one suspecting any meaningfully strong notions of intent behind it.
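An interaction like this can be reduced to a few lines of code, which makes the point concrete: the "conversation" is nothing but a mapping from token to operation. This is a minimal sketch with invented function names, not any real dialog API:

```python
# Minimal sketch of the "Do you want to save your changes?" dialog:
# a natural-language answer is mapped directly onto a data-processing
# operation. No intent or understanding is involved on the machine's
# side, yet the exchange works like a tiny conversation.

def dialog(answer: str) -> str:
    """Map a human yes/no reply onto a machine action."""
    normalized = answer.strip().lower()
    if normalized in ("yes", "y"):
        return "changes saved"        # "yes" has one effect ...
    if normalized in ("no", "n"):
        return "changes discarded"    # ... "no" has another
    return "please answer yes or no"  # the machine simply re-asks

print(dialog("Yes"))  # -> changes saved
print(dialog("no"))   # -> changes discarded
```

The answers lie on a continuum connecting natural language with data processing, which is all that the banal deception of such an interface requires.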
Something like this would already lower the expectations of unmarked text.
While we still act as if we expect human meaning and a conscious interest in communication, we bracket the conviction that there really must be such things involved.
This bracketing means discarding the third element, namely authorship, while entertaining the second element, intent, in an assertive modality, and possibly the first, human origin, in a fictional one.
Yet this bracket does not always proceed smoothly.
Banal deception is an "as if" that demands of us the ability to hold a conviction and its opposite simultaneously.
This self-contradictory position quickly gives rise to a doubt.
The more convincing artificial texts become, and the more the aesthetic impression they make on us suggests something like human-like intent, the more difficult does it become to feel comfortable in the limbo into which banal deception lures us.
If artificial texts become indistinguishable from natural ones, and if moreover we know that computers are capable of writing them, a new standard expectation of unknown texts lies before us.
It is the doubt about their origin.
Rather than taking a human source for granted, or simply deferring the question, the first thing we would want to know about a text would be, was it made by a human or a machine?
With the increasing integration of large language models into existing software (Google Docs now incorporates Gemini-based assistants, Microsoft Word uses ChatGPT), it becomes increasingly difficult for readers to clearly classify such texts as either human-made or machine-generated.
The stakes seem relatively low when it comes to LLM-written marketing prose, but what about the lawyer's letter that might be automatically generated even though it's about my own personal case?
What about my students' essays that I have to grade?
What about political articles or fake news stories?
What about the private, personal, intimate email, the love letter?
Are those AI products too, in whole or in part?
At least one reason for the discomfort these ideas evoke is that people have a stake in what they write and to varying degrees, they vouch for their words.
While scholars of literature have learned to read without an eye to what an author wants to say, and merely regard the semiotic interplay of signifiers, this is still the mode in which most everyone else reads any written document.
Even if a text ultimately turns out to be inaccurate or misleading, the standard expectation that a recipient brings to reading it involves the assumption that the author is making what Jürgen Habermas has called "validity claims", among which the validity claim to truthfulness or sincerity is maybe the most relevant in this context.
Essentially, it means that we have a basic level of trust that speakers or writers mean what they say rather than try to deceive.
This is the reason that reading critically has to be learned at all.
Whether or not readers ultimately judge a text's assertions to be true, they tend to assume the existence of a writer who does.
That's the difference between truthfulness and truth.
If truthfulness is thrown into crisis once large language models can generate texts that appear to have been produced and sanctioned by an author, the same can be said of truth, that is, correctness, another of Habermas's validity claims made in speech acts.
We know that LLMs are still lacking when it comes to the handling of knowledge, understood as reporting pre-established facts or correct data, since for them knowledge is simply a probability distribution of tokens over training data.
They may merely learn the form of a specific genre without any scientific insight, responsibility or accountability.
Among many examples of LLM-made scientific papers, I still find most striking the ChatGPT-generated legal brief, filed by a Manhattan lawyer in 2023, which referred to non-existent laws and cases and would have fooled anyone not familiar with the law in question. And even those who were, as this lawyer aptly demonstrated.
The textpocalypse, then, as Matthew Kirschenbaum calls it, the proliferation of generated text, is also a crisis of truth and truthfulness.
To maintain the standard expectation, with its separation of natural and artificial text, its assumption of human origin and, for especially marked texts, of authorship, is a challenge under such circumstances, to say the least.
Given the crisis of the standard expectation, it is not unreasonable to suggest that it is already shifting from the conviction that a human is behind a text to the doubt of whether it might not be a machine after all.
But this would also make the distinction between natural and artificial text increasingly obsolete.
We would then possibly enter a phase of post-artificial text.
By this I mean two related but distinct phenomena.
On the one hand, post-artificial refers to the increasing blending and blurring of natural and artificial text.
Of course, even before large language models, no text was truly natural.
Not only can the mathematical distribution of characters on the page, as Bense had in mind, also be achieved by hand; it is a truism of media studies that every writing tool, from the quill to the pen to the word processor, leaves its mark on what it produces.
On the other hand, no text is ever completely artificial.
That would require real autonomy, an actually strong AI that could ultimately decide for itself to declare a text published.
Today, however, with AI language technologies penetrating every nook and cranny of our writing processes, a new quality of blending has been achieved.
To an unprecedented and almost indissoluble degree, we are integrating artificial text with natural text.
In the wake of large language models, it is not implausible that the two types of text might enter into a mutually dependent circular process that irreversibly entangles them.
Since a language model learns by being trained on large amounts of text, so far, more text always means better performance.
Thinking this through to the end, a future monumental language model will, in the extreme case, have been trained on all available language.
According to one study, this may happen already in the next few years.
Let's call it the last model.
Every artificial text generated with this last model would then also have been created on the basis of every natural text.
At this point, all language history must grind to a halt, as the natural linguistic resources for model training would have been exhausted.
This is the Ouroboros problem. LLMs trained on synthetic text, the only text left for them to swallow, suffer from sudden degeneration, a phenomenon called model collapse or autophagy.
More importantly, the language standard thus attained would in turn have an effect on human speakers again.
It would have the status of a binding norm, integrated into all the mechanisms of writing that build on this technology, and which would be statistically almost impossible to escape.
Any linguistic innovation, any new word, and every grammatical quirk that might occur in human language would have such a small share in the training data of future models that it would be averaged out and leave virtually no trace.
This is of course a deliberately exaggerated scenario.
As a thought experiment, however, it shows what post-artificial text might be in the extreme case.
But even before that happens, halfway to the eschaton of absolute blending or erasure of natural and artificial language, a new standard expectation of unknown text might already emerge.
This is the other meaning of post-artificial and the one I am primarily concerned with here.
After first the tacit assumption of the human origin of a text, and second the doubt about its origin, it would be the third expectation of unknown text.
For doubt about the origin of a text, like any doubt, cannot be permanent.
Humans have an interest in establishing normalcy and reducing complexity and uncertainty to tolerable levels.
Already, mechanisms are being put into place to keep the doubt in check, either by assurance or by decree: by digital certificates, watermarks, and other security techniques designed to increase confidence that the text at hand is not just plausible nonsense, or by laws.
However, to quote Hegel, one bare assurance is worth just as much as another, and the law does not abolish the crime.
The fact is that technical checks can always be circumvented, and there is currently no surefire way to detect AI-generated text.
In fact, there are good reasons to believe that none is possible in principle.
What this means, however, is that once it is possible to question whether a text might have been generated by a machine rather than written by a human, no certificate or law can extinguish that doubt.
If it can neither be resolved nor, as I believe, borne permanently, the only solution is to undo its premises.
Should political regulation and technical containment fail, then, it is not unlikely that the standard expectation itself will become post-artificial.
This is the second use of the term.
Instead of suspecting a human behind a text, or being haunted by scepticism as to whether it was not a machine after all, we simply lose interest in the question.
We might then focus only on what the text says, not who wrote it.
Post-artificial text would be agnostic about the origin.
If the standard expectation of unknown text is shifting, if it is increasingly riddled with doubt, perhaps even capitulating to such an agnostic position, why the ostentatious excitement over generated text in literary competitions?
This is, I think, because literature is slower than other forms of text.
And this is because, of all text types, literature makes the most emphatic claim to a willful human origin, all the while connecting it more forcefully than any other type of text to the historically grown notion of authorship.
I have already said that there are texts today whose origins do not pose a question.
They are unmarked.
A street sign has no author.
In our daily life, a news site's weather forecast is also practically authorless.
Until now, however, we have always assumed that a human being is behind it.
But under post-artificial reading conditions, nothing much changes if we simply make no assumptions at all.
My belief is that more and more texts will soon be received this way.
Put differently, the zone of unmarked texts is expanding.
Not only street signs, but also maybe blog entries, not only weather forecasts, but also information brochures, discussions of Netflix series, and even entire newspaper articles will tend to be unmarked.
And it is not unlikely that they too become writerless and in many cases authorless.
Literary texts, on the other hand, are still maximally marked today.
We read them very differently than other types of text.
Among other things, we continue to assume that they have not only a human writer, but also an author.
The consequence of this markedness is that art and literature themselves have recently become the target of the tech industry, namely as a benchmark to be used after other formerly purely human domains such as games like chess or Go have been cracked.
Now art and literature pose the latest yardstick.
Probably nothing would prove the performance of AI models better than a convincingly generated novel.
Ultimately, however, this hope is still based on the paradigm of strong deception.
Indeed, there is currently a whole spate of literary and artistic Turing tests to be observed that all ask, can subjects distinguish the real image from the artificial one, the real poem from the AI-generated one?
These tests mostly come from computer science, which as an engineering discipline likes to have metrics to measure the success of its tasks.
The problem is that they still presuppose the rigid difference between natural expectation and artificial reality.
This seems to me of little use when it's this difference itself that is at issue.
More interesting then is the question of the circumstances under which this difference becomes irrelevant.
In other words, what would have to happen for literature to become post-artificial?
I will close by briefly trying to sketch an answer to this question and by returning to the standardisation tendency that arises from the run towards the last model.
In these models, a normalisation takes place.
The outputs are most convincing precisely when they spew out what is expected, what is average, what is statistically probable.
The more ordinary a writing task, the more easily it can be accomplished by AI language technologies.
And just as marketing AIs now assist in the creation of marketing prose that meets our expectations for marketing prose, there are also literature AIs that assist in writing predictable literature, as one might say.
Predictability, as that which can be expected, may be described statistically as a probability distribution over a set of elements or, on a higher level, a set of patterns.
The more recurrent they are, the more likely and expectable the outcome.
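This statistical sense of predictability can be sketched as average surprisal over a token distribution; the lower the value, the more "expectable" the text. The two sample sequences are invented for illustration:

```python
import math
from collections import Counter

# Sketch of predictability as a probability distribution over recurring
# elements: a text built from recurrent patterns has low average
# surprisal (in bits); an idiosyncratic one has high surprisal.

def average_surprisal(tokens: list[str]) -> float:
    counts = Counter(tokens)
    total = len(tokens)
    return sum(-math.log2(counts[t] / total) for t in tokens) / total

formulaic = ["it", "was", "a", "dark", "and", "stormy", "night"] * 20  # pure recurrence
idiosyncratic = [f"token{i}" for i in range(140)]                      # no recurrence

print(average_surprisal(formulaic))      # low: highly expectable
print(average_surprisal(idiosyncratic))  # high: maximally unpredictable
```

A language model's outputs sit, by construction, near the low-surprisal end of this scale.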
One popular prediction, and one that seems plausible to me, is that genre literature, which is virtually defined by the recurrence of certain elements, is particularly suitable for AI generation.
A by now famous case is that of Jennifer Lepp, who writes fantasy novels under the pseudonym Leanne Leeds as if on an assembly line, one every 49 days. In this process, she is aided by the program Sudowrite, a GPT-based literary writing assistant that continues dialogues, adds descriptions, rewrites entire paragraphs, and even provides feedback on human writing.
The quality of this output is quite high, insofar as its contents are just expectable.
Since most idiosyncrasies are averaged out in the mass of training data, they tend toward a conventional treatment of language within the bounds of a certain genre.
They become Ouroboros literature themselves, maybe.
At the moment, machine learning is not enough to generate entire novels, but I do not see why just this kind of literature could not be produced in an almost fully automated way very soon.
Maybe aided by a human acting as an art director or series editor, reducing the 49 days to 49 minutes or even less.
If the prediction is allowed, I think it would be this kind of literature that is most likely to become post-artificial.
Of course, author names would not disappear, but they would function more as brands, representing a particular tested style.
Just as some book series today, like Tom Clancy's, are written entirely by committee, though we still assume them to be collaborations between humans.
The unmarked zone would extend to certain areas of literature, not all, and certainly not all narrative ones, but far more than it encompasses today.
Conversely, we might ask what kind of literature is most likely to escape this expansion.
Here I see two answers that at first glance seem contradictory.
If the unmarked post-artificial literature is one that absolutely mixes natural and artificial text, then writing that clings to this marking would be one that emphasizes their separation.
On the one hand, then, one could imagine the emphasis on human origins as a special feature.
Again, ex negativo, we can already observe phenomena that point to such a development.
On the web, for instance, artists have been up in arms against image-generating AI, such as DALL-E or Stable Diffusion.
They recognize stylistic features of their own work in the generated output, which may therefore have been part of the training set.
This raises very legitimate questions about copyright and fair compensation, a discussion that is unresolved and ongoing.
At the same time, however, there is also resistance to AI-generated art per se, which some fear threatens to make human artists obsolete.
On social media, the hashtag #SupportHumanArtists has emerged as a declaration of war against generative image AI.
One can imagine something similar for literature.
Perhaps even a future in which the label "guaranteed human-made" is considered to be a distinction.
Just as one buys handmade goods on Etsy, one can imagine a kind of boutique writing that boasts of its human origin as a proof of quality and as a selling point.
Such re-humanization could boost certain genres of literature, such as the now popular auto-fiction.
Playing with the identity of author and narrator, auto-fiction insists on the human origins of a text.
The same goes for experience-based non-fiction, such as the memoir or the personal essay.
They too make the human behind the text the condition of its reception, and are thus especially suited to confronting the post-artificial situation.
But if one does not want to rely solely on an external assurance of human origins, which in any case leaves room for a doubt that is in principle impossible to dispel, an unpredictable, unconventional use of language can indicate writing beyond the model's abilities.
Every formal experiment, every linguistic subversion, would oppose the homogenizing probability of large language models, their leveling Ouroboros standard.
Linguistic unpredictability would then be evidence of human origin.
In the most extreme case, the sign system in which language AIs operate would be exploded, as in the case of visual and asemic literature, like that of Kristen Miller.
She no longer uses any letters at all, but only the impression of lines and blocks of text.
The pure poetry Max Bense dreamed of would paradoxically not come from the machine, which now, in post-artificial blending, plausibly simulates meaning, but from people who no longer attempt to make sense at all.
Thus understood, and this is the second route of highlighting a text as non-AI-made, the descendants of Lutz and Bense at least have a chance of escaping the post-artificial situation by continuing to mark the artificiality of their products.
This is digital literature, literature that is self-reflexively produced with the help of computers.
It can escape the post-artificial, at least to some degree, by consciously emphasizing the entanglement between the natural and the artificial, rather than glossing it over for a natural-seeming appearance.
Much more than conventional writing, digital literature always keeps a critical eye on its origins.
One example was Mathias Kuhn's German-language book Selbstgespräche mit einer KI, or Monologues with an AI, in which, in addition to his literary experiments, Kuhn also provides the source code for training the language model and even its training data, which is a very small sliver of text.
The human and machine components that together produce text can, not completely, but at least somewhat, be separated here.
Conversely, a deliberately staged human-machine collaboration can also have this analytic effect.
In David Jhave Johnston's ReRites, begun as early as 2017, the author trained a language model every night for a year and then edited its output by hand the next morning, in a process he calls "carving". The point at which the machine hands over its text to the human, Jhave, is precisely marked, and by collecting the edited results of each month in a book, so that ReRites now comprises 12 heavy volumes, he also frames this collaborative, but not absolutely fused, process as a performance, one that is not conventionally literary.
Of course, no proof of human intervention is ultimately provided here either.
But perhaps such obstacles, put in the way of an all-too-smooth reception process, are the maximum of resistance to the post-artificial that will still be possible before the difference between natural and artificial has disappeared altogether.
It should have become clear that I have entered highly speculative territory here.
I am not suggesting that narrative or, broadly speaking, conventional literature is now doomed, or that only experimental and explicitly digital literature is worth pursuing.
Nor do I mean to imply that post-artificial texts are necessarily bad.
One can certainly enjoy reading them, discuss their merits, and unravel their interpretive dimensions.
Here, I have been primarily interested in analysing tendencies.
And for this purpose, it is worthwhile to consider possible extremes.
As far as literature is concerned, there is of course a third possibility, that AI may, through some technological innovation, be steered to produce less probable and more interesting output, without losing all the advantages that the power of normalisation provides in coherence and meaning production.
It is a matter of optimism, or maybe lack thereof, to consider this a likely or unlikely future.
Seeing that sufficiently large LLMs are still tied to capital interests, I remain apprehensive.
In this talk, I wanted to think about how language is changing in the technical age we inhabit today, and which will continue to unfold, both without fearing the technology, but also refusing to succumb to its ideologies.
In that context, one thing seems certain to me.
With the increased penetration of language technologies, with the triumph of AI models, our expectations as readers will change.
Thank you.
Thank you so much, Hannes.
That's a very stimulating, disturbing perspective.
I am currently beginning a book on what I would call the second death of the author, after the first death of the author, claimed in the structuralist era, when it was decided that, because there were so many texts underneath any new one, it was the history of language and the history of literature that was speaking, rather than the actual author.
I would be very interested to read the paper you certainly have about it and to quote it in my research.
I have many questions.
As you know, I am a scholar in literature.
I have a book on the history of the idea of literature, so it's very disturbing.
Because when you have literature, as you said, you have a lot of values coming together, like authenticity, like expressivity, and maybe not just the value of the truth of what is said, but all that comes with the idea of having someone communicating with you.
There is a very famous quote by Blaise Pascal saying, "You are looking for an author, but in fact you are looking for someone, for a man."
Maybe behind the abstraction of authorship, what we are looking for in a book is a kind of authenticity of experience, in comparison to what we get in electronic literature.
For instance, in France, about 50% of the books published are written in the first person, because they claim authenticity, they claim to give an account of oneself.
To quote Judith Butler, something that is very situated, that is anchored in something that is ground in some experience of life.
So I think there will be a lot of resistance to completely artificial authorship, but I don't want to monopolize, and I give the floor to everyone, because what you propose is really stimulating.
It is the same in the case of photography, because we no longer presume that a photograph gives us a piece of reality, as Roland Barthes wanted to say: when you get a photograph, you get a piece of reality. It's not true anymore, so what happens in photography is a kind of post-artificial reading position.
Maybe I can directly answer this, because I think this is an interesting question of photography, CGI, post-truth, the treachery of images and so on.
But I believe this is exactly not the same case with post-artificial writing, because in the case of computer-generated images and photography, the depiction of reality, or the representation of reality, is what is at issue.
A text is not a representation of reality in the same way that a photograph, which comes through a lens, is an index of an outside world and then is received by some kind of surface.
So I think the difference here may be one between noesis and noema, in the Husserlian sense: between the object of consciousness and the way it is given to consciousness. In other words, in a photograph, the noema, the object, is at issue.
It could be false or true.
And in post-artificiality, I think it's rather the givenness itself, namely whether this is an expression of the content of a human mind or not.
And I think that's a difference in structure to me, because when we write something down, we don't represent reality.
We can represent all kinds of things.
It can be fiction.
But the way it is given through the mediation of a human author has this mark or the origin of the human inscribed in that.
So I would just take pains to see what the differences between these two media are, because I think they are somehow different.
I understand.
I understand.
What does the public think about all of it?
Let's listen to possible questions.
I had a remark, maybe.
Can you hear me?
Yes, absolutely.
Yes, thank you for your very stimulating presentation.
I was thinking while listening about maybe other examples of not text, but stories where authorship doesn't matter.
And I thought about two or three examples.
First, maybe stories for children, where they don't seem to put any importance on who the author is or what the author's intention was. It's more a matter of whether the stories work or not, whether children grasp them and feel immersed in the story or not.
So that was the first example.
I was thinking also about stories in oral societies where they are passed from generations to generations with small variations, but where there is no clear author and in the same way, there is no real importance on any intention of someone on any subjectivity, but rather if it works or something else is at stake, I would say, like related to traditions and relation to the environment and so on.
Where even you could have non-human authors.
I was thinking about the Australian tradition where the author of the dreams is more the land and the characters are part of the land.
And it's not a human subjectivity.
So all these examples made me think maybe we are overestimating the importance of authorship, because we are scholars and educated people who put a strong emphasis on the context of the text and on the subjectivity that is being expressed, or adults in search of shared subjectivities.
But all these examples maybe put another emphasis.
So I just wanted to suggest these examples and see how you react.
I think these are great.
This is a very good idea.
And stories for children seems to me the one that makes the most sense.
Because oral societies can have authors in kind of a mythical way.
The whole idea of Homer being a person might be a retro-projection from a society that needs authors, but he was clearly already a name in circulation.
Although myth would be another category.
But myth is something that is in circulation not simply because there is no author but because there is no beginning to it.
No temporal beginning in time to myth.
I like the idea of children's stories particularly because that distinction that authorship introduces between reality and fiction, believability or something like this, doesn't seem to play a role.
And maybe one could then make a polemical argument that there is a certain type of infantilization coming with the idea of not having an author.
I wouldn't necessarily believe that because there is a distinction between the possibility of verification, if authorship is some kind of legitimation verification of the truth of what is said.
Post-artificiality does not assume that we can completely do away with it, but it has nothing to put in its place.
It is kind of a giving up.
It is a situation in which this is not possible, so we have to search for other strategies of dealing with that.
Which would be slightly different than the situation of a children's novel where this is never a question.
The verification of the truth of the matter is not an issue at all.
So I would think that there might still be a difference between those.
But I do like the reminder that these exist and that there are modes of reading that obviously don't need authors or don't need truth to that degree.
Maybe to elaborate the interesting aspect in children's books is the efficiency of the story.
And I was wondering: do we have examples of stories that are very efficient, in the sense that they are compelling and people want to read what comes next, and that were artificially produced?
I don't see any reason why it would not be possible to have compelling stories with no authors.
So where the focus would be more on the… The best word I can find is efficiency.
I'm not sure it's the best.
The efficiency of the story, like something intrinsic in the structure that makes it compelling.
I know, I mean, as I said, I don't think there is a really fully computer-generated novel in existence that would fulfill this criterion.
But a couple of years ago, I think by now, there was a slight scandal about the circulation of computer-generated children's cartoons on YouTube.
There was a whole wave of these, which are to a degree almost completely incoherent.
They are cut together from different bits of…, generated in a way that, if an adult watched them, it just wouldn't make sense to them.
But apparently, because a lot of parents give their child an iPad and just let playlists of Peppa Pig or whatever it's called run through, this was quite successful, because it was watched a lot.
I mean, we can think what that does to children if this kind of incoherence is the basis of learning how stories work.
But clearly, it seems that as a genre, there's something else to it than simply efficiency or compellingness.
It can also simply be the draw of it going on and on and on.
It might be the medium of the cartoon or the film rather than text that has to be read by a parent or something like this.
But all I wanted to say is, especially for children's media, there seems to be AI generated stuff, which is worrying, I believe.
I also have a question, if I may.
Yes, of course.
I totally get your points on photography and how it is different from LLMs.
And I agree with you.
However, forgive me, my question is going to be related to photography.
And it's regarding when you talked about deception and disappointment.
I'm very interested in the affective relationship we have with tools.
And I was thinking, because when I was a bachelor student, I remember this story one teacher told us.
And I actually never found a decent book about that.
So I would be happy to find it one day.
But I remember she said that photography actually wasn't taken seriously by the military in the 19th century.
Photography actually served more as propaganda, to illustrate the battlefields.
And so it wasn't a representation of reality.
It wasn't something in which the military trusted.
They trusted more the drawings than the photography.
Today, it would seem very crazy to say that.
But they had time to adapt, and eventually to choose photography over drawing.
And maybe we had the same story with drones.
I'm not sure.
I don't know anything about military history.
But it made me think also of the relationship we have nowadays with Google.
And how sometimes people say, "Oh, I googled this."
So it's true because I googled this.
Don't you think that one day the affective relationship, by which I mean the trust we have in the tool, might change from deception to full trust?
This is a very optimistic vision of it.
Yeah.
That's a good question.
I think there would be another possibility to think through that thought experiment.
I'm not sure, but it seems to me that there are reasons why we give trust and don't.
I'm not a scholar of photography, but I believe the original way it was described by Niepce, Fox Talbot and people like this is that it is an artless art.
It is one in which human intervention no longer plays a role.
One way to say this was that it's the pencil of nature.
The pencil of nature is drawing the photograph.
I think that the direct indexical relationship between what is out there and what is on the paper was the starting point.
And then it became more convoluted because then people noticed that while there might be truth to this, obviously photographs can lie.
Even before they are manipulated, the use of photographs can simply be propaganda, and so on.
I'm not sure if we have the same starting point for this parallel.
Because the whole point of large language models is the statistical production of language, which is from the start not concerned with truth or factual correctness.
So how do we get to a position of trust again when we start from non-trust? That is the reverse of photography, where you start with trust and then have to learn not to trust it.
I wonder what would have to happen for this.
And I think what would have to happen is all kinds of mechanisms of verification, giving credence to the statements, double checking things.
One thing people are interested in now is this RAG technology, the idea that you can plug a large language model next to a database.
And the idea is that the language model would formulate the question, translating it into a search query in SQL or some other database-retrieval language.
It could then retrieve something that, by its form as a database record, is somewhat verifiable and repeatable.
This might be one way to go, but I still think we have big problems with this.
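The retrieval setup described here can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any real product's pipeline: the question-to-SQL translation step, which a language model would perform, is stubbed out with a hard-coded query, and the database is a toy in-memory table invented for the example.

```python
import sqlite3

# Toy in-memory database standing in for the external knowledge source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, author TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?, ?)",
    [("The Pencil of Nature", "Fox Talbot", 1844),
     ("Frankenstein", "Mary Shelley", 1818)],
)

def question_to_sql(question: str) -> str:
    """Stand-in for the language model's translation step.

    In a real retrieval-augmented system an LLM would generate this
    query from the question; here it is hard-coded so the sketch stays
    self-contained.
    """
    return "SELECT author FROM books WHERE title = 'Frankenstein'"

def answer(question: str) -> list:
    # The retrieval itself is an ordinary, repeatable database query,
    # which is what makes the result checkable after the fact.
    query = question_to_sql(question)
    return conn.execute(query).fetchall()

print(answer("Who wrote Frankenstein?"))  # [('Mary Shelley',)]
```

The point of the design is exactly the one made above: whatever the model does with language, the retrieved record can be re-run and verified independently of the model.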
This is why you always see it, under Claude for instance. I used the new version of Acrobat Reader the other day, which now has an AI function.
You can ask it questions about the PDF you're reading.
It always says: please double check.
And I don't know if we can get away from this, given all the technological precedents I can think of in which we start with non-trust and move to trust.
One example would be Wikipedia, where, when I was in high school, everybody stressed that you cannot trust it and that it is not a source.
And I think we're not there anymore.
And this was achieved through basically the implementation of very complex policies and structures and processes to ensure verifiability, veracity and so on.
So obviously everyone is working on this, right?
All AI companies have a vested interest in making the technology more trustworthy.
But I'm not convinced that this is possible in the long run.
Obviously, this whole argument that once a doubt is in the world, you cannot extinguish it anymore is one made from extremes.
We will find probably some kind of modus operandi that allows us to work with these things.
But trust in the strong sense, I don't know, seems unlikely to be reinstated in some way.
Thank you very much.
Can I ask a question?
Yes.
So, I'm Yanis, I am a computer vision researcher, and I did a bit of early AI art before ChatGPT, when there was GPT-2, the first LLM that kind of worked.
And there was a version that a lot of artists used, called the SteelGPT-2, that was very easy to install on a normal computer.
And I remember we made a book, and I gave it to a French person, and the way this French person reacted was: this is not French.
So he basically said that no French people had worked on the book.
I didn't know French well at the time, but the book was in fact edited by French people.
So it was French in a way, but his reaction was basically a subjective one.
He reacted to the originality of the book.
So the book had a certain subjectivity and early models had subjectivity.
They gave this thing, this feeling that it was a person.
But what they lacked was a certain community.
They gave you something that you didn't really want or they didn't want you to make a community with them.
And with models now, we have another thing which is a bit like this subjectification.
So basically, right now we have something that reminds us of the writing that comes from computer science, which is a very different type of writing from the writing that comes from the humanities.
In that kind of writing you have to totally efface your subject.
Whoever writes it, the text should be the same.
Maybe some types of writing are better in the sense of how you transmit information, but your subject should kind of vanish.
And in both cases, we have a form of a non-psychological form of the machine in the sense of also Baudrillard that a machine has no other.
So the machine either is very narcissistic, very into itself.
In the other case, it's just a tool, something completely that operates on a certain task.
And the question that arises there: it's very clear that these models right now are not made for writers.
They are not made for people who write.
They are made to write.
And they are not created with writers.
But a very good question, which does not arise in the same way with AI art that has to do with making images, is that writing is very accessible.
Everybody can write.
Everybody can learn to write.
We can argue about how much this is true, but it's much more accessible, let's say, than doing a painting or other things.
And the question is, do writers really need that?
And how are writers integrated in this pipeline? Because it feels like something that is disconnected from writing; writers can integrate it in their pipeline or not, but it is a bit disconnected.
And what is your take on that?
What is the relation of this tool with writing?
Because it feels like the people who use this tool mostly don't write and don't want to write.
They use this tool to write, or they don't want to spend time writing, because it's a topic, of course.
But so, what is it?
I think this is true because this is also how it is being marketed.
Right.
With ChatGPT, usually, even though there's a lot of buzz about the notion of creativity, that is not the use case.
The use case is the production of increased efficiency.
Right.
And if you write in your job, if you have to write legal briefs, that's why that lawyer case is interesting: it makes things easier, but it also makes them more perilous, because the model might hallucinate.
So, yes, it's not made for writers.
And it is kind of a subsystem that can be implemented in all kinds of professional writing tasks in which writing only communicates certain basic functions.
If you write professional letters or something like this, the text doesn't have a poetic dimension.
It doesn't draw attention to itself.
However, something like what I showed you, Leanne Leeds and the software Sudowrite, is meant to augment writing.
And this is interesting because it's not complete generation, but a process of increasing the production of text, also from an efficiency standpoint, but with writers in mind.
So what you can do in Sudowrite is ask it to suggest the start of a paragraph.
You can ask it to rewrite a given paragraph.
It has little buttons for different sensory modes.
So you can select a paragraph and ask it to add more visual description, add more auditory description, add more metaphors, and so on, and keep writing.
So there is there is kind of the idea that even the process of writing literature is one that can be streamlined to efficiency by giving you these little shortcuts.
And this does become relevant, I think, as soon as writing is understood not as an art form or some kind of production of inherent value, but rather as a product that can be sold.
And I think in that respect, Leanne Leeds is a very good example, because exactly this, the production of more texts in shorter times, sold to an audience that wants these relatively predictable texts that are basically very similar to each other, is simply an economic argument.
And I think the distinction between true writing, with all its idealistic assumptions, and writing as a profession, in which you simply have to produce text in order to live, is maybe one that, also from our perspective, is not so tenable.
And maybe a sociology of writing that takes these subsistence and economic questions more into account would be helpful here, to see that writing is maybe not immune to this.
But in a way, you cannot say that someone has to be a novelist.
You can say that someone has to write a legal document, but you can never say that someone has to be a novelist, except perhaps if your father or your ancestors put a lot of weight on you because of their name or whatever.
Nobody really has to be a novelist.
It's definitely something that should come, or comes, of its own accord.
But what you say kind of reminds me of ghostwriting.
So all these people who are paid not to put their name on the text; it's kind of an automation of this already Mechanical Turk-style writing work.
But it doesn't seem like a tool for that, because what it really lacks is this creation of an audience.
And writing, like a lot of art, is unfortunately in a way anonymous.
It only makes sense for making communities: for the audience, for the writers themselves.
It's a very psychological need.
It's not a material need in the sense of solving a problem.
It's a material need in the psychological sense; let's say it has to do with humans.
Yeah, maybe.
But as soon as these communities exist, and as soon as a precedent for a certain text type exists, and especially as soon as there is a market for it, and there is a market for specific segments, right?
I mean, if you look at the Amazon bestseller charts, they don't have like high literature, but they have a lot of easily consumable mass produced literature.
And that is something where you can choose to get into this career.
I completely agree that it's probably not a good idea to write literature in order to earn money.
But it's still possible.
And I think for that, the automation argument is really one of increasing efficiency, particularly because, and I think this maybe comes back to this Ouroboros argument, we do not start from zero, right?
This transition into a different technical reality is on top of what we already have.
So we have those communities of readers, we have those precedents of style, and what a specific type of text is supposed to look like.
And from there on, we suddenly have this kind of increasing acceleration of automation.
And that maybe is this what produces this odd situation in the first place.
I would agree that maybe this wouldn't be possible if we started at zero, but we don't, right?
Yes.
Anyway, thank you so much.
Oh, and the other thing I wanted to say: there's this field of critical AI studies emerging at the moment.
So how can we read these texts?
How can we kind of situate them in their social, economic, political context, and so on?
And something that I hear over and over again, and I think it's worth keeping in mind, is what Fabian Offert has said: probabilistic systems require a probabilistic way of reading.
In other words, a single output of a model, be it ChatGPT or GPT-2 or whatever, doesn't really tell us much about that model.
Because a single sample is not enough to give you an idea of what this model is capable of, given that it is probabilistic.
This is different with something like Lutz and the kind of purely rule-based systems because there, if you have two outputs, you can analyze them and see what the structure of this is.
But given the breadth and variety of possible outputs, one or two samples just don't cut it with ChatGPT.
So I would always say what we as professional readers have to do is to develop a sense of how one can read these things, given that a single product is not enough to really form an opinion of their capabilities.
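The idea of probabilistic reading can be made concrete with a small sketch. A toy weighted-choice generator stands in for a language model here (an assumption for illustration, not any real system): one sample tells you almost nothing, while aggregating many samples begins to reveal the distribution the outputs are drawn from.

```python
import random
from collections import Counter

# Toy stand-in for a probabilistic text generator: each call samples one
# continuation from a fixed distribution over possible outputs.
# Duplicates in the list give "the sea was calm" twice the probability.
CONTINUATIONS = [
    "the sea was calm",
    "the sea was calm",
    "the sea was dark",
    "the storm arrived",
]

def sample_model(rng: random.Random) -> str:
    """Draw a single output, as a reader of one generated text would see it."""
    return rng.choice(CONTINUATIONS)

rng = random.Random(42)  # seeded for reproducibility

# A single output: on its own it reveals nothing about the system.
single = sample_model(rng)

# Many samples: the shape of the distribution becomes visible.
counts = Counter(sample_model(rng) for _ in range(1000))
for text, n in counts.most_common():
    print(f"{n:4d}  {text}")
```

The professional-reading point maps directly onto this: judging the model from `single` alone is like reviewing a die after one roll, whereas `counts` is the kind of evidence a probabilistic way of reading would require.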
And this is a very practical question.
Thank you so much.
I think, if there are no more questions, it's time to thank you so much.
I hope to meet you in person because I would like to have a full-length discussion with you.
Thank you.
Did you write some papers on which you based your presentation today?
Could you share them?
Yes, this is published.
Actually, it came out in the same issue in Poetics Today as yours.
Okay, it is in Poetics Today.
Because I have yet to read the full issue.
So thank you for the reference and have a good day in California and next time in Paris.
Bye-bye.
Thank you so much.
Bye.
Bye.
Bye.
Thank you, everyone.
End of nexa Digest, Vol 193, Issue 29
*************************************