Re: [nexa] nexa Digest, Vol 193, Issue 29
I confirm the absence of significant aggregate effects of AI, but watch the distribution: I attach a recent paper in which we analyze the employment impact in Europe. Best, Dario

On Mon, 19 May 2025 at 10:09 <nexa-request@server-nexa.polito.it> wrote:
Send nexa mailing list submissions to nexa@server-nexa.polito.it
To subscribe or unsubscribe via the World Wide Web, visit https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa or, via email, send a message with subject or body 'help' to nexa-request@server-nexa.polito.it
You can reach the person managing the list at nexa-owner@server-nexa.polito.it
When replying, please edit your Subject line so it is more specific than "Re: Contents of nexa digest..."
Today's Topics:
1. Generative AI is not replacing jobs or hurting wages at all, economists claim (Daniela Tafani)
2. Re: [Junk released by Allowed List] Qual è il senso...? (maurizio lana)
----------------------------------------------------------------------
Message: 1
Date: Mon, 19 May 2025 07:37:37 +0000
From: Daniela Tafani <daniela.tafani@unipi.it>
To: "nexa@server-nexa.polito.it" <nexa@server-nexa.polito.it>
Subject: [nexa] Generative AI is not replacing jobs or hurting wages at all, economists claim
Message-ID: <d722e75b06054ff8a300d0ca77220741@unipi.it>
Content-Type: text/plain; charset="Windows-1252"
Generative AI is not replacing jobs or hurting wages at all, economists claim
'When we look at the outcomes, it really has not moved the needle'
Thomas Claburn Tue 29 Apr 2025
Instead of depressing wages or taking jobs, generative AI chatbots like ChatGPT, Claude, and Gemini have had almost no significant wage or labor impact so far – a finding that calls into question the huge capital expenditures required to create and run AI models.
In a working paper released earlier this month, [Large Language Models, Small Labor Market Effects, <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933>] economists Anders Humlum and Emilie Vestergaard looked at the labor market impact of AI chatbots on 11 occupations, covering 25,000 workers and 7,000 workplaces in Denmark in 2023 and 2024.
Many of these occupations have been described as being vulnerable to AI: accountants, customer support specialists, financial advisors, HR professionals, IT support specialists, journalists, legal professionals, marketing professionals, office clerks, software developers, and teachers.
Yet after Humlum, assistant professor of economics at the Booth School of Business, University of Chicago, and Vestergaard, a PhD student at the University of Copenhagen, analyzed the data, they found the labor and wage impact of chatbots to be minimal.
The report should concern the tech industry, which has hyped AI's economic potential while plowing billions into infrastructure meant to support it. Early this year, OpenAI admitted that it loses money per query even on its most expensive enterprise SKU, while companies like Microsoft and Amazon are starting to pull back on their AI infrastructure spending in light of low business adoption past a few pilots.
The problem isn't that workers are avoiding generative AI chatbots - quite the contrary. It's that their use simply isn't yet translating into actual economic benefits.
The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes.
While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction.
The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves."
In other words, AI is creating new work that cancels out some potential time savings from using AI in the first place.
"One very stark example that it's close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum.
He also observed that a lot of workers now say they're spending time reviewing the quality of AI output or writing prompts.
Humlum argues that this can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools historically have tended to generate more demand for workers in other tasks.
"These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.
But overall, the time savings from using AI were less than expected. According to the study, "users report average time savings of just 2.8 percent of work hours" from using AI tools. That's a bit more than one hour per 40-hour work week.
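As a quick sanity check on the arithmetic (a trivial sketch, not from the paper itself):

```python
# Back-of-the-envelope check of the reported figure:
# 2.8 percent of a standard 40-hour work week.
weekly_hours = 40
savings_fraction = 0.028  # "average time savings of just 2.8 percent"

saved_hours = weekly_hours * savings_fraction
print(f"{saved_hours:.2f} hours saved per week")  # 1.12 hours
```

which is indeed "a bit more than one hour" per week.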
The authors note that this finding differs from other randomized controlled trials that have found productivity benefits on the order of 15 percent. And they explain this discrepancy by saying that other studies have focused on occupations with high AI productivity potential and that real-world workers don't operate under the same conditions.
"So I think there are two key reasons why the real economic gains are lower than [the cited studies]," said Humlum, noting that his study relies on actual tax data.
"First, most tasks do not fall into that category where ChatGPT can just automate everything. And then second, we're in this middle phase where employers are still waking up to the new reality, and we're trying to figure out how to best really realize the potential in these tools. And just at this stage, it's just not been that much of a game changer."
Where there are productivity gains to be had, Humlum and Vestergaard estimate that only a small portion of that benefit – between 3 and 7 percent – gets passed through to workers in the form of higher earnings.
Humlum said while there are gains and time savings to be had, "there's definitely a question of who they really accrue to. And some of it could be the firms – we cannot directly look at firm profitability. Some of it could also just be that you save some time on existing tasks, but you're not really able to expand your output and therefore earn more.
"So it's like it saves you time writing emails. But if you cannot really take on more work or do something else that is really valuable, then that will put a damper on how much we should actually expect those time savings to affect your earning ability, your total hours, your wages."
Humlum said the impact of using AI chatbots, in the form of productivity, time savings, and work quality, can be improved through company commitment to internal education and evangelism. He pointed in particular to how firm initiatives can reduce the tool-usage gender gap – fewer women use these tools than men.
But doing so at this point doesn't show much promise of payoff.
"In terms of economic outcomes, when we're looking at hard metrics – in the administrative labor market data on earnings, wages – these tools have really not made a difference so far," said Humlum. "So I think that that puts in some sense an upper bound on what return we should expect from these tools, at least in the short run.
"My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
<https://www.theregister.com/2025/04/29/generative_ai_no_effect_jobs_wages/>
------------------------------
Message: 2
Date: Mon, 19 May 2025 10:08:47 +0200
From: maurizio lana <maurizio.lana@uniupo.it>
To: nexa@server-nexa.polito.it
Subject: Re: [nexa] [Junk released by Allowed List] Qual è il senso...?
Message-ID: <0f1fdf21-1d21-4858-8530-46af7e70050f@uniupo.it>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Just today, on the cultureIA blog (https://cultureia.hypotheses.org/), I saw the announcement of this seminar, held on 28 January of this year: /Artificial and Post-Artificial Writing/ (https://cultureia.hypotheses.org/3159), available on YouTube (https://www.youtube.com/watch?v=wQKBxgiRIdA), in which the speaker opens with these thoughts: "How do we read a text when we can no longer be sure that it was not written by an AI? With AI permeating writing tools and producing vast amounts of text, it may then become impossible to distinguish artificial from natural text, but also impossible to bear this constant uncertainty. I speculate that we may thus eventually reach a new post-artificial situation in which the standard expectation of text is replaced by an agnostic stance in regard to its origins. Literary text which emphasizes a stronger notion of authorship may resist this transition longer through deliberate linguistic experimentation or by emphasizing a human maker. By extrapolating from the current technological situation, the talk attempts to think through some of the possible consequences of large language models and related technologies for interpretive situations and hermeneutic strategies at a time when we can no longer assume that any reasonable, complex, and coherent text was written by a human." The transcription I attach was produced with (mac)whisper. Maurizio
On 18/05/25 22:40, maurizio lana wrote:
hi Daniela,
the question at stake - distinguishing text produced by a chatbot from text produced by a human - has a second one in the background: identifying the characteristics that distinguish one human's writing from another's (usually called "style"). The second is at the heart of authorship attribution studies using quantitative methods, which have a long history (the fullest and soundest account is Grieve, Jack William. «Quantitative authorship attribution: A history and an evaluation of techniques». Thesis, Department of Linguistics - Simon Fraser University, 2005. http://summit.sfu.ca/item/8840) and some significant successes. Anyone who has worked on authorship attribution studies is at first inclined to think that if it is possible to identify the characteristics that distinguish one human's writing from another's, it must also be possible to identify those that distinguish a human's writing from a chatbot's.
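A toy sketch of the quantitative approach in question (the word list, texts, and similarity measure here are hypothetical, chosen for illustration; the studies Grieve surveys use far richer feature sets): represent each text by the relative frequencies of common function words, then attribute an unknown text to the candidate author whose profile is most similar.

```python
from collections import Counter
import math

# A small set of English function words; real stylometric studies use
# hundreds of features (this list is illustrative only).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "as"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def attribute(unknown, candidates):
    """Return the candidate author whose sample is stylistically closest."""
    p = profile(unknown)
    return max(candidates, key=lambda author: cosine(p, profile(candidates[author])))
```

The same machinery, applied to human-vs-chatbot classification, inspects exactly the kind of surface features the next paragraph argues a forger can easily manipulate.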
But in fact your comment shows an impossibility in principle, because in a chatbot there is no line of thought, only a mosaic of pieces of others. So the feeble attempts of detection software are confined to identifying entirely exterior/extrinsic features, which a hypothetical textual forger can quite easily alter or conceal.
Moreover, with large quantities of non-texts already in circulation, one naturally wonders whether there is a way to sort and identify them without having to spend time reading them - time one would frankly prefer to spend better.
So it seems to me that, at least as an understandable even if unrealizable desire, the question remains alive: on the one hand we must distinguish, on the other we would like to do something better than read non-texts just to be able to say that that is what they are. Maurizio
On 18/05/25 00:24, Daniela Tafani wrote:
The sense, it seems to me, is that of any swindling of the vulnerable ("circonvenzione di incapace"). Scholars of every sort collaborate in it, paid and unpaid.
For now, the FTC is pushing back: <https://www.ftc.gov/news-events/news/press-releases/2025/04/ftc-order-requir...
To automatically distinguish the product of an extruder of probable text strings from a text written by a human being, the two texts would have to differ in something that a piece of software could detect.
The main difference between the two texts is that one was thought by its writer and the other was not, being a blurred image of other texts (those, yes, thought by someone). I therefore suppose that thought is needed to recognize a text extruded by software that produces language without thinking, as opposed to a text produced by a thinking being. No thinking software exists today, so "AI detectors" do not exist.
What do you think?
To have an automatically traceable difference, you would have to mark the synthetic text in some way - though, I imagine, it could still simply be copied as plain text.
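One published line of work on such marking is statistical watermarking, in which the generator is biased toward a pseudo-random "green list" of next words and a detector merely counts how often the list was hit. The sketch below is a deliberately simplified toy of that idea (the hashing rule is invented for this example and is not any vendor's actual scheme):

```python
import hashlib

def is_green(prev_word, word):
    """Toy green-list rule: a word pair is 'green' if its hash byte is even."""
    h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text):
    """Fraction of consecutive word pairs that land on the green list.
    Unmarked text should hover near 0.5; a generator that prefers green
    words leaves a fraction well above it."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, w) for p, w in pairs) / len(pairs)
```

Note that the mark is purely statistical, so plain-text copying does preserve it, but paraphrasing or light rewording destroys it, which is consistent with the doubt expressed above about how durable any such marking could be.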
Thanks. Best regards, Daniela
________________________________________
From: nexa <nexa-bounces@server-nexa.polito.it> on behalf of Enrico Nardelli <nardelli@mat.uniroma2.it>
Sent: Saturday 17 May 2025 19:53
To: nexa@server-nexa.polito.it
Subject: [Junk released by Allowed List] [nexa] Qual è il senso...?
... of building tools to verify whether a text was generated by Artificial Intelligence, given that (besides failing spectacularly on a text of mine written entirely by me, which was attributed 71% to AI - yet the last time I checked I still found myself made of the usual organic matter... 😂) the same company then offers a tool to "humanize" the text?
I tried GPTZero, just to name names, but I suppose other similar startups offer similar products.
It smells to me like a gigantic game of three-card monte...
------------------------------------------------------------------------
many of us believe the EU remains the most extraordinary, ambitious, liberal political alliance in recorded history. where it needs reform, where it needs to evolve, we should be there to help turn that heavy wheel
Ian McEwan, The Guardian, 2/6/2017
------------------------------------------------------------------------
Maurizio Lana
Università del Piemonte Orientale
Dipartimento di Studi Umanistici
Piazza Roma 36 - 13100 Vercelli
------------------------------------------------------------------------
the embarrassing sense of guilt for our own life, which goes on with its moments of joy and leisure while the lives of others have none
S. Vizio
participants (1):
- Dario Guarascio