Things we learned about LLMs in 2024 [Being critical is a virtue]
A useful summary by a practitioner, with many interesting points and food for thought, and no ideological positions. I quote in full below only the "LLMs need better criticism" part, not to censor the rest, which is certainly worth reading, but because it seems to me to speak to what we are trying to do here, and in particular to be relevant to the ongoing debate.

<https://simonwillison.net/2024/Dec/31/llms-in-2024/>

Things we learned about LLMs in 2024

A /lot/ has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.

[...]

LLMs need better criticism

A lot of people /absolutely hate/ this stuff. In some of the spaces I hang out (Mastodon <https://fedi.simonwillison.net/@simon>, Bluesky <https://bsky.app/profile/simonwillison.net>, Lobste.rs <https://lobste.rs/>, even Hacker News <https://news.ycombinator.com/> on occasion) even suggesting that “LLMs are useful” can be enough to kick off a huge fight.

I get it. There are plenty of reasons to dislike this technology—the environmental impact, the (lack of) ethics of the training data, the lack of reliability, the negative applications, the potential impact on people’s jobs.

LLMs absolutely warrant criticism. We need to be talking through these problems, finding ways to mitigate them and helping people learn how to use these tools responsibly in ways where the positive applications outweigh the negative.

I /like/ people who are skeptical of this stuff. The hype has been deafening for more than two years now, and there are enormous quantities of snake oil and misinformation out there. A lot of /very bad/ decisions are being made based on that hype. Being critical is a virtue.

If we want people with decision-making authority to make /good decisions/ about how to apply these tools we first need to acknowledge that there ARE good applications, and then help explain how to put those into practice while avoiding the many unintuitive traps. (If you still don’t think there are any good applications at all I’m not sure why you made it to this point in the article!)

I think telling people that this whole field is environmentally catastrophic plagiarism machines that constantly make things up is doing those people a disservice, no matter how much truth that represents. There is genuine value to be had here, but getting to that value is unintuitive and needs guidance.

Those of us who understand this stuff have a duty to help everyone else figure it out.
On Tue, Jan 07, 2025 at 10:56:53AM +0100, Alberto Cammozzo via nexa wrote:
You beat me to it: I finished reading it this morning and was about to post it to the list :-)

It is worth the long read, especially for those who do not have time to follow in detail the technical evolution (for better and for worse) of a technology landscape that is still changing fast. Besides the "better criticism" part, which you already highlighted, I also found the section on energy consumption very interesting (once again: for better and for worse).

Cheers

--
Stefano Zacchiroli . zack@upsilon.cc . https://upsilon.cc/zack
Full professor of Computer Science, Télécom Paris, Polytechnic Institute of Paris
Co-founder & CSO, Software Heritage
Mastodon: https://mastodon.xyz/@zacchiro
On Tue, 2025-01-07 at 10:56 +0100, Alberto Cammozzo via nexa wrote:
[...] Being critical is a virtue. If we want people with decision-making authority to make good decisions about how to apply these tools we first need to acknowledge that there ARE good applications, and then help explain how to put those into practice while avoiding the many unintuitive traps.
Doing translations, summaries and other language manipulation. Having these tools do anything else, and marketing that as the solution to problem X, is, and I am not in the habit of beating around the bush, simply wrong, not to say dishonest, not to say criminal.
participants (3)
- Alberto Cammozzo
- Marco A. Calamari
- Stefano Zacchiroli