It is truly fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discussions, none excepted, is plain: AI is treated exactly as if it were "nature". AI, in short, just happens... and, as it happens, workers "unfortunately" end up on the street. Everything is offloaded onto the workers, who can only endure it or, if they are young, must rack their brains to work out (with a crystal ball?) which professions will be, X years from now, least at risk of technological unemployment... (everyone a plumber, a bike courier, a bricklayer?). All of it ideologically justified by those who rush in to explain that this is as it should be, that it is "efficient", that in the long run it is best for everyone, ignoring that what happens in the long run is determined by power relations, not by a good fairy.

In other words, in the Europe and US of the 21st century (that of Thiel, Musk, Merz, BlackRock, etc.), a right is postulated to deploy any technological innovation whatsoever, entirely regardless of its social consequences. In other words, only one right exists, and an absolute one at that: the right to make profits (here and now). Nothing else counts. Families ruined? Entire cities or social classes left without a livelihood overnight? Those people's problem; or, at best, a problem to be left to the very welfare state that the same Thiel, Musk, Merz, etc. would like to dismantle, which by strict logic leaves as the only future option the physical elimination of the many human beings who are no longer useful to their lordships (or, in the more "humane" version: let the useless subhumans live on with some form of basic income, shut in their little rooms with drugs and a virtual reality headset, so they cause no trouble).

It is urgent to develop a radically alternative political proposal: a vision of the world in which AI (and any technology in general) serves to improve the life of the community, rather than being almost exclusively a tool in the hands of a few for accumulating still more capital and power at the expense of a great many defenseless people. (And that is without even mentioning AI in the service of systematic violence...)

Juan Carlos

On 12/03/26 14:29, Daniela Tafani wrote:
Amazon is determined to use AI for everything – even when it slows down work

Corporate employees said Amazon’s race to roll out AI is leading to surveillance, slop and ‘more work for everyone’.
Varsha Bansal Wed 11 Mar 2026
When Dina, a software developer based in New York, joined Amazon two years ago, her job was to write code. Now, it’s mostly fixing what artificial intelligence breaks.
The internal AI tool she’s expected to use, called Kiro, frequently hallucinates and generates flawed code, she says. Then she has to dig through and correct the sloppy code it creates, or just revert all changes and start again. She says it feels like “trying to AI my way out of a problem that AI caused”.
“I and many of my colleagues don’t feel that it actually makes us that much faster,” Dina said. “But from management, we are certainly getting messaging that we have to go faster, this will make us go faster, and that speed is the number one priority.”
Just days after speaking to the Guardian, Dina was laid off.
Lisa, a supply chain engineer who has worked at Amazon for over a decade, says that AI tools at work have been helpful to her only in about one in every three attempts. And even then, she often finds issues and has to consult with colleagues to verify and correct their results, which takes up more time than if she’d done the task without AI.
She doesn’t take issue with the AI tools themselves, but rather the company’s logic in pushing all employees to use them daily. “You don’t look at the problem and go, ‘How do I use this hammer I have?’” she said. “You look at it and go, ‘Is this a problem for a hammer or something else?’”
More than half a dozen current and former Amazon corporate employees, in roles ranging from software engineer to user experience researcher to data analyst, told the Guardian that Amazon is pressing employees to integrate AI across all aspects of their work, even though these workers say this push is hurting productivity. They say Amazon is rolling out AI use in a haphazard way while also tracking their AI use, and they’re worried the company is essentially using them to train their eventual bot replacements. All of this, they said, is demoralizing. The Guardian granted these workers anonymity because of their fear of professional repercussions.
“We have hundreds of thousands of corporate employees in a wide range of roles across many different businesses, each of which is using AI in different ways to learn about what works best for their use cases,” Montana MacLachlan, an Amazon spokesperson, said. “While different employees may have different experiences, what we hear from the vast majority of our teams is that they’re getting a lot of value out of the AI tools that they use day-to-day.”
This pressure comes as Amazon has laid off 30,000 workers in the last four months – nearly 10% of its roughly 350,000 corporate workforce. Its cuts are part of a wave of recent AI-connected tech layoffs, including at Block, Pinterest and Autodesk. Exactly how much these companies will be able to rely on AI to replace headcount is unclear, and each company has given an array of sometimes contradictory reasons for reductions. Jack Dorsey, the Block CEO, said outright that AI was behind his 40% staffing cuts, while Pinterest and Autodesk said they were redirecting investments to AI. Amazon has waffled in explaining how AI factors into its layoff decisions, saying both that it would lead to reductions and that recent cuts weren’t AI-driven. The company said in February it would spend some $200bn this year on AI infrastructure and announced a $50bn investment in OpenAI.
In a moment of rising anxiety about AI and work, the decisions Amazon makes around automation – and even how it talks about these shifts – will be consequential for not just its massive workforce, but for people in industries around the world. Amazon is the second-largest employer in the US and has long influenced workplace practices across both white collar and blue collar industries.
“There’s a lot of talk among corporate employees about how some of these practices – about performance, surveillance and monitoring – are somewhat imported from the warehouse and the drivers space, and that it is Amazon expanding this model of labor to white collar workers,” Jack, a software engineer at Amazon for more than a decade, said. “It does feel like we’re at the vanguard of a new stage in employer relations with the advent of AI.”
While Amazon has a reputation for being a tough place to work, the impact of its AI campaign has pressurized its workplace, workers said. “It’s worse now,” said Denny, a software engineer, who works in the retail space at the company. “If we don’t pivot ... then we risk becoming obsolete and being let go in the next layoff.”
Whenever there’s a task at hand, the biggest question managers ask is whether it can be done faster with AI tools, according to Denny. This is leading employees to use AI tools just for the sake of it. Recently, someone on Denny’s team shared that an internal AI agent had saved him about a week of developer effort on a feature. But when Denny looked at the actual code review, he found dozens of comments from colleagues pointing out basic issues. The AI-generated code was full of slop.
“In the end, my guess is that the developer cycle is not going to change, and [could] even be potentially longer,” said Denny. “This pressure to use [AI] has resulted in worse quality code, but also just more work for everyone.”
Denny was one of several workers who told the Guardian they’re pressured to use an overwhelming array of AI tools, many of which were hastily developed in internal hackathons, and that they then have to spend time answering surveys about their experience with the tools.
“I would get shown these random tools by my manager who’d be like: ‘Why don’t you try using this thing?’, and it was just the result of a hackathon,” said Denny. He says the tools are “half-baked” and unhelpful, and in fact add to his workload because he has to vet them.
Amazon typically organizes quarterly hackathons to encourage engineers to develop new projects. Sometime last year, Denny recalls, the company switched primarily to generative AI hackathons, in which the majority of projects ended up being developer-productivity tools.
“We don’t mandate teams use AI tools,” said Amazon’s MacLachlan. “However, we believe these tools can help employees work more efficiently and automate time-consuming, undifferentiated tasks.”
There have also been public slip-ups that seem connected to Amazon’s embrace of AI. According to a February FT report, Amazon recently experienced at least two outages because of issues with the company’s internal AI tools, including a 13-hour interruption to a customer-facing system in December after some engineers allowed its AI tool “to make certain changes”. Amazon, however, said that an employee, rather than AI, caused the service interruption. The FT reported on Tuesday that Amazon would convene engineers to explore “a spate of outages, including incidents tied to the use of AI coding tools”.
“I think if you continue to push people to use AI tools in every single aspect, you’re going to get more errors like that,” Sarah, an Amazon software engineer, said.
Sarah said that AI can be useful, but its potential is best realized when engineers decide how to use it. But at Amazon, even when AI is not suited for a task, she’s now expected to train it. “We have to write out detailed procedures so that the AI can understand it and give better output,” said Sarah. “Part of my new job role, it feels like, is being asked to train the AI to essentially replace you.” She’s early in her career and worries that offloading her work to AI is stunting her learning curve.
Forcing employees to adopt tools, according to Ifeoma Ajunwa, founding director of the AI and Future of Work Program at Emory University and the author of The Quantified Worker, usually backfires. “Generally, employees are in a better position [than management] to determine what tools can aid productivity,” she said.
Meanwhile, Amazon workers are often having to seek out training for AI best practices on their own.
Will, a user experience researcher, said Amazon offers employees plenty of AI training videos on their learning portals, though most of them are optional. When he’s attended training sessions, “the focus is always, ‘here’s how to build something as quickly as possible’”. He said trainers – who are typically peer employees who are also AI power users – advise reviewing each step carefully before letting AI start building. At the same time, Will said: “I have been in several trainings where the instructor says you can just ask the AI to check its own work.” However, you can’t fully rely on AI to detect its own mistakes; that’s something human judgment is better suited for.
“One of the biggest predictors of AI adoption and whether employees feel that AI increases their productivity is whether management encourages it and provides training,” Alex Imas, professor of behavioural science and economics at Chicago Booth, said.
MacLachlan said Amazon provides different training and resources for people across the company, including structured options. “Employees are encouraged to use the tools themselves as a learning mechanism, adopting a learn-as-you-work approach that is proving to be one of the most practical and effective methods of AI adoption across the company,” she said.
An AI-fueled shift to surveillance

Along with the productivity challenges that have come with Amazon’s AI push, workers said it’s also making them feel surveilled.
For years, each morning when Amazon employees logged in to work, an internal system called Amazon Connections would greet them with a message and ask for feedback on topics like how their teams were functioning, or how satisfied they felt with their work. Over the last year, these questions have increasingly centered less on human factors and more on AI.
Maria, a former product manager who was laid off from Amazon in January, said questions asking her about her career or team shifted to focus more often on AI: “‘Are you using AI in your daily work?’, ‘How often are you using it?’, ‘Do you think that you’re a power user?’, or ‘Is AI a priority in your organization?’”
Then there are more obvious indicators of surveillance. Workers said managers at Amazon have a dashboard where they track their team members’ AI use, including if they’re using certain tools and how often they do so. (The Information first reported this in February.)
Jack, the software developer who’s worked at Amazon for more than a decade, said the company also launched a different dashboard, which the Guardian has viewed, so teams could see their generative AI adoption, engagement and depth of usage. “Every team treats it differently,” he said, with some managers using it with a goal of getting at least 80% of their team using AI tools weekly.
Sarah said her team’s principal engineer told her and his other reports he checks this dashboard daily. “He’s really been pushing our AI usage,” she said.
“Of course we want to understand what tools our teams are using and whether those tools are working well for them or could be improved,” said MacLachlan.
The inevitable result of AI tools getting deployed at scale is surveillance, according to Nick Srnicek, author of Platform Capitalism and a senior lecturer in digital economy at King’s College London. “The rushed deployment of AI means an uncritical expansion of surveillance since these tools increasingly require detailed knowledge of personal workflows and data,” he said. “To make them more capable means giving management greater insight and control over workers’ everyday activities.”
Workers also said they suspect their career advancement is increasingly dependent on their enthusiastic embrace of AI.
“We have promotion documents which have a template with questions like, ‘What has this person done?’, ‘What impact did it have?’ – and now it also has a question asking, ‘How [did] they leverage AI?’,” said Lisa. “I think they want to only keep the people who support this investment [in AI] and are going to try and filter out people who do not support it or have concerns about it.” The Wall Street Journal reported in late February that at Amazon, “managers do consider who is all-in on AI when it comes to promotions”.
“While we expect employees to use resources – including AI – to make work more engaging and improve customers’ lives, we don’t instruct managers to consider AI utilization as part of our evaluation process,” said MacLachlan. “Instead, we focus on AI adoption and sharing best practices to celebrate innovation and operational efficiency gains across the company.”
At the same time, Andy Jassy, Amazon CEO, hasn’t been shy about his AI expectations for his employees. In a company-wide email last June, he predicted that AI-driven productivity gains would reduce the company’s corporate workforce, and urged workers to embrace AI. “Educate yourself, attend workshops and take trainings, use and experiment with AI whenever you can, participate in your team’s brainstorms to figure out how to invent for our customers more quickly and expansively, and how to get more done with scrappier teams,” he wrote.
The unspoken math

That same company-wide email prompted heavy internal pushback at Amazon last summer, with employees slamming Jassy’s leadership and speaking of the demoralizing impact of the company’s AI push, according to Business Insider. Months later, over 1,000 workers signed a petition that raised concerns about the company’s “aggressive rollout” of AI tools.
As Amazon has laid off thousands of workers, it’s shared growing revenue numbers each quarter. Though Jassy has repeatedly said that these layoffs are neither “financially-driven” nor AI-driven, for Maria, all of this adds up.
“If you say you automated away two hours of someone’s job, you need to convert that into savings on that job title,” she said, explaining the company’s logic behind cutting jobs. “That’s the unspoken math of what they’re doing.”
Jack keeps thinking about comments Jassy made during a companywide all-hands meeting last spring. According to a Business Insider report about this meeting, Jassy responded to a question about running Amazon as “the world’s largest startup”, and said they want to be “scrappy” to “do a lot more things”. He also warned that their competitors are the “most technically able, most hungry” companies, including startups “working seven days a week, 15 hours a day”.
“All of those things put together was an implicit threat that the people remaining at the company are expected to work longer and harder,” said Jack. It “really struck home to me that if [Amazon] can’t amass profits with endless growth, then it can get a little bit more by squeezing it out of the people working for it”.
<https://www.theguardian.com/technology/ng-interactive/2026/mar/11/amazon-art...>
Dear JC, to those left puzzling, add those who teach: what are they supposed to teach today if nobody has the faintest idea (or choice) about tomorrow? As for the political proposal, that is the second step. Who are the many, scattered parties whose interests run counter to the continuation of the logic of AI-enhanced profit? And what actions could bring them together into a coalition? The political subject comes downstream... I think.

Prof. Avv. Marco Ricolfi
From: nexa <nexa-bounces@server-nexa.polito.it> on behalf of J.C. DE MARTIN via nexa
Sent: Sunday, 15 March 2026, 08:55
To: Nexa <nexa@server-nexa.polito.it>
Cc: J.C. DE MARTIN <juancarlos.demartin@polito.it>
Subject: Re: [nexa] Amazon is determined to use AI for everything – even when it slows down work

It is truly fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discourses, none excluded, is plain: AI is treated exactly as if it were “nature”. “AI”, in short, just happens, “AI” occurs... and, as it occurs, “unfortunately” workers end up on the street. Everything is offloaded onto the workers, who can only endure it or, if they are young, must rack their brains to figure out (with a crystal ball?) which professions will be, X years from now, less at risk of technological unemployment... (shall we all become plumbers, bike couriers, bricklayers?). All of it ideologically justified by those who rush in to explain that this is how it should be, that it is “efficient”, that in the long run it is best for everyone, ignoring that what happens in the long run is determined by power relations, not by a good fairy. In other words, in the Europe and US of the 21st century (that of Thiel, Musk, Merz, BlackRock, etc.), the right is postulated to deploy any technological innovation whatsoever, entirely regardless of its social consequences. In other words, only one right exists, and an absolute one at that: the right to make profits (here and now). Nothing else counts.
Families go bankrupt? Entire cities or social classes find themselves without a livelihood overnight? That is those people’s problem or, at best, a problem to be left to the very welfare state that the same Thiel, Musk, Merz, etc. would like to dismantle, leaving, if the logic is followed through, the physical elimination of the many human beings no longer useful to their lordships as the only future option (or, in the more “humane” version: let the useless subhumans live on with some form of basic income, shut in their little rooms with drugs and a virtual-reality headset, so they cause no trouble). It is urgent to develop a radically alternative political proposal: a vision of the world in which AI (and any technology in general) must serve to improve the life of the community, rather than being almost exclusively a tool in the hands of a few for accumulating ever more capital and power at the expense of a great many defenceless people. (And that is without even mentioning AI in the service of systematic violence...)

Juan Carlos

On 12/03/26 14:29, Daniela Tafani wrote:
Jack Dorsey, the Block CEO, said outright that AI was behind his 40% staffing cuts, while Pinterest and Autodesk said they were redirecting investments to AI. Amazon has waffled in explaining how AI factors into its layoff decisions, saying both that it would lead to reductions and that recent cuts weren’t AI-driven. The company said in February it would spend some $200bn this year on AI infrastructure and announced a $50bn investment in OpenAI.
In a moment of rising anxiety about AI and work, the decisions Amazon makes around automation – and even how it talks about these shifts – will be consequential for not just its massive workforce, but for people in industries around the world. Amazon is the second-largest employer in the US and has long influenced workplace practices across both white collar and blue collar industries.
“There’s a lot of talk among corporate employees about how some of these practices – about performance, surveillance and monitoring – are somewhat imported from the warehouse and the drivers space, and that it is Amazon expanding this model of labor to white collar workers,” Jack, a software engineer at Amazon for more than a decade, said. “It does feel like we’re at the vanguard of a new stage in employer relations with the advent of AI.”
While Amazon has a reputation for being a tough place to work, the impact of its AI campaign has pressurized its workplace, workers said. “It’s worse now,” said Denny, a software engineer, who works in the retail space at the company. “If we don’t pivot ... then we risk becoming obsolete and being let go in the next layoff.”
Whenever there’s a task at hand, the biggest question managers ask is whether it can be done faster with AI tools, according to Denny. This is leading employees to use AI tools just for the sake of it. Recently, someone on Denny’s team shared that an internal AI agent had saved him about a week of developer effort on a feature. But when Denny looked at the actual code review, he found dozens of comments from colleagues pointing out basic issues. The AI-generated code was full of slop.
“In the end, my guess is that the developer cycle is not going to change, and [could] even be potentially longer,” said Denny. “This pressure to use [AI] has resulted in worse quality code, but also just more work for everyone.”
Denny was one of several workers who told the Guardian they’re pressured to use an overwhelming array of AI tools, many of which were hastily developed in internal hackathons, and that they then have to spend time answering surveys about their experience with those tools.
“I would get shown these random tools by my manager who’d be like: ‘Why don’t you try using this thing?’, and it was just the result of a hackathon,” said Denny. He says the tools are “half-baked” and unhelpful, and in fact add to his workload because he has to vet them.
Amazon typically organizes quarterly hackathons to encourage engineers to develop new projects. Sometime last year, Denny recalls, the company switched primarily to generative AI hackathons, in which the majority of projects ended up being developer-productivity tools.
“We don’t mandate teams use AI tools,” said Amazon’s MacLachlan. “However, we believe these tools can help employees work more efficiently and automate time-consuming, undifferentiated tasks.”
There have also been public slip-ups that seem connected to Amazon’s embrace of AI. According to a February FT report, Amazon recently experienced at least two outages because of issues with the company’s internal AI tools, including a 13-hour interruption to a customer-facing system in December after some engineers allowed its AI tool “to make certain changes”. Amazon, however, said that an employee, rather than AI, caused the service interruption. The FT reported on Tuesday that Amazon would convene engineers to explore “a spate of outages, including incidents tied to the use of AI coding tools”.
“I think if you continue to push people to use AI tools in every single aspect, you’re going to get more errors like that,” Sarah, an Amazon software engineer, said.
Sarah said that AI can be useful, but its potential is best realized when engineers decide how to use it. But at Amazon, even when AI is not suited for a task, she’s now expected to train it. “We have to write out detailed procedures so that the AI can understand it and give better output,” said Sarah. “Part of my new job role, it feels like, is being asked to train the AI to essentially replace you.” She’s early in her career and worries that offloading her work to AI is stunting her learning curve.
Forcing employees to adopt tools, according to Ifeoma Ajunwa, founding director of the AI and Future of Work Program at Emory University and the author of The Quantified Worker, usually backfires. “Generally, employees are in a better position [than management] to determine what tools can aid productivity,” she said.
Meanwhile, Amazon workers are often having to seek out training for AI best practices on their own.
Will, a user experience researcher, said Amazon offers employees plenty of AI training videos on its learning portals, though most of them are optional. When he’s attended training sessions, “the focus is always, ‘here’s how to build something as quickly as possible’”. He said trainers – typically peer employees who are also AI power users – advise reviewing each step carefully before letting AI start building. At the same time, Will said: “I have been in several trainings where the instructor says you can just ask the AI to check its own work.” However, AI can’t be fully relied on to catch its own mistakes; that is something human judgment is better suited for.
“One of the biggest predictors of AI adoption and whether employees feel that AI increases their productivity is whether management encourages it and provides training,” Alex Imas, professor of behavioural science and economics at Chicago Booth, said.
MacLachlan said Amazon provides different training and resources for people across the company, including structured options. “Employees are encouraged to use the tools themselves as a learning mechanism, adopting a learn-as-you-work approach that is proving to be one of the most practical and effective methods of AI adoption across the company,” she said.
An AI-fueled shift to surveillance

Along with the productivity challenges that have come with Amazon’s AI push, workers said it’s also making them feel surveilled.
For years, each morning when Amazon employees logged in to work, an internal system called Amazon Connections would greet them with a message and ask for feedback on topics like how their teams were functioning, or how satisfied they felt with their work. Over the last year, these questions have increasingly centered less on human factors and more on AI.
Maria, a former product manager who was laid off from Amazon in January, said questions asking her about her career or team shifted to more often focus on AI: “‘Are you using AI in your daily work?,’ ‘How often are you using it?,’ ‘Do you think that you’re a power user?,’ or ‘Is AI a priority in your organization?’”.
Then there are more obvious indicators of surveillance. Workers said managers at Amazon have a dashboard where they track their team members’ AI use, including if they’re using certain tools and how often they do so. (The Information first reported this in February.)
Jack, the software developer who’s worked at Amazon for more than a decade, said the company also launched a different dashboard, which the Guardian has viewed, so teams could see their generative AI adoption, engagement and depth of usage. “Every team treats it differently,” he said, with some managers using it with a goal of getting at least 80% of their team using AI tools weekly.
Sarah said her team’s principal engineer told her and his other reports he checks this dashboard daily. “He’s really been pushing our AI usage,” she said.
“Of course we want to understand what tools our teams are using and whether those tools are working well for them or could be improved,” said MacLachlan.
The inevitable result of AI tools getting deployed at scale is surveillance, according to Nick Srnicek, author of Platform Capitalism and a senior lecturer in digital economy at King’s College London. “The rushed deployment of AI means an uncritical expansion of surveillance since these tools increasingly require detailed knowledge of personal workflows and data,” he said. “To make them more capable means giving management greater insight and control over workers’ everyday activities.”
Workers also said they suspect their career advancement is increasingly dependent on their enthusiastic embrace of AI.
I well understand your irritation at this naturalization of technology: it is an old ideological strategy. That said, it seems to me that the truly decisive point lies a little further downstream. It is not enough to assert that AI should serve collective wellbeing rather than private accumulation. A great many people, after all, already think so. The problem is not the absence of visions, but our limited capacity to realize even the most feasible ones (for example: building autonomous social platforms). Marx said it in the Theses on Feuerbach: it is not consciousness that changes the world; the world changes when material practices and social relations change. The “right” ideas almost always exist already. What is missing is the political, economic and scientific capacity to make them operative. For this reason, in my view, the interesting question is not so much which radical vision of AI we should formulate (there are already many), but which policies, which research programmes, which social alliances and economic arrangements could actually steer its development in a genuinely social direction. Otherwise we risk a paradox: criticizing the naturalization of technology only to rely, symmetrically, on the performative magic of ideas. As if saying “AI must serve the community” were enough to make it do so, or saying “enough with AI” (Tajani-style) could change the course of events, or declaring “we need a CERN for AI” at some conference made anything concretely happen. Have a good Sunday, G.
Il giorno 15 mar 2026, alle ore 08:55, J.C. DE MARTIN via nexa <nexa@server-nexa.polito.it> ha scritto:
E' davvero affascinante (e infuriante)... rapporto dopo rapporto, articolo dopo articolo, è evidente la premessa ideologica di tutti questi discorsi, nessuno escluso: l'IA viene considerata esattamente come se fosse "natura". L'"IA", insomma, capita, l'"IA" avviene... e, avvenendo, "purtroppo" dei lavoratori si ritrovano per strada. Tutto viene scaricato sui lavoratori, che possono solo subire, oppure, se giovani, devono spremersi le meningi per cercare di capire (con la palla di vetro?) quali professioni saranno - tra X anni - meno a rischio di disoccupazione tecnologica... (tutti idraulici, ciclo-fattorini, muratori?). Il tutto giustificato ideologicamente da chi accorre subito a spiegare che è giusto così, che è "efficiente", che nel lungo termine è la cosa migliore per tutti, ignorando che quello che capiterà nel lungo termine è determinato dai rapporti di forza, non dalla fatina buona.
In altre parole, nell'Europa e USA del 21 secolo (quello dei Thiel, Musk, Merz, Blackrock, ecc.) si postula il diritto di mettere in campo qualsiasi innovazione tecnologica totalmente a prescindere dalle conseguenze sociali. In altre parole, esiste solo un diritto, per di più assoluto: il diritto di fare profitti (qui e subito). Tutto il resto non conta. Famiglie vanno sul lastrico? Intere città o ceti sociali si trovano senza sussistenza dall'oggi al domani? Problemi di quelle persone, o, al limite, problema da lasciare a quello stesso Welfare State che però gli stessi Thiel, Musk, Merz, ecc. vorrebbero smantellare, lasciando a rigor di logica come unica opzione futura l'eliminazione fisica dei tanti esseri umani che ormai non sono più utili per lor signori (oppure, versione più "umana": continuino pure vivere i subumani inutili con qualche forma di reddito di base, chiusi nelle loro stanzette con droghe e un visore di realtà virtuale, così non danno fastidio).
È urgente elaborare una proposta politica radicalmente alternativa. Una visione del mondo nella quale l'IA (e in generale qualsiasi tecnologia) deve servire a migliorare la vita della collettività, non essere quasi esclusivamente uno strumento nelle mani di pochi per accumulare ancora più capitale e potere sulla pelle di moltissime persone indifese? (E non parliamo dell'IA al servizio della violenza sistematica...)
Juan Carlos
On 12/03/26 14:29, Daniela Tafani wrote:
Amazon is determined to use AI for everything – even when it slows down work Corporate employees said Amazon’s race to roll out AI is leading to surveillance, slop and ‘more work for everyone’.
Varsha Bansal Wed 11 Mar 2026
When Dina, a software developer based in New York, joined Amazon two years ago, her job was to write code. Now, it’s mostly fixing what artificial intelligence breaks.
The internal AI tool she’s expected to use, called Kiro, frequently hallucinates and generates flawed code, she says. Then she has to dig through and correct the sloppy code it creates, or just revert all changes and start again. She says it feels like “trying to AI my way out of a problem that AI caused”.
“I and many of my colleagues don’t feel that it actually makes us that much faster,” Dina said. “But from management, we are certainly getting messaging that we have to go faster, this will make us go faster, and that speed is the number one priority.”
Just days after speaking to the Guardian, Dina was laid off.
Lisa, a supply chain engineer who has worked at Amazon for over a decade, says that AI tools at work have been helpful to her only in about one in every three attempts. And even then, she often finds issues and has to consult with colleagues to verify and correct their results, which takes up more time than if she’s done the task without AI.
She doesn’t take issue with the AI tools themselves, but rather the company’s logic in pushing all employees to use them daily. “You don’t look at the problem and go, ‘How do I use this hammer I have?’ she said. “You look at it and go, ‘Is this a problem for a hammer or something else?’”
Group of figures inside a glowing digital space, facing a large window that shows a landscape with trees and sky ‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI Read more More than a half a dozen current and former Amazon corporate employees, in roles ranging from software engineer to user experience researcher to data analyst, told the Guardian that Amazon is pressing employees to integrate AI across all aspects of their work, even though these workers say this push is hurting productivity. They say Amazon is rolling out AI use in a haphazard way while also tracking their AI use, and they’re worried the company is essentially using them to train their eventual bot replacements. All of this, they said, is demoralizing. The Guardian granted these workers anonymity because of their fear of professional repercussions.
“We have hundreds of thousands of corporate employees in a wide range of roles across many different businesses, each of which is using AI in different ways to learn about what works best for their use cases,” Montana MacLachlan, an Amazon spokesperson, said. “While different employees may have different experiences, what we hear from the vast majority of our teams is that they’re getting a lot of value out of the AI tools that they use day-to-day.”
This pressure comes as Amazon has laid off 30,000 workers in the last four months – nearly 10% of its roughly 350,000 corporate workforce. Its cuts are part of a wave of recent AI-connected tech layoffs, including at Block, Pinterest and Autodesk. Exactly how much these companies will be able to rely on AI to replace headcount is unclear, and each company has given an array of sometimes contradictory reasons for reductions. Jack Dorsey, the Block CEO, said outright that AI was behind his 40% staffing cuts, while Pinterest and Autodesk said they were redirecting investments to AI. Amazon has waffled in explaining how AI factors into its layoff decisions, saying both that it would lead to reductions, but that recent cuts weren’t AI-driven. The company said in February it would spend some $200bn this year on AI infrastructure and announced a $50bn investment in OpenAI.
In a moment of rising anxiety about AI and work, the decisions Amazon makes around automation – and even how it talks about these shifts – will be consequential for not just its massive workforce, but for people in industries around the world. Amazon is the second-largest employer in the US and has long influenced workplace practices across both white collar and blue collar industries.
“There’s a lot of talk among corporate employees about how some of these practices – about performance, surveillance and monitoring – are somewhat imported from the warehouse and the drivers space, and that it is Amazon expanding this model of labor to white collar workers,” Jack, a software engineer at Amazon for more than a decade, said. “It does feel like we’re at the vanguard of a new stage in employer relations with the advent of AI.”
While Amazon has a reputation for being a tough place to work, the impact of its AI campaign has pressurized its workplace, workers said. “It’s worse now,” said Denny, a software engineer, who works in the retail space at the company. “If we don’t pivot ... then we risk becoming obsolete and being let go in the next layoff.”
Whenever there’s a task at hand, the biggest question managers ask is whether it can be done faster with AI tools, according to Denny. This is leading employees to use AI tools just for the sake of it. Recently, someone in Denny’s team shared that an internal AI agent had saved him about a week of developer effort on a feature. But when Denny looked at the actual code review, he found dozens of comments from colleagues pointing out basic issues. The AI generated code was full of slop.
“In the end, my guess is that the developer cycle is not going to change, and [could] even be potentially longer,” said Denny. “This pressure to use [AI] has resulted in worse quality code, but also just more work for everyone.”
Denny was one of several workers who told the Guardian they’re pressured to use an overwhelming array of AI tools, many of which were hastily developed in internal hackathons and then have to spend time answering surveys about their experience with the tools.
“I would get shown these random tools by my manager who’d be like: ‘Why don’t you try using this thing?’, and it was just the result of a hackathon,” said Denny. He says the tools are “half-baked” and unhelpful, and in fact add to his workload because he has to vet them.
Amazon typically organizes quarterly hackathons to encourage engineers to develop new projects. Sometime last year, Denny recalls, the company primarily switched to generative AI hackathons, during which the majority of projects ended up being developer productivity focused tools.
“We don’t mandate teams use AI tools,” said Amazon’s MacLachlan. “However, we believe these tools can help employees work more efficiently and automate time-consuming, undifferentiated tasks.”
There have also been public slip-ups that seem connected to Amazon’s embrace of AI. According to a February FT report, Amazon recently experienced at least two outages because of issues with the company’s internal AI tools, including a 13-hour interruption to a customer-facing system in December after some engineers allowed its AI tool “to make certain changes”. Amazon, however, said that an employee, rather than AI, caused the service interruption. The FT reported on Tuesday that Amazon would convene engineers to explore “a spate of outages, including incidents tied to the use of AI coding tools”.
“I think if you continue to push people to use AI tools in every single aspect, you’re going to get more errors like that,” Sarah, an Amazon software engineer, said.
Sarah said that AI can be useful, but its potential is best realized when engineers decide how to use it. But at Amazon, even when AI is not suited for a task, she’s now expected to train it. “We have to write out detailed procedures so that the AI can understand it and give better output,” said Sarah. “Part of my new job role, it feels like, is being asked to train the AI to essentially replace you.” She’s early in her career and worries that offloading her work to AI is stunting her learning curve.
Forcing employees to adopt tools, according to Ifeoma Ajunwa, founding director of the AI and Future of Work Program at Emory University and the author of The Quantified Worker, usually backfires. “Generally, employees are in a better position [than management] to determine what tools can aid productivity,” she said.
Meanwhile, Amazon workers are often having to seek out training for AI best practices on their own.
Will, a user experience researcher, said Amazon offers employees plenty of AI training videos on its learning portals, though most of them are optional. When he’s attended training sessions, “the focus is always, ‘here’s how to build something as quickly as possible’”. He said trainers – who are typically peer employees who are also AI power users – advise reviewing each step carefully before letting AI start building. At the same time, Will said: “I have been in several trainings where the instructor says you can just ask the AI to check its own work.” However, AI cannot be fully relied on to detect its own mistakes; that is something human judgment is better suited for.
“One of the biggest predictors of AI adoption and whether employees feel that AI increases their productivity is whether management encourages it and provides training,” Alex Imas, professor of behavioural science and economics at Chicago Booth, said.
MacLachlan said Amazon provides different training and resources for people across the company, including structured options. “Employees are encouraged to use the tools themselves as a learning mechanism, adopting a learn-as-you-work approach that is proving to be one of the most practical and effective methods of AI adoption across the company,” she said.
An AI-fueled shift to surveillance

Along with the productivity challenges that have come with Amazon’s AI push, workers said it’s also making them feel surveilled.
For years, each morning when Amazon employees logged in to work, an internal system called Amazon Connections would greet them with a message and ask for feedback on topics like how their teams were functioning, or how satisfied they felt with their work. Over the last year, these questions have come to center less on human factors and more on AI.
Maria, a former product manager who was laid off from Amazon in January, said questions asking her about her career or team shifted to more often focus on AI: “‘Are you using AI in your daily work?,’ ‘How often are you using it?,’ ‘Do you think that you’re a power user?,’ or ‘Is AI a priority in your organization?’”.
Then there are more obvious indicators of surveillance. Workers said managers at Amazon have a dashboard where they track their team members’ AI use, including if they’re using certain tools and how often they do so. (The Information first reported this in February.)
Jack, the software developer who’s worked at Amazon for more than a decade, said the company also launched a different dashboard, which the Guardian has viewed, so teams could see their generative AI adoption, engagement and depth of usage. “Every team treats it differently,” he said, with some managers using it with a goal of getting at least 80% of their team using AI tools weekly.
Sarah said her team’s principal engineer told her and his other reports he checks this dashboard daily. “He’s really been pushing our AI usage,” she said.
“Of course we want to understand what tools our teams are using and whether those tools are working well for them or could be improved,” said MacLachlan.
The inevitable result of AI tools getting deployed at scale is surveillance, according to Nick Srnicek, author of Platform Capitalism and a senior lecturer in digital economy at King’s College London. “The rushed deployment of AI means an uncritical expansion of surveillance since these tools increasingly require detailed knowledge of personal workflows and data,” he said. “To make them more capable means giving management greater insight and control over workers’ everyday activities.”
Workers also said they suspect their career advancement is increasingly dependent on their enthusiastic embrace of AI.
“We have promotion documents which have a template with questions like, ‘What has this person done?’, ‘What impact did it have?’ – and now it also has a question asking, ‘How [did] they leverage AI?’,” said Lisa. “I think they want to only keep the people who support this investment [in AI] and are going to try and filter out people who do not support it or have concerns about it.” The Wall Street Journal reported in late February that at Amazon, “managers do consider who is all-in on AI when it comes to promotions”.
“While we expect employees to use resources – including AI – to make work more engaging and improve customers’ lives, we don’t instruct managers to consider AI utilization as part of our evaluation process,” said MacLachlan. “Instead, we focus on AI adoption and sharing best practices to celebrate innovation and operational efficiency gains across the company.”
At the same time, Andy Jassy, Amazon CEO, hasn’t been shy about his AI expectations for his employees. In a company-wide email last June, he predicted that AI-driven productivity gains would reduce the company’s corporate workforce, and urged workers to embrace AI. “Educate yourself, attend workshops and take trainings, use and experiment with AI whenever you can, participate in your team’s brainstorms to figure out how to invent for our customers more quickly and expansively, and how to get more done with scrappier teams,” he wrote.
The unspoken math

That same company-wide email prompted heavy internal pushback at Amazon last summer, with employees slamming Jassy’s leadership and speaking of the demoralizing impact of the company’s AI push, according to Business Insider. Months later, over 1,000 workers signed a petition that raised concerns about the company’s “aggressive rollout” of AI tools.
As Amazon has laid off thousands of workers, it’s shared growing revenue numbers each quarter. Though Jassy has repeatedly said that these layoffs are neither “financially-driven” nor AI-driven, for Maria, all of this adds up.
“If you say you automated away two hours of someone’s job, you need to convert that into savings on that job title,” she said, explaining the company’s logic behind cutting jobs. “That’s the unspoken math of what they’re doing.”
Jack keeps thinking about comments Jassy made during a companywide all-hands meeting last spring. According to a Business Insider report about this meeting, Jassy responded to a question about running Amazon as “the world’s largest startup”, and said they want to be “scrappy” to “do a lot more things”. He also warned that their competitors are the “most technically able, most hungry” companies, including startups “working seven days a week, 15 hours a day”.
“All of those things put together was an implicit threat that the people remaining at the company are expected to work longer and harder,” said Jack. It “really struck home to me that if [Amazon] can’t amass profits with endless growth, then it can get a little bit more by squeezing it out of the people working for it”.
<https://www.theguardian.com/technology/ng-interactive/2026/mar/11/amazon-art...>
No, I didn’t mean any idealism, Guido; I was too concise. By “a radically alternative political proposal” I mean not only ideas or visions (which are, however, a necessary though insufficient precondition), but above all something with the capacity to make a political difference. What would that be? /Vaste programme/.... In the meantime, however, we can and must push back against the “common sense” with which people talk and write about AI and, more generally, about technology. Because we in this forum may well have understood everything, but out there the vast majority has internalized a way of thinking about technology that reminds me of Flaiano (actually Barilli): everyone is eager to rush to the rescue of the winner... So even just at the level of ideas, discourses, narratives, frames and paradigms, there is still plenty of work to do. Ciao, JC On 15/03/26 09:20, Guido Vetere wrote:
I well understand your irritation at this naturalization of technology: it is an old ideological strategy.
That said, it seems to me that the truly decisive point lies a little further downstream. It is not enough to assert that AI /should/ serve collective wellbeing rather than private accumulation. That much, after all, a great many people already believe. The problem is not the absence of visions, but the limited capacity to realize even the most feasible of them (for example: building autonomous social platforms).
Marx said it in the /Theses on Feuerbach/: it is not consciousness that changes the world; the world changes when material practices and social relations change. The “right” ideas almost always exist already. What is missing is the political, economic and scientific capacity to make them operative.
For this reason, in my view, the interesting question is not so much which radical vision of AI we should formulate (there are already many), but which policies, which research programs, which social alliances and economic arrangements could actually steer its development in a genuinely social direction.
Otherwise we risk a paradox: criticizing the naturalization of technology only to rely, symmetrically, on the performative magic of ideas. As if it were enough to say “AI must serve the community” for it to do so, or as if saying “enough with AI” (Tajani-style) could change the course of events, or as if proclaiming “we need a CERN for AI” at some conference would make anything concrete happen.
Have a good Sunday,
G.
On 15 Mar 2026, at 08:55, J.C. DE MARTIN via nexa <nexa@server-nexa.polito.it> wrote:
It really is fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discourses, without exception, is plain: AI is treated exactly as if it were “nature”. “AI”, in short, just happens, “AI” occurs... and, in occurring, “unfortunately” leaves some workers out on the street. Everything is offloaded onto the workers, who can only endure it or, if they are young, must rack their brains trying to figure out (with a crystal ball?) which professions will be, X years from now, less exposed to technological unemployment... (shall we all become plumbers, bike couriers, bricklayers?). All of it ideologically justified by those who rush in to explain that this is as it should be, that it is “efficient”, that in the long run it is best for everyone, ignoring that what happens in the long run is determined by power relations, not by a good fairy.
In other words, in the Europe and United States of the 21st century (the century of Thiel, Musk, Merz, BlackRock, etc.), a right is postulated to deploy any technological innovation whatsoever, entirely regardless of its social consequences. In other words, only one right exists, and an absolute one at that: the right to make profits (here and now). Nothing else counts. Families end up destitute? Entire cities or social classes find themselves without a livelihood overnight? That is those people’s problem or, at most, a problem to be left to the very welfare state that the same Thiel, Musk, Merz, etc. would like to dismantle, leaving, by strict logic, only one future option: the physical elimination of the many human beings who are no longer useful to their lordships (or, in the more “humane” version: let the useless subhumans go on living on some form of basic income, shut in their little rooms with drugs and a virtual reality headset, where they won’t be a nuisance).
It is urgent to develop a radically alternative political proposal: a vision of the world in which AI (and, in general, any technology) must serve to improve the life of the community, rather than being almost exclusively a tool in the hands of a few for accumulating ever more capital and power at the expense of a great many defenseless people. (And that is without even mentioning AI in the service of systematic violence...)
Juan Carlos
On 12/03/26 14:29, Daniela Tafani wrote:
Amazon is determined to use AI for everything – even when it slows down work Corporate employees said Amazon’s race to roll out AI is leading to surveillance, slop and ‘more work for everyone’.
Varsha Bansal Wed 11 Mar 2026
Jack Dorsey, the Block CEO, said outright that AI was behind his 40% staffing cuts, while Pinterest and Autodesk said they were redirecting investments to AI. Amazon has waffled in explaining how AI factors into its layoff decisions, saying both that it would lead to reductions, but that recent cuts weren’t AI-driven. The company said in February it would spend some $200bn this year on AI infrastructure and announced a $50bn investment in OpenAI.
In a moment of rising anxiety about AI and work, the decisions Amazon makes around automation – and even how it talks about these shifts – will be consequential for not just its massive workforce, but for people in industries around the world. Amazon is the second-largest employer in the US and has long influenced workplace practices across both white collar and blue collar industries.
“There’s a lot of talk among corporate employees about how some of these practices – about performance, surveillance and monitoring – are somewhat imported from the warehouse and the drivers space, and that it is Amazon expanding this model of labor to white collar workers,” Jack, a software engineer at Amazon for more than a decade, said. “It does feel like we’re at the vanguard of a new stage in employer relations with the advent of AI.”
I agree; this is the reason why, when we are asked to speak about AI (as at the Accademia delle Scienze, but anywhere), it is best to: i. try to redirect the request toward topics that are less automatically celebratory, e.g. digital ecosystems; and ii. if i. does not work, make clear (to the interlocutors and then to the audience) that AI is an instrument of mass destruction and control, alongside the fine (?) things that are said about it.

Prof. Avv. Marco Ricolfi
If you are not the intended recipient be aware that any use, copying or distribution of the information contained herein is prohibited pursuant to Article 616 of the Italian Penal Code and (EU) Regulation 2016/679. If the communication has been sent to you in error, please notify us by telephone on (+39) 011.554.54.11, or reply to the e-mail. Please then delete the e-mail and any attachments from your system. Da: nexa <nexa-bounces@server-nexa.polito.it> Per conto di J.C. De Martin via nexa Inviato: domenica 15 marzo 2026 09:34 A: Guido Vetere <vetere.guido@gmail.com> Cc: J.C. DE MARTIN <juancarlos.demartin@polito.it>; Nexa <nexa@server-nexa.polito.it> Oggetto: Re: [nexa] Amazon is determined to use AI for everything – even when it slows down work No, non intendevo alcun idealismo, Guido, sono stato troppo conciso. Con l'espressione "una proposta politica radicalmente alternativa" intendo non solo idee o visioni (che sono però una precondizione necessaria, anche se insufficiente), ma soprattutto qualcosa che abbia capacità di incidere politicamente. Che cosa? Vaste programme.... Intanto però possiamo e dobbiamo contrastare il "senso comune" con cui si parla e si scrive di IA e più in generale di tecnologia. Perché noi in questa sede possiamo anche aver capito tutto, ma là fuori la vasta maggioranza ha introiettato un modo di pensare alla tecnologia che mi fa venire in mente Flaiano (in realtà Barilli), ovvero, sono tutti ansiosi di correre in soccorso del vincitore... Quindi anche solo a livello delle idee, di discorsi, di narrazioni, di frame, di paradigmi c'è comunque parecchio lavoro da fare. Ciao, JC On 15/03/26 09:20, Guido Vetere wrote: Capisco bene la tua irritazione per questa naturalizzazione della tecnologia: si tratta di una vecchia strategia ideologica. Detto questo, mi sembra che il punto davvero decisivo stia un po’ più a valle. Non basta affermare che l’IA dovrebbe servire al benessere collettivo invece che all’accumulazione privata. 
Deep down, a great many people already think this. The problem is not the absence of visions, but the scant capacity to realize even the most feasible ones (for example: building autonomous social platforms).

Marx said it in the /Theses on Feuerbach/: it is not consciousness that changes the world; the world changes when material practices and social relations change. The "right" ideas almost always exist already. What is missing is the political, economic and scientific capacity to make them operational.

For this reason, in my view, the interesting question is not so much which radical vision of AI we should formulate (there are already many), but which policies, which research programmes, which social alliances and economic arrangements could actually steer its development in a genuinely social direction.

Otherwise we risk a paradox: criticizing the naturalization of technology only to rely, symmetrically, on the performative magic of ideas. As if it were enough to say "AI must serve the community" for it to do so, or as if saying "enough with AI" (Tajani-style) could change the course of events, or as if proclaiming "we need a CERN for AI" at some conference made anything concretely happen.

Have a good Sunday,
G.

On 15 Mar 2026, at 08:55, J.C. DE MARTIN via nexa <nexa@server-nexa.polito.it> wrote:
It really is fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discourses, without exception, is plain: AI is treated exactly as if it were "nature". "AI", in short, just happens... and, as it happens, workers "unfortunately" end up on the street. Everything is offloaded onto the workers, who can only endure it or, if young, must rack their brains trying to figure out (with a crystal ball?) which professions will be, X years from now, less exposed to technological unemployment... (shall we all become plumbers, bike couriers, bricklayers?).
All of it ideologically justified by those who rush in to explain that this is how it should be, that it is "efficient", that in the long run it is best for everyone, ignoring that what happens in the long run is determined by power relations, not by the good fairy.

In other words, in the Europe and USA of the 21st century (that of Thiel, Musk, Merz, BlackRock, etc.), the right is postulated to deploy any technological innovation whatsoever, entirely regardless of its social consequences. In other words, only one right exists, and an absolute one at that: the right to make profits (here and now). Nothing else counts. Families are ruined? Entire cities or social classes find themselves without a livelihood from one day to the next? Those people's problem, or, at most, a problem to be left to the very welfare state that the same Thiel, Musk, Merz, etc. would like to dismantle, logically leaving as the only future option the physical elimination of the many human beings who are no longer useful to their lordships (or, in the more "humane" version: let the useless subhumans live on with some form of basic income, shut in their little rooms with drugs and a VR headset, so they don't get in the way).

It is urgent to work out a radically alternative political proposal: a vision of the world in which AI (and any technology in general) must serve to improve the life of the community, rather than being almost exclusively an instrument in the hands of a few for accumulating ever more capital and power at the expense of a great many defenceless people. (And let us not even speak of AI in the service of systematic violence...)

Juan Carlos

On 12/03/26 14:29, Daniela Tafani wrote:
Amazon is determined to use AI for everything – even when it slows down work
Corporate employees said Amazon’s race to roll out AI is leading to surveillance, slop and ‘more work for everyone’.
Varsha Bansal
Wed 11 Mar 2026

When Dina, a software developer based in New York, joined Amazon two years ago, her job was to write code. Now, it’s mostly fixing what artificial intelligence breaks.

The internal AI tool she’s expected to use, called Kiro, frequently hallucinates and generates flawed code, she says. Then she has to dig through and correct the sloppy code it creates, or just revert all changes and start again. She says it feels like “trying to AI my way out of a problem that AI caused”.

“I and many of my colleagues don’t feel that it actually makes us that much faster,” Dina said. “But from management, we are certainly getting messaging that we have to go faster, this will make us go faster, and that speed is the number one priority.”

Just days after speaking to the Guardian, Dina was laid off.

Lisa, a supply chain engineer who has worked at Amazon for over a decade, says that AI tools at work have been helpful to her only in about one in every three attempts. And even then, she often finds issues and has to consult with colleagues to verify and correct the tools’ results, which takes up more time than if she’d done the task without AI.

She doesn’t take issue with the AI tools themselves, but rather with the company’s logic in pushing all employees to use them daily. “You don’t look at the problem and go, ‘How do I use this hammer I have?’” she said.
“You look at it and go, ‘Is this a problem for a hammer or something else?’”

More than half a dozen current and former Amazon corporate employees, in roles ranging from software engineer to user experience researcher to data analyst, told the Guardian that Amazon is pressing employees to integrate AI across all aspects of their work, even though these workers say this push is hurting productivity. They say Amazon is rolling out AI use in a haphazard way while also tracking their AI use, and they’re worried the company is essentially using them to train their eventual bot replacements. All of this, they said, is demoralizing. The Guardian granted these workers anonymity because of their fear of professional repercussions.

“We have hundreds of thousands of corporate employees in a wide range of roles across many different businesses, each of which is using AI in different ways to learn about what works best for their use cases,” Montana MacLachlan, an Amazon spokesperson, said. “While different employees may have different experiences, what we hear from the vast majority of our teams is that they’re getting a lot of value out of the AI tools that they use day-to-day.”

This pressure comes as Amazon has laid off 30,000 workers in the last four months – nearly 10% of its roughly 350,000-strong corporate workforce. Its cuts are part of a wave of recent AI-connected tech layoffs, including at Block, Pinterest and Autodesk. Exactly how much these companies will be able to rely on AI to replace headcount is unclear, and each company has given an array of sometimes contradictory reasons for reductions.
Jack Dorsey, the Block CEO, said outright that AI was behind his 40% staffing cuts, while Pinterest and Autodesk said they were redirecting investments to AI. Amazon has waffled in explaining how AI factors into its layoff decisions, saying both that it would lead to reductions and that recent cuts weren’t AI-driven. The company said in February it would spend some $200bn this year on AI infrastructure and announced a $50bn investment in OpenAI.

In a moment of rising anxiety about AI and work, the decisions Amazon makes around automation – and even how it talks about these shifts – will be consequential not just for its massive workforce, but for people in industries around the world. Amazon is the second-largest employer in the US and has long influenced workplace practices across both white collar and blue collar industries.

“There’s a lot of talk among corporate employees about how some of these practices – about performance, surveillance and monitoring – are somewhat imported from the warehouse and the drivers space, and that it is Amazon expanding this model of labor to white collar workers,” Jack, a software engineer at Amazon for more than a decade, said. “It does feel like we’re at the vanguard of a new stage in employer relations with the advent of AI.”

While Amazon has a reputation for being a tough place to work, its AI campaign has pressurized the workplace further, workers said. “It’s worse now,” said Denny, a software engineer who works in the retail space at the company. “If we don’t pivot ... then we risk becoming obsolete and being let go in the next layoff.”

Whenever there’s a task at hand, the biggest question managers ask is whether it can be done faster with AI tools, according to Denny. This is leading employees to use AI tools just for the sake of it. Recently, someone on Denny’s team shared that an internal AI agent had saved him about a week of developer effort on a feature.
But when Denny looked at the actual code review, he found dozens of comments from colleagues pointing out basic issues. The AI-generated code was full of slop.

“In the end, my guess is that the developer cycle is not going to change, and [could] even be potentially longer,” said Denny. “This pressure to use [AI] has resulted in worse quality code, but also just more work for everyone.”

Denny was one of several workers who told the Guardian that they’re pressured to use an overwhelming array of AI tools, many of which were hastily developed in internal hackathons, and that they then have to spend time answering surveys about their experience with the tools. “I would get shown these random tools by my manager who’d be like: ‘Why don’t you try using this thing?’, and it was just the result of a hackathon,” said Denny. He says the tools are “half-baked” and unhelpful, and in fact add to his workload because he has to vet them.

Amazon typically organizes quarterly hackathons to encourage engineers to develop new projects. Sometime last year, Denny recalls, the company switched primarily to generative AI hackathons, in which the majority of projects ended up being developer-productivity tools.

“We don’t mandate teams use AI tools,” said Amazon’s MacLachlan. “However, we believe these tools can help employees work more efficiently and automate time-consuming, undifferentiated tasks.”

There have also been public slip-ups that seem connected to Amazon’s embrace of AI. According to a February FT report, Amazon recently experienced at least two outages because of issues with the company’s internal AI tools, including a 13-hour interruption to a customer-facing system in December after some engineers allowed its AI tool “to make certain changes”. Amazon, however, said that an employee, rather than AI, caused the service interruption.
The FT reported on Tuesday that Amazon would convene engineers to explore “a spate of outages, including incidents tied to the use of AI coding tools”.

“I think if you continue to push people to use AI tools in every single aspect, you’re going to get more errors like that,” Sarah, an Amazon software engineer, said. Sarah said that AI can be useful, but its potential is best realized when engineers decide how to use it. But at Amazon, even when AI is not suited for a task, she’s now expected to train it. “We have to write out detailed procedures so that the AI can understand it and give better output,” said Sarah. “Part of my new job role, it feels like, is being asked to train the AI to essentially replace you.” She’s early in her career and worries that offloading her work to AI is stunting her learning curve.

Forcing employees to adopt tools usually backfires, according to Ifeoma Ajunwa, founding director of the AI and Future of Work Program at Emory University and the author of The Quantified Worker. “Generally, employees are in a better position [than management] to determine what tools can aid productivity,” she said.

Meanwhile, Amazon workers are often having to seek out training in AI best practices on their own. Will, a user experience researcher, said Amazon offers employees plenty of AI training videos on its learning portals, though most of them are optional. When he’s attended training sessions, “the focus is always, ‘here’s how to build something as quickly as possible’”. He said trainers – typically peer employees who are also AI power users – advise reviewing each step carefully before letting AI start building. At the same time, Will said: “I have been in several trainings where the instructor says you can just ask the AI to check its own work.” However, you can’t fully rely on AI to detect its own mistakes; that’s something human judgment is better suited for.
“One of the biggest predictors of AI adoption and whether employees feel that AI increases their productivity is whether management encourages it and provides training,” Alex Imas, professor of behavioural science and economics at Chicago Booth, said.

MacLachlan said Amazon provides different training and resources for people across the company, including structured options. “Employees are encouraged to use the tools themselves as a learning mechanism, adopting a learn-as-you-work approach that is proving to be one of the most practical and effective methods of AI adoption across the company,” she said.

An AI-fueled shift to surveillance

Along with the productivity challenges that have come with Amazon’s AI push, workers said it is also making them feel surveilled. For years, each morning when Amazon employees logged in to work, an internal system called Amazon Connections would greet them with a message and ask for feedback on topics like how their teams were functioning, or how satisfied they felt with their work. Over the last year, these questions have increasingly centered less on human factors and more on AI. Maria, a former product manager who was laid off from Amazon in January, said questions about her career or team shifted to focus more often on AI: “‘Are you using AI in your daily work?,’ ‘How often are you using it?,’ ‘Do you think that you’re a power user?,’ or ‘Is AI a priority in your organization?’”

Then there are more obvious indicators of surveillance. Workers said managers at Amazon have a dashboard where they track their team members’ AI use, including whether they’re using certain tools and how often they do so. (The Information first reported this in February.)
Jack, the software developer who’s worked at Amazon for more than a decade, said the company also launched a different dashboard, which the Guardian has viewed, so teams could see their generative AI adoption, engagement and depth of usage. “Every team treats it differently,” he said, with some managers setting a goal of getting at least 80% of their team using AI tools weekly. Sarah said her team’s principal engineer told her and his other reports that he checks this dashboard daily. “He’s really been pushing our AI usage,” she said.

“Of course we want to understand what tools our teams are using and whether those tools are working well for them or could be improved,” said MacLachlan.

The inevitable result of AI tools being deployed at scale is surveillance, according to Nick Srnicek, author of Platform Capitalism and a senior lecturer in digital economy at King’s College London. “The rushed deployment of AI means an uncritical expansion of surveillance since these tools increasingly require detailed knowledge of personal workflows and data,” he said. “To make them more capable means giving management greater insight and control over workers’ everyday activities.”

Workers also said they suspect their career advancement increasingly depends on their enthusiastic embrace of AI. “We have promotion documents which have a template with questions like, ‘What has this person done?’, ‘What impact did it have?’ – and now it also has a question asking, ‘How [did] they leverage AI?’,” said Lisa. “I think they want to only keep the people who support this investment [in AI] and are going to try and filter out people who do not support it or have concerns about it.” The Wall Street Journal reported in late February that at Amazon, “managers do consider who is all-in on AI when it comes to promotions”.
“While we expect employees to use resources – including AI – to make work more engaging and improve customers’ lives, we don’t instruct managers to consider AI utilization as part of our evaluation process,” said MacLachlan. “Instead, we focus on AI adoption and sharing best practices to celebrate innovation and operational efficiency gains across the company.”

At the same time, Andy Jassy, Amazon’s CEO, hasn’t been shy about his AI expectations for employees. In a company-wide email last June, he predicted that AI-driven productivity gains would reduce the company’s corporate workforce, and urged workers to embrace AI. “Educate yourself, attend workshops and take trainings, use and experiment with AI whenever you can, participate in your team’s brainstorms to figure out how to invent for our customers more quickly and expansively, and how to get more done with scrappier teams,” he wrote.

The unspoken math

That same company-wide email prompted heavy internal pushback at Amazon last summer, with employees slamming Jassy’s leadership and speaking of the demoralizing impact of the company’s AI push, according to Business Insider. Months later, over 1,000 workers signed a petition raising concerns about the company’s “aggressive rollout” of AI tools.

As Amazon has laid off thousands of workers, it has reported growing revenue each quarter. Though Jassy has repeatedly said that these layoffs are neither “financially-driven” nor AI-driven, for Maria, all of this adds up. “If you say you automated away two hours of someone’s job, you need to convert that into savings on that job title,” she said, explaining the company’s logic behind cutting jobs. “That’s the unspoken math of what they’re doing.”

Jack keeps thinking about comments Jassy made during a companywide all-hands meeting last spring.
According to a Business Insider report about this meeting, Jassy responded to a question about running Amazon as “the world’s largest startup”, saying the company wants to be “scrappy” to “do a lot more things”. He also warned that its competitors are the “most technically able, most hungry” companies, including startups “working seven days a week, 15 hours a day”.

“All of those things put together was an implicit threat that the people remaining at the company are expected to work longer and harder,” said Jack. It “really struck home to me that if [Amazon] can’t amass profits with endless growth, then it can get a little bit more by squeezing it out of the people working for it”.

<https://www.theguardian.com/technology/ng-interactive/2026/mar/11/amazon-artificial-intelligence>
Dear Juan Carlos,

Replacing labour with capital, even at higher initial cost, fits perfectly with the extractive logic of industrial capitalism if it goes unregulated. The reasons, I presume, are that the machine embodies not only present labour but future labour as well: doing without workers, and learning to do without them ever more, will increase profit and above all the /predictability/ of the industrial process, sheltered from unions, social contributions and pandemics. Socialist policies in the past aimed at regulation by redistributing profits, but I do not believe they were equally inclined to substitute human labour for that of the machine.

An alternative political proposal "in which any technology must serve to improve the life of the community, rather than being almost exclusively an instrument in the hands of a few for accumulating ever more capital and power at people's expense" can, I believe, be shared by many; but how do we apply it in a political context in which industry prevails? A truly radical proposal will be seen as anti-industrialist: the worst heresy of modernity. As I see it, what is needed is not so much a new way of thinking about technology as about its use by industry, whether capitalist or planned (in China the tune does not seem to me any different). We need to lean on an idea of industrial capitalism oriented toward the community (and the environment), of which we have seen too few examples, always praised, but praised the way saints usually are: certainly without any thought of emulating them. It is no longer enough to redistribute profits; we must discuss the relative roles of industry and human labour (including linguistic labour), and which of the two should prevail over the other.
What we see happening with Amazon is already an advanced application of this extractive process, but worse is on the horizon: the industrialization of every process of human life, with the intention of continuing to extract knowledge to be attached to ever more automated processes. Below is an example of "AI agents" that call upon humans (even unwitting ones) to complete their tasks: the idea is that of the human-as-API:

<https://www.noemamag.com/ai-agents-are-recruiting-humans-to-observe-the-offl...>

When an agent hits this wall, it does what software always does: It calls an application programming interface (API), a mechanism that enables one system to communicate with another. Only now, the API is a human. […] Our agentic future is being built upon physical-world subtasks routed to humans so agents can proceed. A core worry <https://knightcolumbia.org/content/levels-of-autonomy-for-ai-agents-1> is that agents will become autonomous enough to dictate the terms of human participation while still recruiting us for sensing and liability. At an institutional scale, this is the beginning of a world in which people are less /in the loop/ and more /on call/, less empowered <https://arxiv.org/abs/2601.19062> by the technology and more enslaved by its tempo. […] The sensing is distributed. The logic is centralized. The burden falls unevenly. This is an externality that current evaluation frameworks miss entirely. An agent that helps me by querying the people around me is externalizing costs onto my social network. The most responsive people become the most burdened. Responsiveness becomes a tax. […] Once human sensing is treated as an explicit part of the system, uncomfortable patterns are sure to emerge. More sensing can degrade performance. Overload a content moderator with AI-flagged posts and she might default to “approve,” respond carelessly or stop reading.
The agent then becomes confidently wrong in its assessments because the person feeding it information disengaged. Over time, constant micro-verifications erode professional judgment. The human gets better at confirming but worse at reasoning. Being observed by an AI agent can also change what can be reliably measured. A person who knows she is being monitored may not report the world accurately. People may begin to self-censor, optimize for what the system rewards or stop surfacing inconvenient truths. The cost extends beyond accuracy. Research shows that persistent surveillance induces hypervigilance and erodes mental health. Sometimes, less information can produce better outcomes. For example, withholding demographic data from a hiring agent may prevent discrimination that full access would enable. The best agent knows what matters. No more, no less.

Warm regards,
Alberto

On 3/15/26 09:33, J.C. De Martin via nexa wrote:
No, non intendevo alcun idealismo, Guido, sono stato troppo conciso.
Con l'espressione "una proposta politica radicalmente alternativa" intendo non solo idee o visioni (che sono però una precondizione necessaria, anche se insufficiente), ma soprattutto qualcosa che abbia capacità di incidere politicamente. Che cosa? /Vaste programme/....
Intanto però possiamo e dobbiamo contrastare il "senso comune" con cui si parla e si scrive di IA e più in generale di tecnologia. Perché noi in questa sede possiamo anche aver capito tutto, ma là fuori la vasta maggioranza ha introiettato un modo di pensare alla tecnologia che mi fa venire in mente Flaiano (in realtà Barilli), ovvero, sono tutti ansiosi di correre in soccorso del vincitore...
Quindi anche solo a livello delle idee, di discorsi, di narrazioni, di frame, di paradigmi c'è comunque parecchio lavoro da fare.
Ciao,
JC
On 15/03/26 09:20, Guido Vetere wrote:
Capisco bene la tua irritazione per questa naturalizzazione della tecnologia: si tratta di una vecchia strategia ideologica.
Detto questo, mi sembra che il punto davvero decisivo stia un po’ più a valle. Non basta affermare che l’IA /dovrebbe /servire al benessere collettivo invece che all’accumulazione privata. Questo, in fondo, lo pensano già moltissime persone. Il problema non è l’assenza di visioni, ma la scarsa capacità di realizzare anche le più fattibili (ad esempio: costruire piattaforme sociali autonome).
Marx lo diceva nelle /Tesi su Feuerbach/: non è la coscienza che cambia il mondo; il mondo cambia quando cambiano le pratiche materiali e i rapporti sociali. Le idee “giuste” esistono quasi sempre già. Ciò che manca è la capacità politica, economica, scientifica, per renderle operative.
Per questo motivo, secondo me, la domanda interessante non è tanto quale visione radicale dell’IA dovremmo formulare (ce ne sono già molte), ma quali politiche, quali ricerche, alleanze sociali e dispositivi economici potrebbero effettivamente orientarne lo sviluppo in una direzione autenticamente sociale.
Altrimenti si rischia un paradosso: criticare la naturalizzazione della tecnologia per poi affidarsi, simmetricamente, alla magia performativa delle idee. Come se bastasse dire “l’IA deve servire alla collettività” perché lo faccia, o dire “basta con l’AI" (stile Tajani) possa cambiare il corso degli eventi, o enunciare “serve il CERN per l’AI” in qualche convegno faccia accadere concretamente qualcosa.
Buona domenica,
G.
Il giorno 15 mar 2026, alle ore 08:55, J.C. DE MARTIN via nexa <nexa@server-nexa.polito.it> ha scritto:
E' davvero affascinante (e infuriante)... rapporto dopo rapporto, articolo dopo articolo, è evidente la premessa ideologica di tutti questi discorsi, nessuno escluso: l'IA viene considerata esattamente come se fosse "natura". L'"IA", insomma, capita, l'"IA" avviene... e, avvenendo, "purtroppo" dei lavoratori si ritrovano per strada. Tutto viene scaricato sui lavoratori, che possono solo subire, oppure, se giovani, devono spremersi le meningi per cercare di capire (con la palla di vetro?) quali professioni saranno - tra X anni - meno a rischio di disoccupazione tecnologica... (tutti idraulici, ciclo-fattorini, muratori?). Il tutto giustificato ideologicamente da chi accorre subito a spiegare che è giusto così, che è "efficiente", che nel lungo termine è la cosa migliore per tutti, ignorando che quello che capiterà nel lungo termine è determinato dai rapporti di forza, non dalla fatina buona.
In altre parole, nell'Europa e USA del 21 secolo (quello dei Thiel, Musk, Merz, Blackrock, ecc.) si postula il diritto di mettere in campo qualsiasi innovazione tecnologica totalmente a prescindere dalle conseguenze sociali. In altre parole, esiste solo un diritto, per di più assoluto: il diritto di fare profitti (qui e subito). Tutto il resto non conta. Famiglie vanno sul lastrico? Intere città o ceti sociali si trovano senza sussistenza dall'oggi al domani? Problemi di quelle persone, o, al limite, problema da lasciare a quello stesso Welfare State che però gli stessi Thiel, Musk, Merz, ecc. vorrebbero smantellare, lasciando a rigor di logica come unica opzione futura l'eliminazione fisica dei tanti esseri umani che ormai non sono più utili per lor signori (oppure, versione più "umana": continuino pure vivere i subumani inutili con qualche forma di reddito di base, chiusi nelle loro stanzette con droghe e un visore di realtà virtuale, così non danno fastidio).
È urgente elaborare una proposta politica radicalmente alternativa. Una visione del mondo nella quale l'IA (e in generale qualsiasi tecnologia) deve servire a migliorare la vita della collettività, non essere quasi esclusivamente uno strumento nelle mani di pochi per accumulare ancora più capitale e potere sulla pelle di moltissime persone indifese? (E non parliamo dell'IA al servizio della violenza sistematica...)
Juan Carlos
On 12/03/26 14:29, Daniela Tafani wrote:
Amazon is determined to use AI for everything – even when it slows down work Corporate employees said Amazon’s race to roll out AI is leading to surveillance, slop and ‘more work for everyone’.
Varsha Bansal Wed 11 Mar 2026
When Dina, a software developer based in New York, joined Amazon two years ago, her job was to write code. Now, it’s mostly fixing what artificial intelligence breaks.
The internal AI tool she’s expected to use, called Kiro, frequently hallucinates and generates flawed code, she says. Then she has to dig through and correct the sloppy code it creates, or just revert all changes and start again. She says it feels like “trying to AI my way out of a problem that AI caused”.
“I and many of my colleagues don’t feel that it actually makes us that much faster,” Dina said. “But from management, we are certainly getting messaging that we have to go faster, this will make us go faster, and that speed is the number one priority.”
Just days after speaking to the Guardian, Dina was laid off.
Lisa, a supply chain engineer who has worked at Amazon for over a decade, says that AI tools at work have been helpful to her only in about one in every three attempts. And even then, she often finds issues and has to consult with colleagues to verify and correct their results, which takes up more time than if she’s done the task without AI.
She doesn’t take issue with the AI tools themselves, but rather the company’s logic in pushing all employees to use them daily. “You don’t look at the problem and go, ‘How do I use this hammer I have?’ she said. “You look at it and go, ‘Is this a problem for a hammer or something else?’”
Group of figures inside a glowing digital space, facing a large window that shows a landscape with trees and sky ‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI Read more More than a half a dozen current and former Amazon corporate employees, in roles ranging from software engineer to user experience researcher to data analyst, told the Guardian that Amazon is pressing employees to integrate AI across all aspects of their work, even though these workers say this push is hurting productivity. They say Amazon is rolling out AI use in a haphazard way while also tracking their AI use, and they’re worried the company is essentially using them to train their eventual bot replacements. All of this, they said, is demoralizing. The Guardian granted these workers anonymity because of their fear of professional repercussions.
“We have hundreds of thousands of corporate employees in a wide range of roles across many different businesses, each of which is using AI in different ways to learn about what works best for their use cases,” Montana MacLachlan, an Amazon spokesperson, said. “While different employees may have different experiences, what we hear from the vast majority of our teams is that they’re getting a lot of value out of the AI tools that they use day-to-day.”
This pressure comes as Amazon has laid off 30,000 workers in the last four months – nearly 10% of its roughly 350,000 corporate workforce. Its cuts are part of a wave of recent AI-connected tech layoffs, including at Block, Pinterest and Autodesk. Exactly how much these companies will be able to rely on AI to replace headcount is unclear, and each company has given an array of sometimes contradictory reasons for reductions. Jack Dorsey, the Block CEO, said outright that AI was behind his 40% staffing cuts, while Pinterest and Autodesk said they were redirecting investments to AI. Amazon has waffled in explaining how AI factors into its layoff decisions, saying both that it would lead to reductions, but that recent cuts weren’t AI-driven. The company said in February it would spend some $200bn this year on AI infrastructure and announced a $50bn investment in OpenAI.
In a moment of rising anxiety about AI and work, the decisions Amazon makes around automation – and even how it talks about these shifts – will be consequential for not just its massive workforce, but for people in industries around the world. Amazon is the second-largest employer in the US and has long influenced workplace practices across both white collar and blue collar industries.
“There’s a lot of talk among corporate employees about how some of these practices – about performance, surveillance and monitoring – are somewhat imported from the warehouse and the drivers space, and that it is Amazon expanding this model of labor to white collar workers,” Jack, a software engineer at Amazon for more than a decade, said. “It does feel like we’re at the vanguard of a new stage in employer relations with the advent of AI.”
While Amazon has a reputation for being a tough place to work, its AI campaign has added new pressure to the workplace, workers said. “It’s worse now,” said Denny, a software engineer who works in the retail space at the company. “If we don’t pivot ... then we risk becoming obsolete and being let go in the next layoff.”
Whenever there’s a task at hand, the biggest question managers ask is whether it can be done faster with AI tools, according to Denny. This is leading employees to use AI tools just for the sake of it. Recently, someone on Denny’s team shared that an internal AI agent had saved him about a week of developer effort on a feature. But when Denny looked at the actual code review, he found dozens of comments from colleagues pointing out basic issues. The AI-generated code was full of slop.
“In the end, my guess is that the developer cycle is not going to change, and [could] even be potentially longer,” said Denny. “This pressure to use [AI] has resulted in worse quality code, but also just more work for everyone.”
Denny was one of several workers who told the Guardian that they’re pressured to use an overwhelming array of AI tools, many of which were hastily developed in internal hackathons, and that they then have to spend time answering surveys about their experience with the tools.
“I would get shown these random tools by my manager who’d be like: ‘Why don’t you try using this thing?’, and it was just the result of a hackathon,” said Denny. He says the tools are “half-baked” and unhelpful, and in fact add to his workload because he has to vet them.
Amazon typically organizes quarterly hackathons to encourage engineers to develop new projects. Sometime last year, Denny recalls, the company switched primarily to generative AI hackathons, in which the majority of projects ended up being developer-productivity tools.
“We don’t mandate teams use AI tools,” said Amazon’s MacLachlan. “However, we believe these tools can help employees work more efficiently and automate time-consuming, undifferentiated tasks.”
There have also been public slip-ups that seem connected to Amazon’s embrace of AI. According to a February FT report, Amazon recently experienced at least two outages because of issues with the company’s internal AI tools, including a 13-hour interruption to a customer-facing system in December after some engineers allowed its AI tool “to make certain changes”. Amazon, however, said that an employee, rather than AI, caused the service interruption. The FT reported on Tuesday that Amazon would convene engineers to explore “a spate of outages, including incidents tied to the use of AI coding tools”.
“I think if you continue to push people to use AI tools in every single aspect, you’re going to get more errors like that,” Sarah, an Amazon software engineer, said.
Sarah said that AI can be useful, but its potential is best realized when engineers decide how to use it. But at Amazon, even when AI is not suited for a task, she’s now expected to train it. “We have to write out detailed procedures so that the AI can understand it and give better output,” said Sarah. “Part of my new job role, it feels like, is being asked to train the AI to essentially replace you.” She’s early in her career and worries that offloading her work to AI is stunting her learning curve.
Forcing employees to adopt tools, according to Ifeoma Ajunwa, founding director of the AI and Future of Work Program at Emory University and the author of The Quantified Worker, usually backfires. “Generally, employees are in a better position [than management] to determine what tools can aid productivity,” she said.
Meanwhile, Amazon workers are often having to seek out training for AI best practices on their own.
Will, a user experience researcher, said Amazon offers employees plenty of AI training videos on its learning portals, though most of them are optional. When he’s attended training sessions, “the focus is always, ‘here’s how to build something as quickly as possible’”. He said trainers – typically peer employees who are also AI power users – advise carefully reviewing each step before letting AI start building. At the same time, Will said: “I have been in several trainings where the instructor says you can just ask the AI to check its own work.” But AI cannot be fully relied on to detect its own mistakes; that is something human judgment is better suited for.
“One of the biggest predictors of AI adoption and whether employees feel that AI increases their productivity is whether management encourages it and provides training,” Alex Imas, professor of behavioural science and economics at Chicago Booth, said.
MacLachlan said Amazon provides different training and resources for people across the company, including structured options. “Employees are encouraged to use the tools themselves as a learning mechanism, adopting a learn-as-you-work approach that is proving to be one of the most practical and effective methods of AI adoption across the company,” she said.
An AI-fueled shift to surveillance

Along with the productivity challenges that have come with Amazon’s AI push, workers said it’s also making them feel surveilled.
For years, each morning when Amazon employees logged in to work, an internal system called Amazon Connections would greet them with a message and ask for feedback on topics like how their teams were functioning, or how satisfied they felt with their work. Over the last year, these questions have come to center less on human factors and more on AI.
Maria, a former product manager who was laid off from Amazon in January, said questions asking her about her career or team shifted to more often focus on AI: “‘Are you using AI in your daily work?,’ ‘How often are you using it?,’ ‘Do you think that you’re a power user?,’ or ‘Is AI a priority in your organization?’”.
Then there are more obvious indicators of surveillance. Workers said managers at Amazon have a dashboard where they track their team members’ AI use, including if they’re using certain tools and how often they do so. (The Information first reported this in February.)
Jack, the software developer who’s worked at Amazon for more than a decade, said the company also launched a different dashboard, which the Guardian has viewed, so teams could see their generative AI adoption, engagement and depth of usage. “Every team treats it differently,” he said, with some managers using it with a goal of getting at least 80% of their team using AI tools weekly.
Sarah said her team’s principal engineer told her and his other reports he checks this dashboard daily. “He’s really been pushing our AI usage,” she said.
“Of course we want to understand what tools our teams are using and whether those tools are working well for them or could be improved,” said MacLachlan.
The inevitable result of AI tools getting deployed at scale is surveillance, according to Nick Srnicek, author of Platform Capitalism and a senior lecturer in digital economy at King’s College London. “The rushed deployment of AI means an uncritical expansion of surveillance since these tools increasingly require detailed knowledge of personal workflows and data,” he said. “To make them more capable means giving management greater insight and control over workers’ everyday activities.”
Workers also said they suspect their career advancement is increasingly dependent on their enthusiastic embrace of AI.
“We have promotion documents which have a template with questions like, ‘What has this person done?’, ‘What impact did it have?’ – and now it also has a question asking, ‘How [did] they leverage AI?’,” said Lisa. “I think they want to only keep the people who support this investment [in AI] and are going to try and filter out people who do not support it or have concerns about it.” The Wall Street Journal reported in late February that at Amazon, “managers do consider who is all-in on AI when it comes to promotions”.
“While we expect employees to use resources – including AI – to make work more engaging and improve customers’ lives, we don’t instruct managers to consider AI utilization as part of our evaluation process,” said MacLachlan. “Instead, we focus on AI adoption and sharing best practices to celebrate innovation and operational efficiency gains across the company.”
At the same time, Andy Jassy, Amazon CEO, hasn’t been shy about his AI expectations for his employees. In a company-wide email last June, he predicted that AI-driven productivity gains would reduce the company’s corporate workforce, and urged workers to embrace AI. “Educate yourself, attend workshops and take trainings, use and experiment with AI whenever you can, participate in your team’s brainstorms to figure out how to invent for our customers more quickly and expansively, and how to get more done with scrappier teams,” he wrote.
The unspoken math

That same company-wide email prompted heavy internal pushback at Amazon last summer, with employees slamming Jassy’s leadership and speaking of the demoralizing impact of the company’s AI push, according to Business Insider. Months later, over 1,000 workers signed a petition that raised concerns about the company’s “aggressive rollout” of AI tools.
As Amazon has laid off thousands of workers, it’s shared growing revenue numbers each quarter. Though Jassy has repeatedly said that these layoffs are neither “financially-driven” nor AI-driven, for Maria, all of this adds up.
“If you say you automated away two hours of someone’s job, you need to convert that into savings on that job title,” she said, explaining the company’s logic behind cutting jobs. “That’s the unspoken math of what they’re doing.”
Jack keeps thinking about comments Jassy made during a companywide all-hands meeting last spring. According to a Business Insider report about this meeting, Jassy responded to a question about running Amazon as “the world’s largest startup”, and said they want to be “scrappy” to “do a lot more things”. He also warned that their competitors are the “most technically able, most hungry” companies, including startups “working seven days a week, 15 hours a day”.
“All of those things put together was an implicit threat that the people remaining at the company are expected to work longer and harder,” said Jack. It “really struck home to me that if [Amazon] can’t amass profits with endless growth, then it can get a little bit more by squeezing it out of the people working for it”.
<https://www.theguardian.com/technology/ng-interactive/2026/mar/11/amazon-art...>
Dear Juan Carlos,

I take my cue from your conclusion, on the urgency of developing a vision of the world in which AI serves the collective good, only to add one consideration in the margin.

"Artificial intelligence", understood as a family of technologies that could be developed for the benefit of people and communities, has no need of this anthropomorphic name, since it is simply software, generally proprietary, running on computers.

The expression "artificial intelligence" is needed, instead, by those who use it to name:

1. surveillance software deployed in violation of rights that are already legally protected;
2. a family of scams;
3. a political, social and cultural project.

The surveillance software and the family of scams serve the oligarchic, technological, financial and military network in carrying out a project that includes automation (the substitution of capital for labor), dehumanization, the elimination of democracies, of the welfare state and of fundamental rights, and a further shift of resources upward.

For this project, apparently malfunctioning systems, such as predictive optimization systems, in fact work very well, provided one keeps in mind that their function is to punish the poor and automate the status quo.

This project could be called Epstein tech – given its authoritarian, supremacist, colonial, eugenicist, racist, misogynist, pedophile and extractive approach – but I understand that, for marketing purposes, "artificial intelligence" sounds better. Of this project, I would not know what to salvage or redirect.

I believe that demystifying it is not politically useless. Nor is this a new question: it is the problem Kant addressed in 1784 in his essay on enlightenment, <https://btfp.sp.unipi.it/dida/kant_7/ar01s04.xhtml>.

Warm regards,
Daniela

________________________________________
From: nexa <nexa-bounces@server-nexa.polito.it> on behalf of J.C. DE MARTIN via nexa <nexa@server-nexa.polito.it>
Sent: Sunday 15 March 2026 08:55
To: Nexa
Cc: J.C. DE MARTIN
Subject: Re: [nexa] Amazon is determined to use AI for everything – even when it slows down work

It is truly fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discourses, without exception, is plain: AI is treated exactly as if it were "nature". "AI", in short, just happens, "AI" occurs... and, as it occurs, "unfortunately" workers find themselves out on the street. Everything is offloaded onto the workers, who can only endure it or, if they are young, must rack their brains trying to guess (with a crystal ball?) which professions will be – in X years' time – at lower risk of technological unemployment... (everyone a plumber, bike courier, bricklayer?). All of it ideologically justified by those who rush in to explain that this is as it should be, that it is "efficient", that in the long run it is best for everyone, ignoring that what happens in the long run is determined by power relations, not by a good fairy.

In other words, in the Europe and US of the 21st century (that of Thiel, Musk, Merz, Blackrock, etc.), the right is claimed to deploy any technological innovation whatsoever, entirely regardless of its social consequences. In other words, only one right exists, and an absolute one at that: the right to make profits (here and now). Nothing else counts. Families are ruined? Entire cities or social classes find themselves without a livelihood overnight? That is those people's problem or, at most, a problem to be left to the same welfare state that Thiel, Musk, Merz, etc. would like to dismantle, which by strict logic leaves as the only future option the physical elimination of the many human beings who are no longer useful to their lordships (or, in the more "humane" version: let the useless subhumans go on living on some form of basic income, shut in their little rooms with drugs and a virtual reality headset, so they cause no trouble).

It is urgent to develop a radically alternative political proposal: a vision of the world in which AI (and any technology in general) must serve to improve the life of the community, rather than being almost exclusively a tool in the hands of a few to accumulate ever more capital and power at the expense of a great many defenseless people. (And that is to say nothing of AI in the service of systematic violence...)

Juan Carlos

On 12/03/26 14:29, Daniela Tafani wrote:
But of course, Daniela, I agree. I have in fact been using that expression in quotation marks for years, while hoping for its abolition. I prefer, as you know, to speak of the "computerization of the world". Warm regards, Juan Carlos

On 15/03/26 15:16, Daniela Tafani wrote:
Dear Juan Carlos,
I take my cue from your conclusion, on the urgency of developing a vision of the world in which AI serves the community, only to add one observation in the margin.
"Artificial intelligence", as a family of technologies that could be developed for the benefit of people and communities, has no need of this anthropomorphic name, since it is simply software, generally proprietary, running on computers.
The expression "artificial intelligence" is necessary, instead, to those who use it to name:
1. surveillance software used in violation of rights that are already legally protected;
2. a family of scams;
3. a political, social and cultural project.
The surveillance software and the family of scams serve the oligarchic, technological, financial and military network in carrying out a project that includes automation (the substitution of capital for labor), dehumanization, the elimination of democracies, of the welfare state and of fundamental rights, and a further upward transfer of resources.
For this project, apparently malfunctioning systems, such as predictive optimization systems, in fact work very well, provided one keeps in mind that their function is to punish the poor and to automate the status quo.
This project could be called Epstein tech, given its authoritarian, supremacist, colonial, eugenicist, racist, misogynist, pedophile and extractive approach, but I understand that, for marketing purposes, "artificial intelligence" sounds better. Of this project, I would not know what to save or redirect.
I believe that demystifying it is not politically useless. Nor is it a new question: it is the problem Kant addressed in 1784 in his essay on enlightenment, <https://btfp.sp.unipi.it/dida/kant_7/ar01s04.xhtml>.
Warm regards, Daniela
________________________________________ From: nexa <nexa-bounces@server-nexa.polito.it> on behalf of J.C. DE MARTIN via nexa <nexa@server-nexa.polito.it> Sent: Sunday, 15 March 2026 08:55 To: Nexa Cc: J.C. DE MARTIN Subject: Re: [nexa] Amazon is determined to use AI for everything – even when it slows down work
It is truly fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discourses, without exception, is plain: AI is treated exactly as if it were "nature". "AI", in short, just happens, "AI" occurs... and, as it occurs, workers "unfortunately" find themselves out on the street. Everything is offloaded onto the workers, who can only endure it or, if they are young, must rack their brains trying to figure out (with a crystal ball?) which professions will be, X years from now, least at risk of technological unemployment... (everyone a plumber, a bike courier, a bricklayer?). All of it ideologically justified by those who rush in to explain that this is how it should be, that it is "efficient", that in the long run it is best for everyone, ignoring that what happens in the long run is determined by power relations, not by the good fairy.
In other words, in the Europe and the US of the 21st century (that of Thiel, Musk, Merz, BlackRock, etc.), the right is postulated to deploy any technological innovation entirely regardless of its social consequences. In other words, only one right exists, and an absolute one at that: the right to make profits (here and now). Nothing else counts. Families go bankrupt? Entire cities or social classes find themselves without a livelihood from one day to the next? Those people's problem, or, at best, a problem to be left to that same welfare state that the very same Thiel, Musk, Merz, etc. would like to dismantle, logically leaving as the only future option the physical elimination of the many human beings who are no longer useful to their lordships (or, in the more "humane" version: let the useless subhumans go on living on some form of basic income, shut in their little rooms with drugs and a virtual-reality headset, so they cause no trouble).
It is urgent to develop a radically alternative political proposal: a vision of the world in which AI (and, in general, any technology) must serve to improve the life of the community, rather than being almost exclusively a tool in the hands of a few to accumulate even more capital and power at the expense of a great many defenseless people. (And let us not even speak of AI in the service of systematic violence...)
Juan Carlos
On 12/03/26 14:29, Daniela Tafani wrote:
Amazon is determined to use AI for everything – even when it slows down work
Yes, I permanently adopted your expression as soon as you coined it. The point is that technology makes those who already hold power more powerful, since they can control its development trajectories and applications. And those who hold power in our part of the world decide, for example, that an extruder of probable text strings is perfectly adequate for deciding whom to kill. If what you have in mind is a genocide, it can indeed look like a fast tool, and a usefully deresponsibilizing one. Your final question remains valid, of course. I was only observing that the current system, late-capitalist, with feudal traits and open gangsterism (that of an empire in accelerating decline), is using technology true to form. Have a good Sunday, Daniela

________________________________________ From: J.C. De Martin <demartin@polito.it> Sent: Sunday, 15 March 2026 15:36 To: Daniela Tafani; Nexa Subject: Re: R: [nexa] Amazon is determined to use AI for everything – even when it slows down work
MacLachlan said Amazon provides different training and resources for people across the company, including structured options. “Employees are encouraged to use the tools themselves as a learning mechanism, adopting a learn-as-you-work approach that is proving to be one of the most practical and effective methods of AI adoption across the company,” she said.
An AI-fueled shift to surveillance

Along with the productivity challenges that have come with Amazon’s AI push, workers said it’s also making them feel surveilled.
For years, each morning when Amazon employees logged in to work, an internal system called Amazon Connections would greet them with a message and ask for feedback on topics like how their teams were functioning, or how satisfied they felt with their work. Over the last year, these questions have increasingly centered less on human factors and more on AI.
Maria, a former product manager who was laid off from Amazon in January, said questions asking her about her career or team shifted to more often focus on AI: “‘Are you using AI in your daily work?,’ ‘How often are you using it?,’ ‘Do you think that you’re a power user?,’ or ‘Is AI a priority in your organization?’”.
Then there are more obvious indicators of surveillance. Workers said managers at Amazon have a dashboard where they track their team members’ AI use, including if they’re using certain tools and how often they do so. (The Information first reported this in February.)
Jack, the software developer who’s worked at Amazon for more than a decade, said the company also launched a different dashboard, which the Guardian has viewed, so teams could see their generative AI adoption, engagement and depth of usage. “Every team treats it differently,” he said, with some managers using it with a goal of getting at least 80% of their team using AI tools weekly.
Sarah said her team’s principal engineer told her and his other reports he checks this dashboard daily. “He’s really been pushing our AI usage,” she said.
“Of course we want to understand what tools our teams are using and whether those tools are working well for them or could be improved,” said MacLachlan.
The inevitable result of AI tools getting deployed at scale is surveillance, according to Nick Srnicek, author of Platform Capitalism and a senior lecturer in digital economy at King’s College London. “The rushed deployment of AI means an uncritical expansion of surveillance since these tools increasingly require detailed knowledge of personal workflows and data,” he said. “To make them more capable means giving management greater insight and control over workers’ everyday activities.”
Workers also said they suspect their career advancement is increasingly dependent on their enthusiastic embrace of AI.
“We have promotion documents which have a template with questions like, ‘What has this person done?’, ‘What impact did it have?’ – and now it also has a question asking, ‘How [did] they leverage AI?’,” said Lisa. “I think they want to only keep the people who support this investment [in AI] and are going to try and filter out people who do not support it or have concerns about it.” The Wall Street Journal reported in late February that at Amazon, “managers do consider who is all-in on AI when it comes to promotions”.
“While we expect employees to use resources – including AI – to make work more engaging and improve customers’ lives, we don’t instruct managers to consider AI utilization as part of our evaluation process,” said MacLachlan. “Instead, we focus on AI adoption and sharing best practices to celebrate innovation and operational efficiency gains across the company.”
At the same time, Andy Jassy, Amazon CEO, hasn’t been shy about his AI expectations for his employees. In a company-wide email last June, he predicted that AI-driven productivity gains would reduce the company’s corporate workforce, and urged workers to embrace AI. “Educate yourself, attend workshops and take trainings, use and experiment with AI whenever you can, participate in your team’s brainstorms to figure out how to invent for our customers more quickly and expansively, and how to get more done with scrappier teams,” he wrote.
The unspoken math

That same company-wide email prompted heavy internal pushback at Amazon last summer, with employees slamming Jassy’s leadership and speaking of the demoralizing impact of the company’s AI push, according to Business Insider. Months later, over 1,000 workers signed a petition that raised concerns about the company’s “aggressive rollout” of AI tools.
As Amazon has laid off thousands of workers, it’s shared growing revenue numbers each quarter. Though Jassy has repeatedly said that these layoffs are neither “financially-driven” nor AI-driven, for Maria, all of this adds up.
“If you say you automated away two hours of someone’s job, you need to convert that into savings on that job title,” she said, explaining the company’s logic behind cutting jobs. “That’s the unspoken math of what they’re doing.”
Jack keeps thinking about comments Jassy made during a companywide all-hands meeting last spring. According to a Business Insider report about this meeting, Jassy responded to a question about running Amazon as “the world’s largest startup”, and said they want to be “scrappy” to “do a lot more things”. He also warned that their competitors are the “most technically able, most hungry” companies, including startups “working seven days a week, 15 hours a day”.
“All of those things put together was an implicit threat that the people remaining at the company are expected to work longer and harder,” said Jack. It “really struck home to me that if [Amazon] can’t amass profits with endless growth, then it can get a little bit more by squeezing it out of the people working for it”.
<https://www.theguardian.com/technology/ng-interactive/2026/mar/11/amazon-art...>
This project could be called Epstein tech - given its authoritarian, supremacist, colonial, eugenicist, racist, misogynist, pedophile and extractive approach - but I understand that, for marketing purposes, "artificial intelligence" sounds better. Of this project, I wouldn't know what to salvage or redirect.
Finally even users are getting it, starting with Windows users, who en masse (at least those who do serious work on their PCs) are asking to be able to disable Copilot. Microsoft's CEO has had to step in more than once [1][2] to defend the choice of moving toward an inseparable integration between the OS and AI. Software for removing AI from Windows is multiplying (ironically, on the Microsoft-owned GitHub), with tens of thousands of "stars" [3][4]. Will 2026 be the year the AI bubble bursts? It's too early to say, but expect an all-in "bet everything" from the Epstein tech guys. Politics? Media? Public administration? Civil society? Nowhere to be seen. "AI? Ah, that thing that lets me do video morphing and infographics in five minutes, nice, very nice, but the world's problems lie elsewhere, gentlemen." A. [1] https://www.techradar.com/ai-platforms-assistants/microsoft-ceo-urges-ai-dev... [2] https://techcrunch.com/2026/01/05/microsofts-nadella-wants-us-to-stop-thinki... [3] https://github.com/zoicware/RemoveWindowsAI [4] https://github.com/builtbybel/Winslop
Thanks, Antonio,
From: nexa <nexa-bounces@server-nexa.polito.it> on behalf of antonio <antonio@piumarossa.it>
Will 2026 be the year the AI bubble bursts? It's too early to say, but expect an all-in "bet everything" from the Epstein tech guys. Politics? Media? Public administration? Civil society? Nowhere to be seen.
"Nowhere to be seen" would already be something. The problem is that they are often blackmailed, captured, co-opted, rewarded with a role or a guerdon which, within a feudal hierarchy, is more than enough for them not to understand what they are paid not to understand. An integral part of this system are the universities that practice closed science, the professors who turn themselves into henchmen* and the journals that guarantee conformity. For what a personal anecdote may be worth, my article on ethics as window dressing** would have been accepted by the class-A journal to which I initially submitted it, and in which I had already published, only if - a request communicated to me verbally by a member of the editorial board - I had subjected it to a substantial revision that would also have highlighted the positive aspects (I withdrew it, and it was later published in the Bollettino telematico di filosofia politica). It is the ordeal of the pros and cons (Weizenbaum already denounced it), characteristic of the exclusively instrumental rationality of psychopaths and of neoliberal regimes. Warm regards, Daniela * https://zenodo.org/records/18164570 ** https://zenodo.org/records/7799775
On 2026-03-15, Daniela Tafani wrote: [...]
Surveillance software and the family of scams serve the oligarchic, technological, financial and military network in carrying out a project that includes automation (substituting capital for labor), dehumanization, the elimination of democracies, of the welfare state and of fundamental rights, and a further shift of resources upward.
[...]
This project could be called Epstein tech - given its authoritarian, supremacist, colonial, eugenicist, racist, misogynist, pedophile and extractive approach - but I understand that, for marketing purposes, "artificial intelligence" sounds better.
fifty minutes of applause, brava! [...] -- 380° (lost in /traslation/) «Welcome to the chaos of the times If you go left and I go right Pray we make it out alive This is Karmageddon»
Good evening, 380°
fifty minutes of applause, brava!
Actually, the list is incomplete. I had forgotten the meritocratic approach - obviously serving to legitimize and beatify the elite - and the adherence to behavioral economics, with its gentle nudges. I realized it while reading <https://newlinesmag.com/argument/the-academic-justification-for-male-suprema...> Warm regards, Daniela
On 15/03/26 08:55, J.C. DE MARTIN via nexa wrote:
It's truly fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discourses, without exception, is evident: AI is treated exactly as if it were "nature". "AI", in short, just happens, "AI" occurs... and, as it occurs, "unfortunately" workers find themselves out on the street.
this is very often the narrative used to justify corporate restructurings, to avoid being beaten up by the markets.

i amuse myself digging behind the news of "layoffs caused by AI".

the small cybersecurity firm is in crisis and lays off a graphic designer? it's AI's fault

the small American multinational, grown in revenue (and even more in losses) through acquisitions after a PE fund came on board, closes its Italian office? it's AI's fault (even though the 4 founders already jumped ship a few months ago)

the elephantine company lays people off under board pressure? it's because now there's AI (even though the CEO's previous company, under new management, laid off 3/4 of its staff pre-AI, with no break in service)

the listed company lays off hundreds of people before the quarterly report? it's because "we'll do it with AI". only to rehire after the quarterly "because we want the human touch". Meanwhile, in the quarterly report, those personnel costs have slid below EBITDA as restructuring charges...

in short, I think we're already doing something useful by denouncing the use of AI as an excuse... ciao, s. -- You can reach me on Signal: @quinta.01 (no Whatsapp, no Telegram)
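The quarterly-report trick described above (personnel costs sliding below the EBITDA line as restructuring charges) can be illustrated with a toy income statement. All numbers are hypothetical and the model is deliberately simplified; real reporting conventions vary by jurisdiction:

```python
# Toy illustration (hypothetical numbers): booking layoff costs as a
# one-off "restructuring charge" below the EBITDA line flatters EBITDA,
# even though the total cash outflow for the quarter is unchanged.

def ebitda(revenue: float, operating_costs: float) -> float:
    """Simplified EBITDA: revenue minus recurring operating costs."""
    return revenue - operating_costs

revenue = 100.0
operating_costs = 80.0   # includes 10.0 of personnel about to be cut
severance = 10.0

# Before: personnel costs sit inside recurring operating costs.
ebitda_before = ebitda(revenue, operating_costs)             # 20.0

# After: the same 10.0 leaves opex and reappears as a one-off
# restructuring charge, conventionally reported below EBITDA.
ebitda_after = ebitda(revenue, operating_costs - severance)  # 30.0

# Result after subtracting the charge: the bottom line is unchanged.
net_after = ebitda_after - severance                         # 20.0

print(ebitda_before, ebitda_after, net_after)
```

Headline EBITDA jumps from 20 to 30 while the overall picture is unchanged, which is why the reclassification is attractive just before a quarterly report.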
Thanks for this authoritative confirmation of what I strongly suspected... Ciao, JC
On the subject, allow me to point out: https://antonioaloisi.substack.com/p/why-ais-flop-is-a-crisis-of-imagination
A very interesting article. "That’s why technologies end up being used to routinise practices, extract value, expand reporting duties and cultivate a climate of suspicion. A crisis of creativity, in other words, that kills AI’s potential through a spiral of bureaucratic busywork, eroding morale and engagement along the way" I reply as a deep connoisseur of bureaucracy, which I have always fought and which, until about twenty years ago, I had deluded myself, in my small way, into thinking I had eradicated, thanks in part to a creative use of technology. A Pyrrhic victory. In many environments creativity is dangerous; it is a grain of sand in the gears of power. When something works (I speak of public administration, but the point could be extended), it is enough to create out of nothing a new, modern Central Directorate (remember Bennato: "I'll never manage to become Director General of the post office or the railways"), put a manager in charge, perhaps an external one, perhaps one from the private sector because that looks cool, who in no time silences the "creatives", holds up SECURITY (IT and otherwise) as an alibi, and never mind productivity, services to the community, the well-being of those who work, etc. Files go from paper to digital but take even more time; the climate of suspicion, facilitated by "easy" tracking, grows out of all proportion, just as reporting obligations multiply (after all, you just have to fill in a form). The result, as you wrote, is morale down and the end of engagement. A.
Ciao Quinta - maybe you don't know it, but what you're saying in reply to the previous messages (…) is the same thing that "good old" Saint Altman has been saying these days; I quote: “Data centers are getting blamed for electricity price hikes. Almost every company that does layoffs is blaming AI, whether or not it really is about AI.” Then again, even Altman has to say something right once in a while! ;)
Hello Claudia Giulia,
The managerial tactics Stefano describes are well known, so much so that they have earned a name: "AI-washing" [1]. However, Altman has a more urgent problem: if OpenAI's stock listing [2] were a flop (as some predict in light of, for example, Nvidia's recent investments, scaled back relative to the announcements), the whole AI bubble could burst with devastating effects on the US economy. How to avoid that? It won't be easy, but it is certainly important that the public narrative around his product be tied to "magnificent and progressive destinies", not to negative externalities such as layoffs and rising energy costs - and perhaps the wars needed to secure that energy. Giacomo [1] https://www.bloomberg.com/opinion/articles/2026-03-13/the-ai-washing-of-job-... [2] https://www.reuters.com/business/openai-lays-groundwork-juggernaut-ipo-up-1-...
On 2026-03-15, J.C. DE MARTIN via nexa wrote: [...]
It is urgent to develop a radically alternative political proposal.
it is extremely urgent to implement (as of yesterday) a radically new financial system, one that makes the current system unsustainable - a system based on predatory practices, institutionalized larceny (including an immeasurable black budget), artificial scarcity, currency manipulation, value assigned on the basis of narrative rather than of "assets" anchored to reality, the debt trap, and other techniques widely used to _extract_ value from the masses and funnel it toward (relatively) few /chosen ones/

it is by now ever more evident that the _system_ through which these unsustainable imbalances of power (of _agency_) are perpetuated - at both the macro-economic and the micro-economic level [2] - is the financial one, with its attached (and symbolically indispensable) _religious_ corollary of dogmas, rites, beliefs and practices... ring Epstein & Co.'s doorbell

technology, computing in particular, is _only_ a tool that can be used for the common good or for the privilege of the few... it depends on _who_ designs it - deciding how the tool must work - and on who implements it - deciding how to _configure_ it; if we could all take part in the design and implementation of technology, especially financial technology, power would be much better /distributed/

...imagine if the world financial system were run by a system /like/ GNU Taler https://www.taler.net/en/index.html where movements of currency (value) are continuously verifiable, inspectable and justified (tied to a transaction): gradually but inexorably the "little tricks" used by the few to plunder the rest of humanity would become unsustainable, _simply_ because /exposed/ (that is, it would finally be possible to do /debugging/, in the original and literal sense of the term).

so, the political proposal is _technical_... or, if you like, technically political :-D [...]
warm regards, 380° [1] entire nations under blackmail, sanctioned, indebted for generations to come, invited to /compliance/ in order to participate in the system and be authorized to "gather the crumbs" [2] from categories/groups of workers down to individuals, at times forced into semi-slavery conditions just to survive -- 380° (lost in /traslation/) «Welcome to the chaos of the times If you go left and I go right Pray we make it out alive This is Karmageddon»
I fully agree, but I admit I don't know how or where to begin. Cosmo Carabellese.
-- Cosmo Carabellese
Good morning Cosmo, On 2026-03-16, Cosmo Carabellese wrote:
I fully agree, but I admit I don't know how or where to begin.
it's demanding, but after a quick introduction: https://www.taler.net/en/principles.html the only way I've found to understand the characteristics of a payment system of this kind is to study a few papers from the bibliography: https://www.taler.net/en/bibliography.html ...perhaps starting with this article: «Enabling Secure Web Payments with GNU Taler» https://www.taler.net/papers/taler2016space.pdf In any case, I consider it essential that the "traceable ledger" used be uniform and unique throughout the world: it cannot be that "the little people" use the "traceable ledger" (like Bitcoin or GNU Taler) while the central banks and their controllers and friends of friends use easily manipulated off-ledger systems. Happy hacking! 380° [...] -- 380° (lost in /traslation/) «Welcome to the chaos of the times If you go left and I go right Pray we make it out alive This is Karmageddon»
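As a loose illustration of the "verifiable, inspectable, justified" ledger property discussed in this thread, here is a minimal toy append-only ledger with a hash chain. This is emphatically not GNU Taler's actual protocol (Taler uses blind signatures and by design does not expose payer identities); it is only a sketch of tamper-evident bookkeeping in which every movement of value is justified by an explicit transaction record:

```python
import hashlib
import json

class Ledger:
    """Toy append-only ledger: each entry records who paid whom, how
    much, and why, and is chained to the previous entry's hash so that
    any later tampering or deletion is detectable."""

    def __init__(self):
        self.entries = []  # list of (record dict, chained hash)

    def _digest(self, prev_hash, record):
        # Canonical serialization (sort_keys) so the hash is stable.
        payload = prev_hash + json.dumps(record, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, payer, payee, amount, purpose):
        record = {"payer": payer, "payee": payee,
                  "amount": amount, "purpose": purpose}
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((record, self._digest(prev, record)))

    def verify(self):
        # Recompute the whole chain; any altered entry breaks it.
        prev = "genesis"
        for record, stored in self.entries:
            if self._digest(prev, record) != stored:
                return False
            prev = stored
        return True
```

Appending entries and calling `verify()` returns True; changing any recorded amount after the fact makes `verify()` return False, which is the "exposed by design" property the message above alludes to.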
Hello. Between 2008 and 2013 I tried -- "rather in vain" -- to publicize this project: http://ybnd.eu/docs/Mat_fibra.pdf Beyond the technical aspects, it had a political-cultural relevance, in trying -- in vain, as I said -- to wake people up, get their backsides off their little chairs, and take part in a joint initiative. AFTER WHICH "having reclaimed the power to communicate" could have been turned into an offer of "citizen-to-citizens" services outside the usual centralized schemes (anything that would not have required large "physical" investments could have fallen within its scope, from telecommunications to finance -- which is why I mention it here -- to the control of logistics).
Apologies. The previous message went out before I had finished... This second part is to say that there has been a barrage about banks and money, inside and outside social media, that I would not hesitate to call "demented". Money (and banking), for those who understand them, are not as "perverse" as they are painted... And certain "solutions" are completely senseless. The banking and financial system has its flaws, like all systems within which exploitation and greed end up operating, but the problem does not necessarily lie in rules or practices that in fact make sense (exploitation and greed are more likely to lie elsewhere, e.g. in cryptocurrencies: Bitcoin, for instance, was a chain-letter scam foretold).
participants (14)
- 380°
- abregni
- Alberto Cammozzo
- antonio
- Antonio Aloisi
- Claudia Giulia Ferrauto
- Cosmo Carabellese
- Daniela Tafani
- Giacomo Tesio
- Guido Vetere
- J.C. DE MARTIN
- J.C. De Martin
- Marco Ricolfi
- Stefano Quintarelli