On 5/4/25 14:56, Daniela Tafani wrote:
AI as Normal Technology An alternative to the vision of AI as a potential superintelligence By Arvind Narayanan and Sayash Kapoor April 15, 2025
We articulate a vision of artificial intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in our conception. But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.
The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.
We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future. The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory.
In Part I, we explain why we think that transformative economic and societal impacts will be slow (on the timescale of decades), making a critical distinction between AI methods, AI applications, and AI adoption, arguing that the three happen at different timescales.
In Part II, we discuss a potential division of labor between humans and AI in a world with advanced AI (but not “superintelligent” AI, which we view as incoherent as usually conceptualized). In this world, control is primarily in the hands of people and organizations; indeed, a greater and greater proportion of what people do in their jobs is AI control.
In Part III, we examine the implications of AI as normal technology for AI risks. We analyze accidents, arms races, misuse, and misalignment, and argue that viewing AI as normal technology leads to fundamentally different conclusions about mitigations compared to viewing AI as being humanlike. Of course, we cannot be certain of our predictions, but we aim to describe what we view as the median outcome. We have not tried to quantify probabilities, but we have tried to make predictions that can tell us whether or not AI is behaving like normal technology.
In Part IV, we discuss the implications for AI policy. We advocate for reducing uncertainty as a first-rate policy goal and resilience as the overarching approach to catastrophic risks.
We argue that drastic interventions premised on the difficulty of controlling superintelligent AI will, in fact, make things much worse if AI turns out to be normal technology—the downsides of which will be likely to mirror those of previous technologies that are deployed in capitalistic societies, such as inequality.

<https://www.aisnakeoil.com/p/ai-as-normal-technology>
-- Italo Vignoli -italo@vignoli.org mobile/signal/telegram +39.348.5653829
Apologies, Thunderbird suddenly went haywire and replied to this list instead of to the LibreOffice support request. On 5/4/25 21:03, Italo Vignoli wrote:
Unfortunately, in today's work environment - where many people are at home and rely on network-based services not only for work but also for entertainment (Netflix and the like) - the infrastructure is under stress and the download may fail several times.
When you click the yellow DOWNLOAD button, the download starts in the background without any visible activity on your screen. We suggest you open the DOWNLOAD page in your browser to check the status. Please do not close the browser until you are sure the download is complete.
If the download does not start, please try a different server. To change servers, click on the INFO link just below the yellow DOWNLOAD button and select a server from the list that appears on the page that opens when you click on the INFO link.
Please note that you can retry the download as many times as you like, as long as you do not close the browser window before the download is complete. You can ignore the donation request, as donations are optional and help us keep the project alive.
If all goes well, when the download is complete, a window will open asking you to choose whether to run the installer or save the file to your hard drive. Please choose the second option and save the file to your DOWNLOADS folder.
Then open the DOWNLOADS folder and double-click on the file (the name starts with LibreOffice and ends with .MSI on Windows and .DMG on macOS). Follow the on-screen instructions carefully until the end of the installation process.
If everything goes well, LibreOffice will be installed correctly and will appear in your menu (on Windows) or in your Applications folder (on macOS).
Please do not use torrents as they are not reliable (as of today).
Sorry for the long message. I hope this helps. Best regards.
Thanks for the pointer. The linked paper is 52 pages long, so I won't venture any comments, although the first pages I have read strike me as sensible and on target. In any case, the issue the authors raise seems relevant to me:

The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory.

Obviously, someone who has never seen a hammer might decide to see in it a representation of the divine, but we know it is for driving nails; and for driving nails it is very useful to all those who previously used stones picked up from the river, often smashing their fingernails or bending the nails in the process.
Maurizio
------------------------------------------------------------------------ if we spent on prosecuting the rich just one-tenth of the effort we spend prosecuting the poor it would be a very different world Bruce Schneier ------------------------------------------------------------------------ Maurizio Lana Università del Piemonte Orientale Dipartimento di Studi Umanistici Piazza Roma 36 - 13100 Vercelli
These would almost seem to be self-evident theses, but... not after a quick scroll through the last 100 posts on LinkedIn. Thanks for sharing. Stefano Sent with Proton Mail secure email. On Tuesday, May 6, 2025, at 15:58, Daniela Tafani <daniela.tafani@unipi.it> wrote:
Hi Maurizio,
they take up several theses that they have already argued elsewhere:
- in the transition from research to commerce, that is, from a discovery or an invention to product development, the distinction between capability and reliability is decisive: a system that correctly books a flight 9 times out of 10 is technically defined as capable of doing so, but nobody would want it as a travel "agent." And it is unlikely that the techniques that get us to 90% correctness will take us all the way to 100%. For products, such as self-driving cars, in which that 10% of errors is costly, real-world experimentation is not acceptable;
- benchmarks do not measure real-world usefulness;
- the idea of superintelligence rests on conceptual and logical errors: intelligence is neither a fixed, stable characteristic nor something homogeneous and measurable on a one-dimensional scale. There is no useful sense of the term "intelligence" in which AI is more intelligent than the people who use it. The real-world situations that require extremely fast computation are limited;
- regulation promotes, rather than hinders, diffusion and innovation; in any case, most of the sectors in which AI will be used are already regulated. Those who claim that AI regulation is lacking are advocating model-centric regulation, which makes no sense;
- human-in-the-loop and model alignment are not solutions;
- if we regard AI as a normal technology, we will keep in mind the most plausible effects of its introduction, rather than the science-fiction ones. For example, the growth of extractive practices and, with them, of inequality.
Best regards, Daniela
________________________________________ Da: nexa nexa-bounces@server-nexa.polito.it per conto di maurizio lana maurizio.lana@uniupo.it
Inviato: martedì 6 maggio 2025 14:24 A: nexa@server-nexa.polito.it Oggetto: Re: [nexa] AI as Normal Technology
grazie della segnalazione. il paper linkato è di 52 pagine quindi non azzardo commenti anche se le prime pagine che ho letto mi paiono sensate e centrate. in ogni caso il tema che gli autori pongono mi pare rilevante: The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory. ovvio che chi non ha mai visto un martello potrebbe decidere di vederci una rappresentazione del divino ma sappiamo che serve per piantare chiodi; e nel piantare chiodi è molto utile per tutti quelli che prima usavano pietre prese al fiume per piantare i chiodi; e così facendo spesso si schiacciavano le unghie o storcevano i chiodi.
Maurizio
Il 04/05/25 14:56, Daniela Tafani ha scritto:
AI as Normal Technology An alternative to the vision of AI as a potential superintelligence By Arvind Narayanan and Sayash Kapoor April 15, 2025
We articulate a vision of artificial intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in our conception. But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.
The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.
We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future. The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory.
In Part I, we explain why we think that transformative economic and societal impacts will be slow (on the timescale of decades), making a critical distinction between AI methods, AI applications, and AI adoption, arguing that the three happen at different timescales.
In Part II, we discuss a potential division of labor between humans and AI in a world with advanced AI (but not “superintelligent” AI, which we view as incoherent as usually conceptualized). In this world, control is primarily in the hands of people and organizations; indeed, a greater and greater proportion of what people do in their jobs is AI control.
In Part III, we examine the implications of AI as normal technology for AI risks. We analyze accidents, arms races, misuse, and misalignment, and argue that viewing AI as normal technology leads to fundamentally different conclusions about mitigations compared to viewing AI as being humanlike. Of course, we cannot be certain of our predictions, but we aim to describe what we view as the median outcome. We have not tried to quantify probabilities, but we have tried to make predictions that can tell us whether or not AI is behaving like normal technology.
In Part IV, we discuss the implications for AI policy. We advocate for reducing uncertainty as a first-rate policy goal and resilience as the overarching approach to catastrophic risks.
We argue that drastic interventions premised on the difficulty of controlling superintelligent AI will, in fact, make things much worse if AI turns out to be normal technology, the downsides of which are likely to mirror those of previous technologies deployed in capitalist societies, such as inequality.
<https://www.aisnakeoil.com/p/ai-as-normal-technology>
________________________________
if we spent on prosecuting the rich just one-tenth of the effort we spend prosecuting the poor it would be a very different world Bruce Schneier
________________________________ Maurizio Lana Università del Piemonte Orientale Dipartimento di Studi Umanistici Piazza Roma 36 - 13100 Vercelli
Hi, Stefano,
_______________________________________ From: nexa <nexa-bounces@server-nexa.polito.it> on behalf of Stefano Borroni Barale <s.barale@erentil.net>
They might almost seem self-evident theses, but... not after a quick scroll through the last 100 posts on LinkedIn.
Make it 101 and look at what comes next as well. If you believe in "artificial intelligence" in the literal sense, you may also want to question whether it is still appropriate to produce "bio babies": As it happens, this alarming attitude toward our extinction-through-replacement-with-AI came up in an interview with the famed computer scientist Jaron Lanier, published this month. "So," the interviewer asked Lanier, "does all the anxiety, including from serious people in the world of AI, about human extinction feel like religious hysteria to you?" Lanier replied: What drives me crazy about this is that this is my world. I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people and that the AI future would be a better one, and that we should wear a disposable temporary container for the birth of AI. I hear that opinion quite a lot. … Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a "bio baby" because as soon as you have a "bio baby," you get the "mind virus" of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it's much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical. <https://www.truthdig.com/articles/the-endgame-of-edgelord-eschatology/> On Tuesday, 6 May 2025 at 15:58, Daniela Tafani <daniela.tafani@unipi.it> wrote:
Hi, Maurizio,
the authors restate several theses they have already argued elsewhere:
- in the move from research to commerce, that is, from a discovery or an invention to product development, the distinction between capability and reliability is decisive: a system that correctly books a flight 9 times out of 10 is, technically, defined as capable of doing so, but nobody would want it as a travel "agent". And it is unlikely that the techniques that get us to 90% correctness will take us all the way to 100%. For products such as self-driving cars, where that 10% of errors is costly, real-world experimentation is not acceptable;
- benchmarks do not measure real-world usefulness;
- the idea of superintelligence rests on conceptual and logical errors: intelligence is neither a fixed, stable trait nor a homogeneous one measurable on a one-dimensional scale. There is no useful sense of the term "intelligence" in which AI is more intelligent than the people who use it. Real-world situations that require extremely fast computation are limited;
- regulation promotes, rather than hinders, diffusion and innovation; in any case, most of the sectors in which AI will be used are already regulated. Those who claim that AI lacks regulation are advocating model-centric regulation, which makes no sense;
- human-in-the-loop and model alignment are not solutions;
- if we regard AI as a normal technology, we will keep in mind the most plausible effects of its introduction, rather than science-fiction ones: for example, the growth of extractive practices and, with them, of inequality.
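The capability/reliability gap in the first point can be made concrete with a back-of-the-envelope sketch (ours, not the authors'): if we assume each step of a multi-step agent task succeeds independently with probability p, the whole task succeeds only with probability p raised to the number of steps, so 90% per-step reliability degrades quickly.

```python
def task_success_rate(p: float, n_steps: int) -> float:
    """Probability that an n-step task succeeds end to end,
    assuming each step succeeds independently with probability p."""
    return p ** n_steps

# With 90% per-step reliability, end-to-end success collapses:
# roughly 59% at 5 steps, 35% at 10 steps, 12% at 20 steps.
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps at 90% per step -> {task_success_rate(0.9, n):.1%}")
```

The independence assumption is a simplification, but it illustrates why a system that is "capable" 9 times out of 10 is far from a dependable travel agent.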
Best regards, Daniela
________________________________________ From: nexa nexa-bounces@server-nexa.polito.it on behalf of maurizio lana maurizio.lana@uniupo.it
Sent: Tuesday, 6 May 2025 14:24 To: nexa@server-nexa.polito.it Subject: Re: [nexa] AI as Normal Technology
Thanks for the pointer. The linked paper is 52 pages long, so I won't venture any comments, even though the first pages I have read strike me as sensible and on target. In any case, the issue the authors raise seems relevant to me: The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory. Obviously, someone who had never seen a hammer might decide to see in it a representation of the divine, but we know that it is for driving nails; and for driving nails it is very useful to all those who previously used stones picked up from the river, and who, in doing so, often smashed their fingernails or bent the nails.
Maurizio
On 04/05/25 at 14:56, Daniela Tafani wrote:
AI as Normal Technology An alternative to the vision of AI as a potential superintelligence By Arvind Narayanan and Sayash Kapoor April 15, 2025
[quoted abstract trimmed; see above]
But let's concede they have a point, Daniela! On 6 May 2025 at 15:34:05 UTC, Daniela Tafani wrote:
Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a “bio baby” because as soon as you have a “bio baby,” you get the “mind virus” of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.
For those who believe in the AI fairy tale, reproducing is immoral. Too bad natural selection is so slow... :-) Giacomo
participants (5)
- Daniela Tafani
- Giacomo Tesio
- Italo Vignoli
- maurizio lana
- Stefano Borroni Barale