Balkin: The Three Laws of Robotics in the Age of Big Data
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2890965

The Three Laws of Robotics in the Age of Big Data
Jack M. Balkin
Yale University - Law School
October 22, 2016
Ohio State Law Journal, Vol. 78 (2017), Forthcoming

Abstract: In his short stories and novels, Isaac Asimov imagined three laws of robotics programmed into every robot. In our world, the "laws of robotics" are the legal and policy principles that should govern how human beings use robots, algorithms, and artificial intelligence agents.

This essay introduces these basic legal principles using four key ideas: (1) the homunculus fallacy; (2) the substitution effect; (3) the concept of information fiduciaries; and (4) the idea of algorithmic nuisance.

The homunculus fallacy is the attribution of human intention and agency to robots and algorithms. It is the false belief that there is a little person inside the robot or program who has good or bad intentions.

The substitution effect refers to the multiple effects on social power and social relations that arise from the fact that robots, AI agents, and algorithms substitute for human beings and operate as special-purpose people.

The most important issues in the law of robotics require us to understand how human beings exercise power over other human beings mediated through new technologies. The "three laws of robotics" for our Algorithmic Society, in other words, should be laws directed at human beings and human organizations, not at robots themselves.

Behind robots, artificial intelligence agents, and algorithms are governments and businesses organized and staffed by human beings. A characteristic feature of the Algorithmic Society is that new technologies permit both public and private organizations to govern large populations. The Algorithmic Society also features significant asymmetries of information, monitoring capacity, and computational power between those who govern others with technology and those who are governed.

With this in mind, we can state three basic "laws of robotics" for the Algorithmic Society:

First, operators of robots, algorithms, and artificial intelligence agents are information fiduciaries who have special duties of good faith and fair dealing toward their end-users, clients, and customers.

Second, privately owned businesses that are not information fiduciaries nevertheless have duties toward the general public.

Third, the central public duty of those who use robots, algorithms, and artificial intelligence agents is not to be algorithmic nuisances. Businesses and organizations may not leverage asymmetries of information, monitoring capacity, and computational power to externalize the costs of their activities onto the general public. The term "algorithmic nuisance" captures the idea that the best analogy for the harms of algorithmic decision making is not intentional discrimination but socially unjustified "pollution" - that is, using computational power to make others pay for the costs of one's activities.

Obligations of transparency, due process, and accountability flow from these three substantive requirements. Transparency - and its cousins, accountability and due process - apply in different ways with respect to all three principles. Transparency and/or accountability may be an obligation of fiduciary relations, may follow from public duties, or may serve as a prophylactic measure designed to prevent unjustified externalization of harms or to provide a remedy for harm.

Number of Pages in PDF File: 22
Keywords: Asimov, robotics, algorithms, artificial intelligence, homunculus fallacy, substitution effect, information fiduciary, algorithmic nuisance
JEL Classification: K10

(Sent from my wireless device; please excuse brevity and typos (if any))
Thank you, Juan Carlos. An important paper. Two points I like very much:

(1) It speaks of the *loyalty* of algorithms. <<Fiduciaries have two central duties. The first is a duty of care. The second is a duty of loyalty. The duty of care means that a fiduciary has to act with reasonable care to avoid harming the client or patient. The duty of loyalty means that the fiduciary has to avoid creating conflicts of interest with their clients or patients and must look out for their interests. The degree of loyalty demanded depends on the nature of the relationship between the fiduciary and the client.>> The current business model of "free" platforms requires monetizing users' information: it is hard to be entirely loyal to them.

(2) It makes explicit the "environmental" aspect of algorithmic pollution: <<the public duty of algorithm operators is not to "pollute," that is, unjustifiably externalize costs onto others.>>

Alberto

On 06/01/2017 21:11, J.C. DE MARTIN wrote:
_______________________________________________ nexa mailing list nexa@server-nexa.polito.it https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
I quickly join in the thanks, above all for reason (1). Loyalty (of platforms and of algorithms) has been discussed a great deal here in France since 2013 (following the work of the CNNum and the Conseil d'État on the subject, which continues to this day - see this text: https://cnnumerique.fr/tribune-loyaute-des-plateformes-daccord-mais-loyaute-...), but the anglophone literature offers few examples. Balkin is one of these rare examples, especially in his article published a couple of years ago in the UC Davis Law Review: https://lawreview.law.ucdavis.edu/issues/49/4/Lecture/49-4_Balkin.pdf

That article is also a good read for another reason, tied to a burning current issue. Some digital intermediaries and fiduciaries (online advertising operators, internet access providers, and telephone companies) have petitioned the FCC asking that their right to track users, to extract personal data in derogation of privacy laws, and to process it for unspecified commercial purposes be protected in the name of the First Amendment of the American Constitution - cf.: "The creation, analysis, and transfer of consumer data for marketing purposes constitutes speech. Non-misleading commercial speech regarding a lawful activity is protected under the First Amendment." http://www.mediapost.com/publications/article/292165/ad-groups-petition-cons...

In his 2015 article, Balkin explained very vividly and clearly that information fiduciaries are subject to specific restrictions on their freedom of expression.
Just as a doctor, for example, may not invoke free speech to disseminate personal information gathered in the course of a consultation in order to gain a personal advantage at the expense of his patients, so platforms may not do whatever they want with their users' data for purposes of personalized offerings and advertising targeting, precisely because they owe a duty of loyalty with respect to the information entrusted to them.

"Commercial speech is a special case. Like contractual speech, it is a form of market behavior. Its social function is to induce people to buy goods and services. But unlike the speech involved in bargaining over contracts, commercial speech actively tries to reshape popular culture in order to sell products and services. It does so by providing information, ideas, images, and opinions to the general public, attempting to reshape people’s self-image, beliefs and desires. Commercial speech, in other words, is market behavior that attempts to alter public culture to facilitate itself. The hybrid nature of commercial speech — as affecting public opinion but not participating in public discourse — shapes the distinctive way that the First Amendment treats it. Constitutional doctrine does not treat advertisers as attempting to participate in public discourse. Rather, it views advertisers as adding information that might be valuable to people who are participating in public discourse. As the Court explained in Central Hudson, “[t]he First Amendment’s concern for commercial speech is based on the informational function of advertising.” Thus, the First Amendment focuses not on the right of the advertiser to speak but on the right of the public to receive the advertiser’s information. (...) How does the idea of an information fiduciary interact with the First Amendment? Does being an information fiduciary affect one’s free speech rights? Yes, it does.
The First Amendment treats information practices by fiduciaries very differently than it treats information practices involving relative strangers. Suppose that you are a gynecologist, and in the course of your practice, you learn all sorts of interesting things about your patients, including the crazy things your patients say to you. You decide to comment on what you have learned by making a work of art, appropriately entitled, “Crazy Stuff My Patients Say.” You create an art installation at a contemporary art museum. It features a rotating cylinder with flashing lights on which are projected pictures of your patients, and quotes of interesting things that patients have said to you in the course of your practice; surrounding the cylinder, encased in plastic, are copies of case histories and medical reports. You present this to the public as a work of art that offers a profound commentary on the state of medicine and the nature of the human body in the twenty-first century. Your patients are understandably annoyed to see themselves featured in your art, and they sue you for malpractice or for breach of a professional (fiduciary) duty. You defend yourself on the grounds that you are an artist, and you point out that your work, like the best contemporary art, is designed to be transgressive. Yet the fact that you are an artist, who makes amazing works of art with your patients’ personal data and medical records, would probably not be a sufficient defense in a tort action for malpractice or breach of professional duty. Why is this? Well, you are using sensitive information to your advantage and to the disadvantage of your patients. Generally speaking, when the law prevents a fiduciary from disclosing or selling information about a client — or using information to a client’s disadvantage — this does not violate the First Amendment, even though the activity would be protected if there were no fiduciary relationship. 
" ----- Mail original ----- De: "Alberto Cammozzo" <ac+nexa@zeromx.net> À: nexa@server-nexa.polito.it Envoyé: Samedi 7 Janvier 2017 16:43:57 Objet: Re: [nexa] Balkin: The Three Laws of Robotics in the Age of Big Data Grazie Juan Carlos, Paper importante. Due punti mi piacciono molto: (1) parla di *lealtà* degli algoritmi. <<Fiduciaries have two central duties. The first is a duty of care. The second is a duty of loyalty. The duty of care means that a fiduciary has to act with reasonable care to avoid harming the client or patient. The duty of loyalty means that the fiduciary has to avoid creating conflicts of interest with their clients or patients and must look out for their interests. The degree of loyalty demanded depends on the nature of the relationship between the fiduciary and the client.>> Il modello di business attuale delle piattaforme "gratuite" impone di monetizzare le informazioni dell'utente: difficile essergli del tutto leale. (2) esplicita l'aspetto "ambientale" dell'inquinamento algoritmico: <<the public duty of algorithm operators is not to “pollute,” that is, unjustifiably externalize costs onto others.>> Alberto On 06/01/2017 21:11, J.C. DE MARTIN wrote:
participants (3)
- Alberto Cammozzo
- Antonio Casilli
- J.C. DE MARTIN