Who or what is to blame when you’re harmed by an algorithm
I don't think I've seen this come through on the list, and I believe it resonates with some discussions I've read here: https://go.technologyreview.com/who-or-what-is-to-blame-when-youre-harmed-by...

«A Hong Kong tycoon lost more than $20 million after entrusting part of his fortune to an automated platform. Without a legal framework to sue the technology, he placed the blame on the nearest human instead: the man who sold it to him. It’s the first known case over automated investment losses, but not the first involving the liability of algorithms. In March 2018, a self-driving Uber struck and killed <https://www.theverge.com/2018/3/28/17174636/uber-self-driving-crash-fatal-arizona-update> a pedestrian in Tempe, Arizona, sending another case to court. A year later, Uber was exonerated <https://qz.com/1566048/uber-not-criminally-liable-in-tempe-self-driving-car-death/> from all criminal liability, but the safety driver could face charges of vehicular manslaughter instead.

[…]

In other case studies, Elish found the same pattern to hold true: even in a highly automated system where humans have limited control over its behavior, they still bear the brunt of its failures. Elish calls this phenomenon a “moral crumple zone.” “While the crumple zone in a car is meant to protect the human driver,” she writes in her paper, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.” Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents no matter how little or how unintentionally they are involved.

This pattern offers important insight into the troubling way we speak about the liability of modern AI systems. In the immediate aftermath of the Uber accident, headlines pointed fingers at Uber, but within a few days the narrative shifted <https://www.theverge.com/2018/3/28/17174636/uber-self-driving-crash-fatal-arizona-update> to focus on the distraction of the driver.
“We need to start asking who bears the risk of [tech companies’] technological experiments,” says Elish. Safety drivers and other human operators often have little power or influence over the design of the technology platforms they interface with. Yet, in the current regulatory vacuum, they will continue to pay the steepest cost.»
not the first involving the liability of algorithms. In March of 2018
I recall that Prof. Maiocchi (computer science at Unimi), back when I was at university, had a book with N examples of liability of algorithms (that is, of their producers/vendors).

ciao, s.

On 24/05/2019 17:05, Enrico Poli wrote:
[…]
_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
--
reserve your meeting with me at http://cal.quintarelli.it
participants (2)
- Enrico Poli
- Stefano Quintarelli