MIT TechRev: "How to Hold Algorithms Accountable"
A View from Nicholas Diakopoulos and Sorelle Friedler

*How to Hold Algorithms Accountable*
/Algorithmic systems have a way of making mistakes or leading to undesired consequences. Here are five principles to help technologists deal with that./

November 17, 2016

Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

[…]

Continues here: https://www.technologyreview.com/s/602933/how-to-hold-algorithms-accountable...
The title is wrong, in my opinion, and places a much-needed discussion on the wrong basis. Have we ever discussed the accountability of human artifacts? Haven't we always discussed the accountability of artifacts' designers and producers? "How to hold algorithms' developers accountable" would be the proper way to put it. I think that by anthropomorphizing algorithms we set ourselves on a very slippery slope...

Best, Enrico

On 20/11/2016 14:14, J.C. DE MARTIN wrote:
_______________________________________________ nexa mailing list nexa@server-nexa.polito.it https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
--
EN
=====================================================================
Prof. Enrico Nardelli
Dipartimento di Matematica - Universita' di Roma "Tor Vergata"
Via della Ricerca Scientifica snc - 00133 Roma
tel: +39 06 7259.4204 - fax: +39 06 7259.4699 - mobile: +39 335 590.2331
e-mail: nardelli@mat.uniroma2.it
home page: http://www.mat.uniroma2.it/~nardelli
blog: http://www.ilfattoquotidiano.it/blog/enardelli/
=====================================================================
Totally agreed!

juan carlos

On 20/11/16 15:40, Enrico Nardelli wrote:
Since we are talking about incorrect "framing", I am starting to perceive articles of this kind as part of the Silicon Valley agenda. Every measure intended to make algorithms (or those who develop them) more accountable is, in the end, a way to legitimize them, to place them in a regulated context, to make them acceptable to a world that is beginning to perceive the problems. Yet the proposed solutions always point in the direction of consolidating a monopoly on data, which is needed to train the system, and of consolidating powers that could eventually stop being fair, whenever convenient.

What if the choice were not "algorithms must be responsible | accountable | fair | auditable | accurate", but rather: can I choose the algorithm I want? Can I implement my own algorithm, run it on my own devices, decide which algorithms to use, and modify them if I wish? This would be possible if content providers supplied information neutralized of any ordering (e.g. "chronological order in a machine-readable format") and let the clients decide which algorithm to apply. If content providers use values derived from the mass of data to judge a piece of content, then they can export that same value and let me, the user, decide which algorithm to apply.

The strength of the network is its decentralization: intelligence at the edges of the network, the interpolation of different sources to give you, the user, the final choice that suits you best. This possibility is nothing other than the implementation of your own will. You are responsible for the values you deem necessary to obtain an output. The (sacrosanct) values cited in this article would apply only to those who want to sell an algorithm, or to algorithms used in public contexts. These proposals are welcome, but do not think they are the only salvation for connected humanity.
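The idea above - providers exporting neutral, machine-readable entries (timestamps, raw scores) and clients applying whatever ranking algorithm they choose - can be sketched roughly as follows. Everything here is a hypothetical illustration, not any real provider's API: the `Item` fields and the two example policies are assumptions made up for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Item:
    # Hypothetical machine-readable feed entry a provider could export:
    # a stable id, a publication timestamp, and the provider's own
    # aggregate score exposed as plain data instead of a fixed ordering.
    item_id: str
    published: float        # Unix timestamp
    provider_score: float   # e.g. an engagement value computed upstream

def rank(feed: List[Item], policy: Callable[[Item], float]) -> List[Item]:
    """Order the feed with a user-chosen policy, highest value first."""
    return sorted(feed, key=policy, reverse=True)

# Two user-side policies: pure chronological order, or the provider's
# exported score. The user, not the platform, picks (or writes) the function.
chronological = lambda item: item.published
by_provider_score = lambda item: item.provider_score

feed = [
    Item("a", published=100.0, provider_score=0.2),
    Item("b", published=50.0, provider_score=0.9),
]

assert [i.item_id for i in rank(feed, chronological)] == ["a", "b"]
assert [i.item_id for i in rank(feed, by_provider_score)] == ["b", "a"]
```

The point of the sketch is that once the provider exports the raw values, the ranking policy becomes a replaceable function living on the client, exactly the "intelligence at the edges" the message argues for.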
If anything, it is the only salvation of a centralized market, because these are the compromises that the five of the "partnership for AI" are willing to accept ;)

Warm regards, C

On Sun, Nov 20, 2016 at 12:47 PM, J.C. DE MARTIN <demartin@polito.it> wrote:
--
Claudio Agosti
https://facebook.tracking.exposed - https://twitter.com/_vecna
PGP mail: claudio﹫tracking・exposed - key here: https://keybase.io/vecna
Absolutely agreed

D

Quoting Enrico Nardelli <nardelli@mat.uniroma2.it>:
--
Dott. Diego Latella - Senior Researcher
CNR/ISTI, Via Moruzzi 1, 56124 Pisa, IT (http://www.isti.cnr.it)
FM&&T Laboratory (http://fmt.isti.cnr.it)
http://www.isti.cnr.it/People/D.Latella
phone: +39 0506212982 - mob: +39 348 8283101 - fax: +39 0506212040
===================
The quest for a war-free world has a basic purpose: survival. But if in the process we learn how to achieve it by love rather than by fear, by kindness rather than compulsion; if in the process we learn how to combine the essential with the enjoyable, the expedient with the benevolent, the practical with the beautiful, this will be an extra incentive to embark on this great task. Above all, remember your humanity.
-- Sir Joseph Rotblat
participants (4)
- Claudio Agosti
- Diego Latella
- Enrico Nardelli
- J.C. DE MARTIN