In the traditional rivalry between Oxford and Cambridge, this centre takes up themes that have already been analysed for some years by the institute founded by Nick Bostrom, the Future of Humanity Institute at Oxford University.
http://www.fhi.ox.ac.uk/home

I find it very relevant and interesting that existential risks, associated not only with external events such as the possible impact of a large meteor but also with risks of technological origin, together with the themes of transhumanism, are ever more frequently discussed in academic as well as political and economic circles. The latest World Economic Forum report, Global Risks 2013, contains an entire section on "X Factors", covering cognitive enhancement, the discovery of alien life and other topics that until recently belonged to science fiction.
http://reports.weforum.org/global-risks-2013/section-five/x-factors/

I was very positively struck that at the Natta scientific high school in Bergamo, attended by one of my sons, last year's philosophy course devoted considerable time to rhetoric, the interpretation of scientific communication, and the study of problems in bioethics. This year they will tackle roboethics!
http://nattabg.it/Documenti/news/files/POF2010_2011.pdf
(See the section "Laboratorio della comunicazione scientifica e delle etiche applicate", i.e. "Workshop on scientific communication and applied ethics.")

David Orban
skype, twitter, linkedin, sl, etc: davidorban


On Wed, Jan 30, 2013 at 2:55 PM, J.C. DE MARTIN <demartin@polito.it> wrote:
Interesting reflections on the risks connected with the development of digital technology
(and on the corresponding new cross-disciplinary centre at Cambridge).

juan carlos

Opinionator - A Gathering of Opinion From Around the Web
The Stone | January 27, 2013, 5:00 pm

Cambridge, Cabs and Copenhagen: My Route to Existential Risk

By HUW PRICE

In Copenhagen the summer before last, I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer. No surprise if he’d been the driver, perhaps (never tell a taxi driver that you’re a philosopher!), but this was a man who has spent his career with computers.

Indeed, he’s so talented in that field that he is one of the team who made this century so, well, 21st – who got us talking to one another on video screens, the way we knew we’d be doing in the 21st century, back when I was a boy, half a century ago. For this was Jaan Tallinn, one of the team who gave us Skype. (Since then, taking him to dinner in Trinity College here in Cambridge, I’ve had colleagues queuing up to shake his hand, thanking him for keeping them in touch with distant grandchildren.)

There could be trouble when intelligence escapes the constraints of biology.

I knew of the suggestion that A.I. might be dangerous, of course. I had heard of the “singularity,” or “intelligence explosion”– roughly, the idea, originally due to the statistician I J Good (a Cambridge-trained former colleague of Alan Turing’s), that once machine intelligence reaches a certain point, it could take over its own process of improvement, perhaps exponentially, so that we humans would soon be left far behind. But I’d never met anyone who regarded it as such a pressing cause for concern – let alone anyone with their feet so firmly on the ground in the software business.

[...]

Continues here: http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/




_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa