<https://www.theguardian.com/commentisfree/2018/feb/25/artificial-intelligence-going-bad-futuristic-nightmare-real-threat-more-current>
If one were a conspiracy theorist, one might ask if our obsession with a highly speculative future has been deliberately orchestrated to divert attention from the fact – pace Mr Gibson – that lower-level but exceedingly powerful AI is already here and playing an ever-expanding role in shaping our economies, societies and politics. This technology is a combination of machine learning and big data and it’s everywhere, controlled and deployed by a handful of powerful corporations, with occasional walk-on parts assigned to national security agencies.
These corporations regard this version of “weak” AI as the biggest thing since sliced bread. The CEO of Google burbles about “AI everywhere” in his company’s offerings. Same goes for the other digital giants. In the face of this hype onslaught, it takes a certain amount of courage to stand up and ask awkward questions. If this stuff is so powerful, then surely we ought to be looking at how it is being used, asking whether it’s legal, ethical and good for society – and thinking about what will happen when it gets into the hands of people who are even worse than the folks who run the big tech corporations. Because it will.
Fortunately there are scholars who have started to ask these awkward questions. There are, for example, the researchers who work at AI Now, a research institute at New York University focused on the social implications of AI. Their 2017 report makes interesting reading. Last week saw the publication of more in the same vein – a new critique of the technology by 26 experts from six major universities, plus a number of independent thinktanks and NGOs.
Its title – The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation – says it all. The report fills a serious gap in our thinking about this stuff. We’ve heard the hype, corporate and governmental, about the wonderful things AI can supposedly do and we’ve begun to pay attention to the unintentional downsides of legitimate applications of the technology.