<https://www.theguardian.com/commentisfree/2019/feb/17/machines-not-our-masters-but-sinister-side-ai-demands-smart-response>

[...]

Last week, OpenAI, a US-based nonprofit organisation, decided that its new AI model, GPT2, is so good at generating articles from just a few words and phrases that the model could not be released until OpenAI better understood how it might be used. GPT2’s database is so sophisticated and its algorithms so smart that it has gone far beyond text prediction to writing readable and plausible text.

The good news is that this means that computers could, for example, speed-read articles on our behalf, summarise them and answer our questions. Those chatbots that come to your aid when you are lost on some website could become genuinely helpful. If you wear a watch that monitors your health, GPT2 could spell out warnings and diagnoses quickly in plain English. Interesting books in foreign languages could be translated more effectively.

The bad news is that in the wrong hands the likes of GPT2 could flood every social media site on a grand scale with fake news or troubling comment. In one test carried out by the US news and technology network the Verge, which was given access to GPT2, a text prompt such as “Jews control the media” led to the following: “They control the universities. They control the world economy. How is this done? Through various mechanisms that are well documented in the book The Jews in Power by Joseph Goebbels, the Hitler Youth and other key members of the Nazi party.”
[...]