<https://www.theguardian.com/commentisfree/2017/dec/24/facebook-google-youtub...>

The basic deal offered by social media companies to their users runs like this: “We give you tools to publish whatever you want, and then we take the revenues that result from that. You get the personal satisfaction and the warm glow that comes from seeing your holiday pictures, your home movies or your cute cats online, and we bank the cash we earn from selling your data-trails and profiles to advertisers.”

It’s the digital world’s equivalent of the old American south’s practice of sharecropping – a form of agriculture in which a landowner allows a tenant to use the land in return for a share of the crops produced on their plot. In the digital version, however, the virtual “landowners” differ in their degrees of generosity. Facebook gives its sharecroppers a zero share of the harvest. YouTube, in contrast, invites them to become amateur broadcasters by uploading films to its site. If it runs ads alongside these epic productions, then it shares some of the proceeds with the sharecroppers. And if said productions attract large numbers of viewers, then this can be a nice little earner.

The sharecropping business model has been a roaring success since 2006 (when Google bought YouTube and Facebook opened its doors to the great unwashed). But in recent times, some difficulties have emerged. First of all, the old adage that nobody ever went broke underestimating the taste of the general public was proved right. Sharecroppers discovered that fake news – ie tasteless, misleading or sensational content – stood a better chance of “going viral” (and earning more) than truthful stuff. And second, it turned out that there are an awful lot of violent, hateful, racist, misogynistic, fundamentalist sharecroppers out there. The internet, it seems, holds up a mirror to human nature, and much that we see reflected in it isn’t pretty.
For a long time, the landowners of cyberspace tried to ignore this problem by inviting users to “flag” inappropriate content, which would then be reviewed at a leisurely pace. But as Isis began to master social media and the political temperature in the west hotted up, the inappropriate-content problem changed from being an irritating cost centre into an existential threat. Major advertisers decided that they didn’t want their ads running alongside beheading videos, for example. And social media executives found themselves being hauled up before Congress, castigated by European politicians and threatened with dire consequences unless they cleaned up their act.

[...]

A few days ago, the first conference to discuss these questions was held in Los Angeles. It was convened by Sarah Roberts, a UCLA professor who has been studying online content moderation for some years, and its speakers included people who had done this kind of work. Their testimony revealed interesting details, such as the rates of pay that contractors get: $0.02 for each image reviewed.

What was more alarming, though, was testimony on the psychological impact that this kind of work can have on those who do it. “When I left MySpace,” one reported, “I didn’t shake hands for, like, three years because I figured out that people were disgusting. I just could not touch people. I was disgusted by humanity when I left there. So many of my peers, same thing. We all left with horrible views of humanity.”

Welcome to the dark underbelly of our networked world. There’s no such thing as a free lunch: online “safety” comes at a price.