Facebook and Falsehood
By Henry Farrell
January 15, 2017
After the election, many people blamed Facebook for spreading
partisan — and largely pro-Trump — "fake news," like Pope Francis’s
endorsement of Trump, or Hillary Clinton’s secret life-threatening
illness. The company was assailed for prioritizing user
"engagement," meaning that its algorithms probably favored juicy
fake news over other kinds of stories. Those algorithms had taken on
greater prominence since August, when Facebook fired its small team
of human beings who curated its "trending" news section, following
conservative complaints that it was biased against the right.
Initially, Facebook denied that fake news could have seriously
affected the election. But recently it announced that it was taking
action. The social-media giant said it would work with fact-checking
organizations such as Snopes and PolitiFact to identify problematic
news stories and flag them as disputed, so that people know that
they are questionable. It will also penalize suspect stories so that
they are less likely to appear in people’s news feeds.
In each instance — the decision to remove human editors in August
and the recent decision to use independent fact-checkers — Facebook
has said that it cannot be an arbiter of truth. It wants to portray
itself as a simple service that allows people and businesses to
network and communicate, imposing only minimal controls over what
they actually say to one another. This means that it has to
outsource its judgments on truth — either by relying on "machine
learning" or other technical approaches that might identify false
information, or by turning to users and outside authorities.
[…]
Continues here:
http://www.chronicle.com/article/FacebookFalsehood/238867