100,000 false positives for every real terrorist: Why anti-terror
algorithms don’t work
by Timme Bisgaard Munk
First Monday, Volume 22, Number 9 - 4 September 2017
http://journals.uic.edu/ojs/index.php/fm/article/view/7126/6522
doi: http://dx.doi.org/10.5210/fm.v22i9.7126
Abstract
Can terrorist attacks be predicted and prevented using
classification algorithms? Can predictive analytics see the hidden
patterns and data tracks in the planning of terrorist acts?
According to a number of IT firms that now offer programs to predict
terrorism using predictive analytics, the answer is yes. According
to scientific and application-oriented literature, however, these
programs raise a number of practical, statistical and recursive
problems. In a literature review and discussion, this paper examines
specific problems involved in predicting terrorism. The problems
include the opportunity cost of false positives/false negatives, the
statistical quality of the prediction and the self-reinforcing,
corrupting recursive effects of predictive analytics, since the
method lacks an inner meta-model for its own learning- and
pattern-dependent adaptation. The conclusion is that algorithms do not
work for detecting terrorism: the approach is ineffective, risky and
inappropriate, producing potentially 100,000 false positives for every
real terrorist the algorithm finds.
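The 100,000-to-1 ratio in the conclusion follows from base-rate arithmetic. A minimal sketch, using hypothetical, illustrative numbers (the population size and error rates below are assumptions for the example, not figures from the paper):

```python
# Base-rate illustration with assumed numbers: 1 real terrorist among
# 10,000,000 monitored people, a classifier with a 1% false-positive
# rate, and the optimistic assumption that it always flags the terrorist.
population = 10_000_000
terrorists = 1
false_positive_rate = 0.01   # assumed; real systems do not disclose this
true_positive_rate = 1.0     # optimistic assumption

innocents = population - terrorists
false_positives = false_positive_rate * innocents   # roughly 100,000 people
true_positives = true_positive_rate * terrorists    # 1 person

# Probability that a flagged person is actually a terrorist (precision)
precision = true_positives / (true_positives + false_positives)

print(round(false_positives))  # on the order of 100,000 false alarms
print(precision)               # far below one in ten thousand
```

Even with a false-positive rate as low as one percent, the rarity of the event means nearly every person flagged is innocent, which is the statistical core of the paper's argument.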