Chances are, you'll snag the wrong people, and when you do, how can you tell? How do you clear suspects of crimes that haven't happened?
Pre-crime prevention is a terrible idea.
Here is a quiz for you. Is predicting crime before it happens: (a) something out of Philip K. Dick's Minority Report; (b) the subject of a Department of Homeland Security research project that has recently entered testing; (c) a terrible and dangerous idea that will inevitably be counter-productive, and that will levy a high price in civil liberties while providing little to no marginal security; or (d) all of the above?
If you picked (d), you are a winner!
The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology: crazy, straight-out-of-sci-fi pre-crime detection and prevention software that may come to an airport security screening checkpoint near you someday soon. Yet again the threat of terrorism is being used to justify the introduction of super-creepy invasions of privacy and to lead us one step closer to a turn-key totalitarian state. This may sound alarmist, but in cases like this a little alarm is warranted. FAST will remotely monitor physiological and behavioral cues -- elevated heart rate, eye movement, body temperature, facial patterns, and body language -- and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions. There are several major flaws with a program like this, any one of which should be enough to condemn attempts of this kind to the dustbin. Let's look at them in turn.
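To give a flavor of what flagging "statistical aberrance" means in the simplest possible terms, here is a toy sketch of threshold-based anomaly flagging on a single cue (heart rate). Everything in it -- the function names, the baseline numbers, the threshold -- is hypothetical and chosen for illustration; DHS has not published FAST's actual models, and nothing below reflects them.

```python
# Toy illustration of flagging "statistical aberrance" in one physiological cue.
# Purely hypothetical -- DHS has not published FAST's actual models.
from statistics import mean, stdev

def make_flagger(baseline_readings, z_threshold=3.0):
    """Build a flagger from baseline readings of the general population:
    a new reading is flagged if it falls more than z_threshold standard
    deviations from the baseline mean."""
    mu, sigma = mean(baseline_readings), stdev(baseline_readings)
    return lambda reading: abs(reading - mu) > z_threshold * sigma

# Assumed baseline resting heart rates (bpm) from ordinary travelers.
baseline = [72, 68, 75, 70, 74, 69, 71, 73, 67, 76, 70, 72]
is_aberrant = make_flagger(baseline)

print(is_aberrant(71))    # False -- unremarkable
print(is_aberrant(120))   # True  -- but this person may simply be anxious,
                          #          late for a flight, or unwell
```

Even in this cartoon version, the core problem is visible: an "aberrant" reading is just an unusual one, and unusual people are overwhelmingly not terrorists.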
First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, this turns out to be an incredibly difficult thing to do. Here is why: let's assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. (Terrorists are almost certainly much rarer than that, or we would see a whole lot more acts of terrorism, given the daily throughput of the global transportation system.) Now let's imagine the FAST algorithm correctly classifies 99.99 percent of observations -- an incredibly high rate of accuracy for any big-data-based predictive model. Even at this unbelievable level of accuracy, the system would still falsely accuse roughly 100 people of being terrorists for every one terrorist it finds. And given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty would be a non-trivial and invasive task.
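To make that arithmetic concrete, here is a quick back-of-the-envelope sketch in Python using the illustrative numbers above -- a 1-in-1,000,000 base rate and 99.99 percent accuracy, treated as the probability of classifying any person correctly. These are assumed figures for illustration, not measurements of FAST.

```python
# Back-of-the-envelope false-positive arithmetic for a screening system.
# Assumed, illustrative numbers -- not actual FAST performance figures.

population = 1_000_000        # people screened
base_rate = 1 / 1_000_000     # assumed fraction who are actual terrorists
accuracy = 0.9999             # assumed probability of classifying anyone correctly

terrorists = population * base_rate           # expected actual terrorists: 1
innocents = population - terrorists           # everyone else: 999,999

true_positives = terrorists * accuracy        # terrorists correctly flagged: ~1
false_positives = innocents * (1 - accuracy)  # innocents wrongly flagged: ~100

print(f"Expected true positives:  {true_positives:.2f}")
print(f"Expected false positives: {false_positives:.2f}")
print(f"Share of flagged people who are innocent: "
      f"{false_positives / (false_positives + true_positives):.1%}")
# ~100 innocents flagged for every real terrorist caught: about 99% of all
# flags are false alarms, even at 99.99% accuracy.
```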
Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. It is difficult to determine from the available material exactly what this number means, since both the write-up and the DHS documentation (all pdfs) are unclear, and there are a couple of ways to interpret it. It might mean that the current iteration of FAST correctly classifies 70 percent of the people it observes -- which, given the rarity of terrorists in the population, would produce false positives at an abysmal rate. The other interpretation is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but that rate would likely be quite high as well. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
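For a sense of scale under the first reading -- 70 percent of all observations classified correctly -- the same back-of-the-envelope arithmetic looks far worse. Again, these are assumed illustrative numbers, not published DHS results.

```python
# Same illustrative arithmetic, but with 70% overall accuracy -- the first
# reading of the reported field-test figure. Assumed numbers, not DHS data.

population = 1_000_000
base_rate = 1 / 1_000_000
accuracy = 0.70               # probability of classifying any given person correctly

innocents = population - population * base_rate
false_positives = innocents * (1 - accuracy)

print(f"Expected false positives per million screened: {false_positives:,.0f}")
# On the order of 300,000 innocent people flagged per million travelers,
# to (maybe) catch a single terrorist.
```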
The second major problem with FAST is the experimental methodology being used to develop it. According to a DHS privacy impact assessment of the research, the technology is being tested in a lab setting using volunteer subjects. These volunteer participants are sorted into two groups, one of which is "explicitly instructed to carry out a disruptive act, so that the researchers and the participant (but not the experimental screeners) already know that the participant has malintent." The experimental screeners then use the results from the FAST sensors to try to identify participants with malintent. Presumably this is where that 70 percent number comes from.
The validity of this procedure rests on the assumption that volunteers who have been instructed by researchers to "have malintent" are a reasonable facsimile of real terrorists in the field. This seems like quite a leap. Without actual intent to commit a terrorist act -- something these volunteers necessarily lack -- it is unlikely that test observations will mimic the subtle cues a real terrorist would show. If anything, instructing a volunteer to have malintent makes that intent acceptable within the testing conditions, thereby altering the subtle cues the subject exhibits. Without a legitimate sample exhibiting the actual characteristics being screened for -- a near-impossible proposition for this project -- we should be extremely wary of any claimed results.
The fact is that the world is not perfectly controllable and infallible security is impossible. It will always be possible to imagine incremental gains in security from instituting increasingly invasive and opaque algorithmic screening procedures. What we should be thinking about, however, is the marginal gain in security relative to the marginal cost. A program like FAST is doomed from the word go by a preponderance of false positives. We should ask: in a world where we already pass through full-body scanners, take off our shoes, belts, and coats, and carry only 3.4 oz containers of liquid, is more stringent screening really what we need, and will it make us any safer? Or will it merely brand hundreds of innocent people as potential terrorists and provide the justification of pseudo-scientific algorithmic behavioral screening for greater invasions of their privacy? In this case the cost is likely to be high, and there is little evidence that the gain will be meaningful. In fact, the results may be counter-productive, as TSA and DHS staff are forced to divert their attention to weeding through the pile of falsely flagged people instead of spending their time on time-tested, common-sense screening procedures.
Thinking statistically tells us that any project like FAST is unlikely to overcome the false-positive paradox. Thinking scientifically tells us that it is nearly impossible to get a real, meaningful sample for testing or validating such a screening program -- and as a result we shouldn't trust the sparse findings we have. And thinking about the marginal trade-off we are making tells us the (possible) gain is not worth the cost. Pick your reason: FAST is a bad idea.