What if you could walk through that airport body scanner, pause for the camera, and know that your naked image would never be pored over by human eyes? If it were software, not TSA screeners, that searched you and other passengers for possible explosives?
That's the vision of Transportation Security Administration head John Pistole. At a Senate hearing yesterday, Georgia Republican Johnny Isakson conjured this future and suggested to Pistole, "It looks like technology can be a solution to the privacy issue." Pistole responded, "I think so, I'm very hopeful in that regard."
Earlier in his testimony, he'd remarked, "I see us in an interim period" where the TSA was using best available technology but that target recognition software "clearly addresses the privacy issue in its entirety" and would be available soon.
How soon? "I'd like to say months, but it's all technology driven," Pistole said.
While vendors like L-3 and Rapiscan are actively trying to come up with a magic technological solution for the TSA, independent experts on body scanning technology and automated threat detection aren't nearly as optimistic as the TSA head. Setting aside the question of how much real safety would be afforded by body scanners that use algorithms to detect artfully hidden explosives under someone's clothes (I'll leave it to our big guns to debate that point), there are fundamental problems that may make it very difficult to deploy them.
Here's how they work. First, an image is obtained with an x-ray backscatter or millimeter wave machine like the 385 systems already installed in 70 airports around the country. While the two types of machines have important differences, their basic principles are comparable. The electromagnetic waves (x-rays or radio) used in the machines pass easily through clothing, but bounce back when they encounter human skin (or other denser materials). Those reflections reach the scanner and are transformed into an image of the body sans clothing.
In an automated threat detection system, that image would be fed to an algorithm that would compare it to a database of other images to determine whether it was suspicious. Instead of looking at an image of a person, TSA screeners would see a stick figure indicating the general area where a problem existed. They would then follow up with a pat-down or other screening procedure.
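To make the flow concrete, here is a toy sketch of that pipeline in Python. The scoring function, threshold, and region labels are hypothetical stand-ins invented for illustration; they are not L-3's or Rapiscan's actual algorithms.

```python
# Illustrative sketch of the automated-threat-detection flow described
# above. The anomaly score and threshold are made up for this example.

def score_region(region):
    """Hypothetical anomaly score: mean deviation from an expected skin
    reflectivity (real systems compare against trained models)."""
    expected = 0.5
    return sum(abs(p - expected) for p in region) / len(region)

def screen(image_regions, threshold=0.3):
    """Return a 'stick figure' report: which body regions to flag.

    image_regions maps a body-region label to a list of pixel
    intensities (0..1) from the backscatter/millimeter-wave image.
    The human screener would see only the flagged labels, never
    the image itself.
    """
    return [label for label, region in image_regions.items()
            if score_region(region) > threshold]

scan = {
    "torso": [0.5, 0.52, 0.48, 0.51],      # uniform: looks like skin
    "left_ankle": [0.9, 0.95, 0.1, 0.05],  # dense object under clothing
}
print(screen(scan))  # ['left_ankle'] -> follow up with a pat-down
```

The privacy win is visible in the interface: `screen` returns region labels, not pixels.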
Unfortunately, the technological task of automated threat detection is not trivial. There are inherent problems that make an accurate machine very, very difficult to build.
The most basic problem is that an algorithm is only as good as its training data. These machines are like a massive game of memory: they compare something new with something they've seen before. In order to make accurate determinations, they need a huge library of suspicious and normal images, said the Pacific Northwest National Laboratory's Doug McMakin, who developed the technology on which the L-3 SafeView system is based.
"To see different threats, you really have to scan a lot of people and put objects on different places on the body and use different kinds of threats too," McMakin said.
Of course, we could easily generate a huge database of images from all the people walking through the scanners right this minute, but the privacy problem that would represent makes it impossible. "You can build up this huge database, but because they don't save any of the imagery, you have to go out and get people to build up this database," McMakin said.
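McMakin's "game of memory" amounts to matching a new scan against a labeled library, something like a nearest-neighbor classifier. The sketch below uses invented feature vectors and labels to show why library coverage is everything: a threat unlike anything in the library simply matches "normal."

```python
# Toy nearest-neighbor matcher: classify a new scan by its closest
# match in a labeled library. Features and labels are invented for
# illustration; real systems would need an enormous library of scans,
# which privacy rules make hard to collect.

def nearest_label(sample, library):
    """library: list of (feature_vector, label) pairs."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda item: sq_dist(sample, item[0]))[1]

library = [
    ([0.5, 0.5, 0.5], "normal"),
    ([0.5, 0.9, 0.5], "threat"),  # dense mass on the torso
]
print(nearest_label([0.52, 0.48, 0.5], library))  # "normal"
print(nearest_label([0.5, 0.85, 0.5], library))   # "threat"
```

A cleverly shaped explosive that doesn't resemble the one "threat" entry would fall closest to "normal" — exactly the failure mode McMakin and Rappaport describe.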
Carey Rappaport, the head of the Center for Subsurface Sensing and Imaging Systems (a multi-university organization that studies automatic threat detection) and a microwave engineer at Northeastern University, agreed that automated threat detection using just this kind of imaging would be very hard. "How do you get a computer algorithm to say this fits in the parameters of what's human and this is something that is not human?" Rappaport asked. "There are a lot of things that could look naturally occurring but that are cleverly disguised explosives."
This problem is not easily sidestepped. It's built into the detection task: it's hard to know what you're looking for, and even harder to provide a computer with a set of rules that precisely define the characteristics of something you've never seen before.
"What are you looking for? If you're looking for something that looks like a Glock or a roadrunner cartoon object with a fuse coming out of it, that's easy," Rappaport said. "Guns have to have a barrel. Knives have to have sharp edges, but an explosive can be formed into anything."
And not only is it difficult to predict the precise form of a threat, but people's bodies vary too, introducing even more complexity, Rappaport said.
Even Pistole admitted that the rate of false positives was too high based on the TSA's own testing. Some of the L-3 ProVision Automatic Threat Detection systems have been deployed in other countries, most prominently at Amsterdam's Schiphol Airport and the Hamburg Airport. At the latter, officials revealed this week that folds in clothing were creating false alarms. L-3 has not made -- and does not plan to make -- its data public in a peer-reviewed journal.
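Why do false positives matter so much at airport scale? Some back-of-the-envelope arithmetic makes it clear. The figures below are illustrative assumptions, not TSA or L-3 numbers.

```python
# Back-of-the-envelope false-alarm arithmetic. Both inputs are
# assumptions for illustration, not official figures.

daily_passengers = 2_000_000   # rough US daily screening volume (assumed)
false_positive_rate = 0.05     # 5% of innocent passengers flagged (assumed)

false_alarms_per_day = daily_passengers * false_positive_rate
print(f"{false_alarms_per_day:,.0f} extra pat-downs per day")
```

Even a seemingly modest error rate translates into an enormous number of follow-up pat-downs every day, which is why the TSA's detection standards are hard to meet.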
Human beings are actually great at this kind of pattern detection. As Rappaport put it, even a two-year-old can tell you the difference between a dog and a cat, whereas the best computer vision systems can't. There is a reason that image recognition tasks are among the most popular assignments on cheap labor markets like Mechanical Turk.
The TSA did not officially provide a timeline for when automated threat detection might be deployed. "The current version of automated threat detection technologies do not meet TSA's detection standards," spokesperson Sarah Horowitz wrote to me. "TSA sees automated threat detection as a viable option for the future."
Rappaport thinks that the real answer to automated threat detection will be to use "multiple modalities." So, in addition to an x-ray backscatter or millimeter wave scanner, there would also be a chemical detection machine or some other type of technique. Obviously, such a system would be much more complex and take longer to develop than a few months.
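Rappaport's multi-modality idea boils down to fusing evidence from independent sensors before raising an alarm. The sketch below uses a hypothetical additive combination rule and made-up scores to show the intuition: a disguised object must now evade two different physical measurements at once.

```python
# Toy "multiple modalities" fusion. The scores (each 0..1) and the
# additive combination rule are hypothetical, chosen only to
# illustrate the idea of requiring agreement across sensors.

def fuse(imaging_score, chemical_score, threshold=1.0):
    """Alarm only when combined evidence from independent sensors
    (e.g., a millimeter wave scan plus a chemical sniffer) is strong."""
    return imaging_score + chemical_score > threshold

print(fuse(0.7, 0.1))  # imaging alone is ambiguous -> no alarm
print(fuse(0.7, 0.6))  # both modalities suspicious -> alarm
```

Requiring corroboration like this can cut false alarms, but as the article notes, building and certifying a combined system is a far bigger job than tuning one scanner.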
Nonetheless, holding out the carrot of an automated scanner is very effective rhetoric. When Congressional representatives hear complaints about pat-downs and body scans, they can assure their constituents that they're working on it and liberally sprinkle that statement with that magic word: technology.