What if you could walk through that airport body scanner, pause for the camera, and know that your naked image would never be pored over by human eyes? If it were software, not TSA screeners, that searched you and other passengers for possible explosives?
That's the vision of Transportation Security Administration head John Pistole. At a Senate hearing yesterday, Georgia Republican Johnny Isakson conjured this future and suggested to Pistole, "It looks like technology can be a solution to the privacy issue." Pistole responded, "I think so, I'm very hopeful in that regard."
Earlier in his testimony, he'd remarked, "I see us in an interim period" in which the TSA was using the best available technology, but said that target recognition software "clearly addresses the privacy issue in its entirety" and would be available soon.
How soon? "I'd like to say months, but it's all technology driven," Pistole said.
While vendors like L-3 and Rapiscan are actively trying to come up with a magic technological solution for the TSA, independent experts on body scanning technology and automated threat detection aren't nearly as optimistic as the TSA head. Setting aside the question of how much real safety would be afforded by body scanners that use algorithms to detect artfully hidden explosives under someone's clothes (I'll leave it to our big guns to debate that point), there are fundamental problems that may make it very difficult to deploy them.
Here's how they work. First, an image is obtained with an x-ray backscatter or millimeter wave machine like the 385 systems already installed in 70 airports around the country. While the two types of machines have important differences, their basic principles are comparable. The electromagnetic waves (x-rays or radio) used in the machines pass easily through clothing, but bounce back when they encounter human skin (or other denser materials). Those reflections reach the scanner and are transformed into an image of the body sans clothing.
In one of the automated threat detection systems, that image would be fed to an algorithm that would compare it to a database of other images to determine whether it was suspicious. Instead of looking at an image of a person, the TSA screeners would see a stick figure indicating the general area where a problem existed. They would then follow up with a patdown or other screening procedure.
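To make the idea concrete, here's a minimal sketch of that compare-and-flag step, using simple template matching. Everything here is an illustrative assumption — the toy "reflectance" grid, the threshold, and the matching rule are invented for this example, not the vendors' actual algorithms:

```python
# Hypothetical sketch: slide known-threat templates over a scan and flag
# matching regions, so a screener sees only flagged locations on a stick
# figure, never the underlying body image.

def patch_distance(patch, template):
    """Sum of absolute differences between two equal-sized patches."""
    return sum(
        abs(p - t)
        for prow, trow in zip(patch, template)
        for p, t in zip(prow, trow)
    )

def flag_regions(scan, threat_templates, threshold):
    """Return top-left coordinates of any region of the scan that
    matches a threat template closely enough."""
    flags = []
    rows, cols = len(scan), len(scan[0])
    for template in threat_templates:
        th, tw = len(template), len(template[0])
        for r in range(rows - th + 1):
            for c in range(cols - tw + 1):
                patch = [row[c:c + tw] for row in scan[r:r + th]]
                if patch_distance(patch, template) <= threshold:
                    flags.append((r, c))
    return flags

# Toy 4x4 "reflectance" scan with a dense 2x2 anomaly at (1, 1).
scan = [
    [1, 1, 1, 1],
    [1, 9, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]
threat = [[9, 9], [9, 9]]
print(flag_regions(scan, [threat], threshold=0))  # -> [(1, 1)]
```

Real systems would work on far richer data than this toy grid, but the shape of the problem is the same: only the flagged coordinates, not the image, would reach the screener.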
Unfortunately, the technological task of automated threat detection is not trivial. There are inherent problems that make an accurate machine very, very difficult to build.
The most basic problem is that an algorithm is only as good as its training data. These machines are like a massive game of memory: they compare something new with something they've seen before. In order to make accurate determinations, they need a huge library of suspicious and normal images, said the Pacific Northwest National Laboratory's Doug McMakin, who developed the technology on which the L-3 SafeView system is based.
"To see different threats, you really have to scan a lot of people and put objects on different places on the body and use different kinds of threats too," McMakin said.
Of course, we could easily generate a huge database of images from all the people walking through the scanners right this minute, but the privacy problem that would represent makes it impossible. "You can build up this huge database, but because they don't save any of the imagery, you have to go out and get people to build up this database," he said.
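McMakin's "game of memory" point can be sketched with a toy nearest-neighbor classifier. The labels and the tiny one-dimensional "feature" vectors below are invented for illustration; the point is only that a comparison-based system misses anything its library hasn't seen:

```python
# Toy 1-nearest-neighbor classifier: label a new scan by its closest
# example in the training library. Data and labels are hypothetical.

def classify(sample, library):
    """Return the label of the library example nearest to the sample
    (summed absolute difference as the distance)."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(library, key=lambda example: dist(sample, example[0]))[1]

small_library = [
    ([1, 1, 1], "normal"),
    ([9, 9, 9], "threat"),   # only one threat placement seen in training
]

# A threat hidden somewhere the library has never seen looks "normal"...
hidden = [1, 1, 9]
print(classify(hidden, small_library))    # -> "normal" (a miss)

# ...until examples of that placement are added to the library.
bigger_library = small_library + [([1, 1, 9], "threat")]
print(classify(hidden, bigger_library))   # -> "threat"
```

That is exactly why, as McMakin says, you have to scan a lot of people, with objects in different places on the body and different kinds of threats: the library has to cover the cases you want the machine to catch.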