A mass shooting at the Capital Gazette in Annapolis, Maryland, yesterday killed five journalists, making it the deadliest domestic attack on the press since 9/11. Local police say a suspect in custody, Jarrod Ramos, appears to have acted alone, motivated by retribution for a failed defamation lawsuit against the paper. As accounts of the shooting and its aftermath arrived, one detail stood out: The suspect was uncooperative after apprehension, and the county police used facial-recognition technology to identify him.
Some would celebrate the use of any available technology to name an unidentified and uncooperative suspect caught in the act of a mass shooting, especially before the incident is clearly contained. But recently, complex surveillance technologies, like a service that Amazon pitched to law enforcement, have come under scrutiny. In addition, the mass-market success of DNA-collection services has made that technology's surveillance potential clear. This spring, the suspected Golden State Killer was arrested thanks to DNA matched to Joseph James DeAngelo on the genealogy website GEDmatch.
But the Anne Arundel County police department, which apprehended the Capital Gazette shooter, used a more mundane method for identifying the suspect: They matched an image of his face against a state database of driver's-license and mugshot photos. Those systems have been around for years, but citizens might not even know they exist. Just as the Golden State Killer case made ordinary citizens wonder what they don't know about how DNA might identify them in unexpected ways, so the Annapolis shooting highlights an even more ordinary technology, the driver's license, as an unexpected tool for mass surveillance.
Initial reports suggested that the suspect had somehow obscured his fingerprints, preventing law enforcement from using them to identify him. That's harder than it sounds. Mary Ellen O'Toole, a former FBI profiler, told CBSN anchor Anne-Marie Green that "a fingerprint examiner would still be able to get his prints from their side, the palm, and so forth." In a press conference on Friday, Anne Arundel County police chief Timothy Altomare said that the fingerprint-identification process was simply proceeding very slowly, and for that reason authorities made use of facial recognition instead.
Those systems have been around for a while, too. Since 2011, law enforcement has had access to smartphone-driven devices that can perform fingerprint, iris-scan, and facial identification. States have begun storing ID-card photos, and the FBI collects and aggregates those databases via partnerships with state DMVs (it does the same with civil and criminal fingerprints). As the deadline nears for the REAL ID Act, which sets standards for compliant identification, more states have started storing pictures of citizens' faces. The databases have also been used to prevent fraud, including identity theft and falsified official documents.
Altomare explained that his office used the Maryland Coordination and Analysis System, or MCAC, to identify the suspect. Searches of this kind raise questions about privacy and due process. Altomare admitted that the system "has come under some fire from civil libertarians," but added that "it would have taken much longer" to identify the suspect without it.
Now that technology makes so much information available so rapidly, the public sometimes expects law enforcement to have acted on perfect information, especially when it comes to social media. On Friday morning, a CBSN reporter asked Altomare, "Shouldn't Ramos's social-media posts have been on the police's radar?" According to police, Ramos had indeed made some threats "indicating violence" on social media. Altomare later acknowledged that the suspect "had a history [of threats] on the social-media platforms," but added, "We were not aware of that history until last night. Should we have been? Sure, in a perfect world we should have been."
Altomare added that law enforcement might have been able to do more if they'd still had access to Geofeedia, a location-based analytics company that provides real-time analysis of social-media posts within a geographic area. Sometimes nicknamed "TweetDeck for Cops," the service has been controversial: In 2016, the ACLU worried that the tool was being marketed to law enforcement as a way to monitor protesters. In 2017, popular social-media services like Facebook, Instagram, and Twitter stopped sending their data to Geofeedia, effectively cutting off law enforcement from this kind of monitoring.
It's impossible to know if Geofeedia or a similar service would have prevented the Annapolis incident. But after a deadly event like the Capital Gazette shooting, the moral and legal qualms about a service like Geofeedia clash with its possible utility. The same might be true of the MCAC: Now that it seems clear Ramos acted alone, the use of facial recognition might understandably raise hackles. But at the time, the police didn't know whether the violence was contained.
No matter when or how it's used, facial recognition is different from biometric fingerprint matching, DNA, cellular triangulation, GPS discovery, or even social-media geolocation. In all of those other cases, people leave material evidence behind: traces of the unique patterns of their digits or their genetic material, or records of where their devices, and therefore possibly they themselves, had been. All of these impressions are obscured, to some extent. Fingerprints and DNA can't be seen, but must be extracted and verified later. Cellular location, mobile IP addresses, and other means to locate smartphones or social-media posts live in the databases of the corporations that run that infrastructure.
By contrast, facial recognition relies on the most distinctive and unconcealed feature of the human form. Obscuring the face with clothing or accessories is possible, but only temporarily—and even then, a disguise could just provide another means for positive identification in surveillance footage. A fingerprint or a position in space doesn’t feel like a fundamental, conscious feature of someone’s lived identity. But their face very much does.
From a legal perspective, issues of search and seizure must address whether a subject would have a "reasonable expectation of privacy" in the materials searched or seized. Taking pictures of people in public generally isn't a violation because people have a reduced expectation of privacy in those situations. Collecting fingerprints is generally allowed, but DNA evidence is subject to additional scrutiny. One reason the ACLU gets concerned about a service like Geofeedia is that it's not entirely clear how such collection relates to citizens' protections from unlawful search under the Fourth Amendment. To preserve the sanctity of those protections, it is sometimes necessary to forgo technologies that might seem useful in retrospect.
The Capital Gazette shooting might bring new scrutiny to a common practice that few American citizens know much about: the collection, storage, and rapid search of driver's-license and ID-card photos. When facial recognition comes up as a topic of privacy and policy debate, it often takes the tenor of science-fiction dystopia. When big tech companies like Amazon or Nvidia make technologies that end up fueling state surveillance, real life starts to look like paranoia fiction. In China, the burgeoning social-credit system aspires to total surveillance of citizens. But the reality of state oversight by facial recognition is much more ordinary, and it's already here. Everyone who gets a driver's license or ID card interacts with it. And then they do so again, every day, anytime they show their face in public.