I have been thinking lately of the minister and novelist Frederick Buechner, who recounts once in a book that, in the middle of his morning routine, bleary-eyed and sleep-drunk, he sometimes looks up from the sink and into the mirror.
“What bothers me is simply the everlasting sameness of my face,” he writes. “Those eyes, that nose, that mouth—the variations of expression they’re capable of is really so restricted. The grimmest human tragedy can furrow the brow little more than the momentary pain of the dentist's drill.”
He thinks of his family, his friends, all the people in his life who know him in large part through his countenance. “I am forced to conclude,” he says, “that to an alarming degree I am my face.”
Facebook has also been thinking about faces. Last summer, the company’s artificial intelligence team announced that its facial-recognition software passed key tests with near human-level accuracy. Last week, it presented a further development: Yann LeCun, the AI team’s director, boasted that a different algorithm could identify people 83 percent of the time even if their faces were not in the picture. The program instead works from a person’s hairdo, posture, and body type.
Buechner’s statement is phrased generally, but it’s no less profound in the domain of computer security and identity. We are our faces, in a way we are not our Twitter profiles, social-security numbers, or even legal names. Although vast amounts of data are collected about most Internet users, they’re tied to what are essentially bureaucratic identifiers, like browser cookies or email addresses. Almost everything that represents me online is ultimately a jumble of numbers and letters, and nearly all of it—with some cost or sacrifice—can be changed. Even victims of fraud or domestic violence can apply to the government for a new social-security number.
A face, though—that’s different. We’re stuck with our faces. It’s prohibitively expensive to change them beyond recognition, if it’s even possible. Facial recognition and other biometrics bind data about us to us like nothing else. And, once corporate metadata can recognize and glom onto our bodies—in all their “everlasting sameness”—we can never escape that link.
So what’s to be done? In 2014, the U.S. Department of Commerce held talks about how and whether facial-recognition technology should be regulated. The talks, officially called the “privacy multi-stakeholder process,” were convened by the National Telecommunications and Information Administration (NTIA), the government agency that advises the president on technology policy. The negotiations included representatives from both consumer-advocacy groups and the tech industry.
The talks are still ongoing, but they no longer include the consumer advocates. User privacy groups, including the EFF and the Consumer Federation of America, walked out in June over what they said was industry obstinacy. The industry and its lobbyists, they said, would not concede that users might need to consent to facial-recognition software even in the most extreme instances imaginable, so it was no use participating in the talks. As they walked out, the press promptly rushed in—perhaps because the failure of negotiations about digital privacy sounds foreboding and science-fictional.
Reading some of the coverage, I wondered how much the talks really meant. I also wondered how good facial-recognition software actually is. Have algorithms as good as the Facebook AI team’s debuted in the wild? If not, how long do we have until they are?
Even more intriguing was the sticking point of the negotiations. For what consumer-advocacy groups and industry representatives in the talks could not agree on, above all, was consent. How would regulation of facial-recognition software even work? If facial recognition ever gets really good, who will own our faces?
There are multiple kinds of facial recognition. The first is the most benign: It’s called face detection, and it’s the software in phone cameras that says, “Hey, here’s a face,” then (often) auto-focuses the lens on it. The second is facial characterization, which discerns the demographics of a face: It sees not a generic human but a white male in his early thirties. (It’s this type of software that powers “smart billboards,” like the German video screen that only shows a beer ad when women walk by.)
Other types of facial recognition have more serious implications. Some software uses facial recognition for verification purposes, activating a laptop or phone only when the camera sees an approved visage. But the gravest of all is when facial recognition is used to identify an unknown person—when a database connects a stranger’s face to a name.
It’s this kind of facial recognition that people worry about most, and the kind over which the government-hosted talks broke down—but more on that later. What’s important to understand first is that even this most serious form of facial recognition has two different varieties. There’s an online, computer-augmented form: Can software identify you from a picture uploaded to Facebook? And there’s an offline, “in the wild” form: Can software identify you by taking your picture while you’re just walking around?
The consensus among privacy experts is that companies’ ability to use facial recognition online far exceeds their ability to use it offline right now. The Facebook AI team’s vaunted algorithm, for instance, specializes in photos online. It knows whether two images depict the same person 97.25 percent of the time. On average, humans score 97.5 percent on the same test.
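Verification systems of this sort typically reduce each face image to a numeric “embedding” vector and declare two images the same person when the vectors are close enough. A minimal sketch of that decision rule, with toy three-dimensional vectors standing in for the 100-plus-dimensional embeddings real systems use (the names and threshold here are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    """Declare a match when the embeddings are close enough.
    The threshold trades false positives against false negatives."""
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings: two photos of one person, one photo of another.
alice_photo_1 = [0.9, 0.1, 0.3]
alice_photo_2 = [0.85, 0.15, 0.35]
bob_photo = [0.1, 0.9, 0.2]

print(same_person(alice_photo_1, alice_photo_2))  # similar vectors -> True
print(same_person(alice_photo_1, bob_photo))      # dissimilar -> False
```

The accuracy figures above amount to how often this kind of thresholded comparison agrees with the ground truth; tuning the threshold shifts errors between wrongly merging two strangers and wrongly splitting one person.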
Offline, companies claim to have facial recognition figured out, but the evidence is thin. An Israeli company called Face Six says that 30 churches around the world use its facial-recognition software, Churchix, to track parishioners, but it refused to specify to a Fusion reporter which churches they are—which renders its assertions unverifiable. Another company, FaceFirst, boasts that it can identify known shoplifters as they enter a store.
But you don’t need to be Facebook to have access to powerful facial-recognition technology. In a 2014 study, a Carnegie Mellon professor named Alessandro Acquisti found that a statistically significant number of users on an anonymous dating site could be identified by running their profile pictures through, basically, a reverse Google Image search. In a second experiment, he also found that about one-third of people walking around a college campus could be identified by name just by comparing webcam-captured images of them to Facebook profile pictures. Furthermore, he could algorithmically predict the interests and even the social-security numbers of some of the people in this latter experiment.
In other words, facial-recognition software achieved by jury-rigging Facebook, a webcam, and Google Image Search together works successfully, in the wild, about 33 percent of the time.
In an email, Acquisti said that three main challenges limit facial recognition’s effectiveness for now. First, there are no databases of people’s faces large enough to identify the random person on an American street, he said. Second, databases that are large enough to be useful tend to trip up computers and result in many “false positives” where a subject is incorrectly identified. Finally, computers just aren’t fast enough yet. “Even using cloud computing, comparing one face to billions of potential target faces is going to take time,” he told me.
But he said that none of these challenges could hold back facial recognition forever.
“Consider this: None of the challenges I just mentioned is ‘systemic.’ What I mean is that, over time, research has been (and will keep) overcoming each and all of them,” he said in an email. Social networks are growing their databases of people’s faces every day, and algorithms are getting better at distinguishing between similar faces. Computing, too, seems to forever be getting cheaper.
In other words, no huge technological advance needs to happen for facial recognition to get much better. Acquisti said that a breakthrough would likely occur when facial-recognition technology could be combined with other social-network-provided metadata, such as location, gender, height, and IP address. That would reduce the scope of the task, changing it from ‘identify this random stranger’ to ‘identify this skinny, 5′11″ man on this street in this U.S. city around this time of day.’
“From a technological perspective, the ability to successfully conduct mass-scale facial recognition in the wild seems inevitable,” Acquisti told me. “Whether as a society we will accept that technology, however, is another story.”
“Let’s say someone is walking down the street,” said Alvaro Bedoya, the executive director of the Georgetown Center on Privacy and Technology and one of the consumer-privacy leaders in the talks. “Shouldn’t a company that has no relationship with that person have to ask for that person’s consent before identifying them?”
This, Bedoya said, is what negotiators were arguing about when consumer advocates decided to abandon the government-hosted talks. Though the talks were rarely productive, he said, it was industry unwillingness to admit that consent was possibly needed even in this one specific case that led consumer advocates to walk out. Bedoya had helped arrange and push for the NTIA talks when he served as chief legal counsel to Senator Al Franken.
A spokeswoman for NTIA told me the agency is “disappointed” the talks fell apart, and that it will continue to hold meetings about the issue. Carl Szabo, a policy counsel at the e-commerce trade association NetChoice, is still participating in those talks on the industry side. He implied that he believed consent would not be needed in many situations.
“You, I, everyone has the right to take photographs in public,” he told Fusion. “Facial recognition can be applied immediately, or days later, or months later. If someone takes a photograph in public, and wants to apply facial recognition, should they really need to get consent in advance? Are they going to chase someone down the street to get them to fill out a form?”
I’m struck by consent being the issue here. Consent as a virtue seems to arise from respect for personal autonomy and, even below that, from the Golden Rule, the moral maxim so universal that almost no ethical tradition omits it. The idea’s also just core to the concept of a contract: Two parties agree about what they will do for each other before they do it.
In practice, technology companies are split on whether users should get to give permission before being facially recognized. Facebook and its new Moments app require users to opt themselves out of the company’s faceprint database, meaning they are included by default. Microsoft, on the other hand, asks for user consent before subjecting them to facial recognition, according to a spokesman.
Google says that it doesn’t use facial recognition on consumer photo products at all and instead opts for “face clustering,” which only groups similar faces together on a local phone or computer. Users can also turn that feature off. In 2013, the company also banned facial-recognition apps on Google Glass.
And consumer-advocacy groups say they were ready to accept that consent is not important in all contexts. Bedoya told me that advocates were not hoping consent would become mandatory in “security” settings, for instance, such as where a store might be able to identify a shoplifter during or after a crime. In the final negotiating session, he said, advocates posed the most extreme case of all: Should a company need consent from a stranger on the street before using facial recognition on them?
Szabo said NetChoice had no position on this issue. But he said in an email that requiring consent before every use of the technology would “create universal complexities that would eliminate many of the benefits of facial recognition.” He then gave examples of some of these complexities:
Would a store need to get opt-in consent from a shoplifter before using facial-recognition technology? Should police get opt-in consent from a missing child before using this technology to find them? And should we have to get opt-in consent from every friend and family member before we tag him or her in our own photos?
It is worth noting that some of Szabo’s hypotheticals have little to do with the consumer privacy proposal. Szabo’s first question concerns a security application of the technology—whether a shoplifter can be facially identified—that Bedoya and other advocates say was explicitly exempt from the consent requirement. His second question is about what police can do with facial-recognition technology, even though it is private individuals and companies who would be limited by any proposed consumer regulation. (And regardless, because the missing child is a legal minor, the ability to consent to facial recognition would be delegated to that child’s parents.)
First Amendment and consumer privacy experts also disagreed that a right to use facial-recognition software on someone flows naturally from “a right to take photographs in public,” as Szabo seemed to imply to Fusion.
“It is well established that an entity may be able to lawfully photograph a person on a public street, but not be able to use the photo for advertising or trade purposes without the person’s consent,” said Anita LaFrance Allen, a professor of law and philosophy at the University of Pennsylvania. “This is state law in New York, California, and most other states.”
“There is no First Amendment right to take photographs whenever and however a person might like,” said Daniel Solove, a law professor at George Washington University and the CEO of TeachPrivacy, a security-education firm. “Facial recognition is not taking photos for any expressive purpose, but to use to identify a person for many potential purposes.”
Saying facial recognition deserves First Amendment protection would mean that nearly any kind of sensor or device that captures data also deserves First Amendment protection. “What about a device that captured people’s naked images underneath their clothes?” Solove asked, adding: “The law provides extensive regulation of personal data about individuals.”
But back to the consent issue. If facial-recognition cameras can be debuted anywhere—if “mass-scale facial recognition in the wild seems inevitable”—how will companies secure consent? Will suburban malls or city thoroughfares soon be filled with bustling cam-bots, snapping pictures of passersby and then rushing after them with a legal consent form, blue pen raised high in the air?
Probably not. “There’s a million different ways you could get consent,” Bedoya told me.
Online, especially, he believes it would be easy. “When a company asks for your location, you get a little popup that says, ‘Hey can we get your location?’,” he said. “When Google and Microsoft try to opt you into facial recognition, they serve you the same kind of dialogue I believe.”
For offline contexts, Bedoya said, there would be other options. A store that wanted to scan your face when you entered could register you through a web page, he said. Even if it scanned the faces of everyone entering a retail location, the store could secure consent to identify only those who opted into its VIP customer program online. VIP customers would then be assigned a personal assistant when they walked in.
He also imagined a society-wide, opt-out program. His students at Georgetown, working with engineers at MIT, theorized a system in which consumers could sign up for a “Do Not Tag” list at the local DMV, right when they get their picture taken. It would work something like an organ-donor program, he said, except it would create a large list of “blacklisted” faces unassociated with names.
“Anyone in the state who’s using facial recognition would have to run their database against the opted-out database, and if there’s a face match, they’d have to drop that person out,” he said.
Could regulation be passed to enforce consent? Congress has not passed new consumer privacy legislation since 2009. In that time, California has passed 27 new laws to protect consumers, Bedoya said. In the past decade, too, both Illinois and Texas have passed laws requiring consent before biometric identification. (They were signed by Rod Blagojevich and Rick Perry, respectively.) And one path to nationwide regulation would be for more states to pass their own biometric laws, which would essentially constrain companies—especially those working online—in the United States.
“I don’t think people should throw their hands up,” Bedoya told me. “It is industry best practice to get opt-in consent, it’s just that industry lobbyists in D.C. have decided to take a much more hardline position.”
In an email, Szabo said that NetChoice believes transparency, not regulation, is the best way to mitigate the risks of facial recognition.
“We are in favor of retail stores providing clear notice of facial recognition use and how that data is being collected and used,” he told me. “If a business does something that is not appealing, consumers will respond and the practice will be abandoned or the company will lose money until they fix the issue.”
He also lamented that “some privacy groups” had left the process. “We hope that these groups will return to the table. The absence of some stakeholders from NTIA’s process won’t stop us from trying to create a workable code of conduct for facial-recognition privacy,” he said.
Microsoft had a similar refrain, though it said it would commit to more than transparency: “We believe the stakeholder process is important and that is why we are participating. Should there be a consensus that an opt-in approach be adopted, that is something that we could support.”
And what about the government? The FBI’s facial-recognition database includes 52 million faces and up to one-third of Americans. Despite this size, it likely lags behind commercial databases: For one, it works mostly off a single mugshot of each subject, and facial-recognition software improves with every additional image of a face it has. And instead of a single match, it supplies a list of 50 “top candidates”; Techdirt has estimated that even this list is only 80 percent accurate.
But the EFF frets about the melding of commercial and governmental resources on the issue. “Several years ago, in response to a FOIA request, we learned the FBI’s standard warrant to social media companies like Facebook seeks copies of all images you upload, along with all images you’re tagged in. In the future, we may see the FBI seeking access to the underlying face recognition data instead,” said Jennifer Lynch, an attorney for the EFF, in a statement announcing its withdrawal from the NTIA talks.
It’s easy to imagine facial recognition deployed against high-profile criminals: spies, fugitives, assassins. But what’s striking to me is that we’re not legally far off from using facial recognition to catch people breaking ordinances and committing misdemeanors. Many U.S. states, in fact, have already embraced this kind of enforcement.
After all, traffic-camera systems that detect a speeding vehicle, scan its license plate, and mail the owner of the car a ticket already serve as this kind of robocop. They enforce the law algorithmically and without discretion. If we’re okay with legal enforcement of speed laws, would we be okay with a city installing facial-recognition systems in a “high-crime area” and automatically mailing a ticket to the home of every jaywalker or loiterer?
Which is one reason it’s so important to establish guidelines about these techniques as they apply to businesses. For, under today’s law, “those eyes, that nose, that mouth”—all the immutables that make you look like you—are not only yours to consider, not only yours to track, and not only yours to sell.