In 16 “undisclosed locations” across northern Los Angeles, digital eyes watch the public. These aren’t ordinary police-surveillance cameras; these cameras are looking at your face. Using facial-recognition software, the cameras can recognize individuals from up to 600 feet away. The faces they collect are then compared, in real time, against “hot lists” of people suspected of gang activity or of having an open arrest warrant.
Considering arrest and incarceration rates across L.A., chances are high that those hot lists disproportionately implicate African Americans. And recent research suggests that the algorithms behind facial-recognition technology may perform worse on precisely this demographic. Facial-recognition systems are more likely to misidentify African Americans, or to fail to identify them at all, than people of other races, errors that could result in innocent citizens being marked as suspects in crimes. And though law enforcement across the country is rolling out this technology, little is being done to explore the bias, let alone correct for it.
State and local police began using facial recognition in the early 2000s. The early systems were notoriously unreliable, but today law-enforcement agencies in Chicago, Dallas, West Virginia, and elsewhere have acquired or are actively considering more sophisticated surveillance-camera systems. Some of these systems can capture the faces of passersby and identify them in real time. Sheriff’s departments across Florida and Southern California have been outfitted with smartphone- or tablet-based facial-recognition systems that can be used to run drivers and pedestrians against mug-shot databases. In fact, Florida and several other states enroll every driver’s license photo in their facial-recognition databases. Now, with the click of a button, many police departments can identify a suspect caught committing a crime on camera, verify the identity of a driver who does not produce a license, or search a state driver’s license database for suspected fugitives.