When employers surveil workers, it’s usually to cut costs and ensure efficiency—checking when people clock in and out, whether they’re hitting sales goals, and so on. But for police, how officers do the job is a matter of life and death, of law and order. Their bosses, and the communities they serve, want to know whether officers are violating someone’s rights. In the event of a shooting, they want to know how it happened. Now they have more insight than ever.
Thanks to new machine-learning tools, researchers and police departments can use behavioral data to spot the earliest signs that an officer may be flouting policy or at risk of shooting an unarmed civilian. To build the algorithms that may one day generate a sort of “risk score” for police, researchers are drawing on familiar sources: data from police body cameras and squad cars, and the internal records that departments usually keep locked away from researchers, including information on officer suspensions and civilian complaints.
Of all this information, body cameras—purpose-built to create an objective, unaltered record of an officer’s every move on the job—may be the most valuable. At least in theory: Since the Justice Department began offering millions of dollars in body-camera grants in 2015 and advocates began clamoring for the technology, police have claimed that cameras fell off, suddenly came unplugged, or exploded, and that footage was accidentally deleted or never filed. At the same time, civil-rights advocates’ once-widespread support for the devices has cooled amid suspicion that police have too much discretion over when to record and when to release footage.