“While officers raced to a recent 911 call about a man threatening his ex-girlfriend, a police operator in headquarters consulted software that scored the suspect’s potential for violence the way a bank might run a credit report. The program scoured billions of data points, including arrest reports, property records, commercial databases, deep Web searches and the man’s social-media postings,” the story begins. “It calculated his threat level as the highest of three color-coded scores: a bright red warning. The man had a firearm conviction and gang associations, so out of caution police called a negotiator. The suspect surrendered, and police said the intelligence helped them make the right call—it turned out he had a gun.”
That sounds like a good outcome in that particular case. And Fresno Police Chief Jerry Dyer is a fan of the product as one part of a larger municipal-surveillance network that’s now at his disposal. “Our officers are expected to know the unknown and see the unseen,” he told the Post. “They are making split-second decisions based on limited facts. The more you can provide in terms of intelligence and video, the more safely you can respond to calls.”
But as is often the case when police departments adopt new technology on their own initiative, without a sustained period of public comment and civic debate, Dyer appears to underestimate the technology's pitfalls, and has adopted it without key safeguards, making problems all but certain.
One was raised at a City Council meeting:
Councilman Clinton J. Olivier, a libertarian-leaning Republican, said Beware was like something out of a dystopian science fiction novel and asked Dyer a simple question: “Could you run my threat level now?”
The scan returned Olivier as a green, but his home came back as a yellow, possibly because of someone who previously lived at his address, a police official said. “Even though it’s not me that’s the yellow guy, your officers are going to treat whoever comes out of that house in his boxer shorts as the yellow guy,” Olivier said. “That may not be fair to me.”
In fact, depending on how heavily Beware weighs social-media posts, it seems relatively easy for a hostile person to create a few fake accounts and elevate someone else’s personal threat assessment, and that of their home address, to red status.
How resistant is this proprietary search capability to being gamed? Fresno’s police department can’t possibly know, not only because the product is relatively new, but because it won’t be told how the technology actually works.
Intrado considers the specifics a trade secret.
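Because the actual scoring logic is secret, any illustration of the gaming risk has to be speculative. But even a toy model, which simply counts threatening keywords across posts attributed to a name, shows why fake accounts are a plausible attack (everything here, from the keyword list to the thresholds, is an invented assumption, not Intrado's method):

```python
# Purely hypothetical sketch: Beware's real algorithm is a trade secret.
# This toy scorer counts "threat" keywords across the social-media posts
# attributed to a subject, then maps the raw count to the three
# color-coded levels the Post story describes.

THREAT_KEYWORDS = {"gun", "gang", "fight", "kill"}  # invented for illustration

def toy_threat_score(posts):
    """Count keyword hits across all posts attributed to a subject."""
    return sum(1 for post in posts
                 for word in post.lower().split()
                 if word in THREAT_KEYWORDS)

def color_code(score):
    """Map a raw score to green / yellow / red (thresholds are made up)."""
    if score >= 5:
        return "red"
    if score >= 2:
        return "yellow"
    return "green"

# A subject's genuine posts are innocuous...
real_posts = ["heading to the lake this weekend"]
print(color_code(toy_threat_score(real_posts)))               # green

# ...but a hostile party can attribute fake posts to the same name,
# pushing the score into the highest bracket.
fake_posts = ["gun gun gang fight kill"]
print(color_code(toy_threat_score(real_posts + fake_posts)))  # red
```

The point is not that Beware works this way, only that any scorer leaning on unauthenticated social-media data inherits this vulnerability.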
Opacity of that sort will make it much more difficult to evaluate the efficacy of the company’s tool. And it could easily obscure egregious civil-liberties violations. For example:
- The algorithm could assign an elevated threat level to individuals who have social-media accounts registered under names typically given to black or Hispanic people.
- It could assign an elevated threat level based on tweets or Facebook posts that offer constitutionally protected speech that criticizes police officers or police unions.
- It could disadvantage low-income people by assigning an elevated threat level to their addresses based on the behavior of past tenants in their high-turnover apartments, while richer folks in single-family homes are less often miscast.
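The address problem Olivier stumbled onto and the rental-turnover concern above share one mechanism: if an address's score is derived from everyone ever linked to it, current residents inherit the history of past ones. A hypothetical sketch (again, not Intrado's method, just one simple rule that would produce exactly Olivier's "yellow house" result):

```python
# Hypothetical sketch: an address score that takes the worst score of
# anyone ever recorded at that address. The real algorithm is secret;
# addresses, scores, and thresholds here are invented.

from collections import defaultdict

resident_scores = defaultdict(list)  # address -> scores of past and current residents

def record_resident(address, score):
    resident_scores[address].append(score)

def address_level(address):
    """The address inherits the worst score ever associated with it."""
    worst = max(resident_scores[address], default=0)
    if worst >= 5:
        return "red"
    if worst >= 2:
        return "yellow"
    return "green"

# A high-turnover apartment: one past tenant with an elevated score...
record_resident("123 Elm St, Apt 4", 3)
# ...taints the address for a later tenant with a clean record,
# just as Olivier's house came back yellow while he came back green.
record_resident("123 Elm St, Apt 4", 0)
print(address_level("123 Elm St, Apt 4"))  # yellow
```

Under a rule like this, high-turnover rentals accumulate risk that single-family homes rarely do, which is exactly the disparate impact described above.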