The pushback against using algorithms pretrial is relatively new, as bail reform has only recently gained steam. Many different kinds of tools are in use: Forty jurisdictions use the PSA, others use algorithms created by state governments, and still others employ systems developed by for-profit companies or nonprofits.
No matter the algorithms’ origin, activists have questioned the way scores are generated. The PSA, for one, uses a database of some 750,000 cases from more than 300 jurisdictions to identify risk factors. To determine a person’s score, it requires specific information about them: their age, the charge in question, any other pending charges, any prior misdemeanor or felony convictions, whether any of those convictions were violent, whether they have ever failed to appear in court, and whether they have served any prior prison sentences. Notably, it doesn’t require a person’s race—or their gender, education level, economic status, or neighborhood.
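For a sense of how such an instrument works mechanically, here is a minimal sketch in Python. The point weights and bucket cutoffs are illustrative assumptions for this article, not the PSA’s published values (the actual PSA, for instance, reports separate scales for failure to appear and new criminal activity). The sketch shows only the general shape: each risk factor adds points, and the total maps to a small score range.

```python
# Illustrative sketch of a point-based pretrial risk score.
# All factor weights and cutoffs here are hypothetical, NOT the PSA's
# published values; the sketch shows only the general mechanics:
# each risk factor adds points, and the total maps to a small scale.
from dataclasses import dataclass


@dataclass
class Defendant:
    age: int
    pending_charge: bool           # another charge pending at arrest
    prior_misdemeanor: bool
    prior_felony: bool
    prior_violent_conviction: bool
    prior_failure_to_appear: bool
    prior_incarceration: bool      # any prior sentence to incarceration
    # Note what is absent: race, gender, education, income, neighborhood.


def raw_points(d: Defendant) -> int:
    """Sum hypothetical point weights for each risk factor."""
    points = 0
    if d.age < 23:
        points += 2
    if d.pending_charge:
        points += 3
    if d.prior_misdemeanor or d.prior_felony:
        points += 1
    if d.prior_violent_conviction:
        points += 2
    if d.prior_failure_to_appear:
        points += 2
    if d.prior_incarceration:
        points += 2
    return points


def risk_level(points: int) -> int:
    """Bucket the raw total into a 1-6 scale, as many tools do."""
    cutoffs = [0, 2, 4, 6, 8, 10]  # hypothetical bucket boundaries
    level = 1
    for i, cutoff in enumerate(cutoffs, start=1):
        if points >= cutoff:
            level = i
    return level


if __name__ == "__main__":
    d = Defendant(age=21, pending_charge=True, prior_misdemeanor=True,
                  prior_felony=False, prior_violent_conviction=False,
                  prior_failure_to_appear=True, prior_incarceration=False)
    total = raw_points(d)
    print(f"raw points: {total}, risk level: {risk_level(total)}")  # 8 -> 5
```

Real instruments differ in their weights and cutoffs, but this basic additive structure is common to many of them; what a judge typically sees is the final level, not the arithmetic behind it.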
“We think [the concerns expressed in the July statement] are misplaced,” said Jeremy Travis, the executive vice president of criminal justice at the Laura and John Arnold Foundation, which created the PSA. He said he agrees with the groups’ worries about structural racism, but noted that algorithms don’t preclude other improvements to the pretrial system. “Risk assessment is not an impediment to reform,” Travis said. “It can be an avenue to reform.”
The inner workings of other algorithms are less clear, though—sometimes by design. When university researchers have requested details, many jurisdictions have refused to turn them over, claiming the information is owned by private companies. Perhaps the best-known risk-assessment tool is COMPAS, which drew attention in 2016 after ProPublica published an article claiming its algorithm was biased against black defendants. The for-profit company that created COMPAS denied those allegations, but as The Washington Post wrote, it “refused to disclose the details of its proprietary algorithm, making it impossible to fully assess the extent to which [its technology] may be unfair, however inadvertently.”
The algorithms’ scores aren’t the only thing that concerns activists: They also worry about how actors within the criminal-justice system interpret them. “Predictions serve some sort of purpose, and the purpose they serve is to advise a judge on what’s supposed to be done,” said Logan Koepke, a senior policy analyst at the nonprofit Upturn who studies how scores are implemented.
He pointed to a 2017 study from George Mason University that examined Kentucky’s pretrial risk-assessment system, which was made mandatory in 2011. It found that the algorithm led to significant changes in bail-setting practices but only a small increase in actual pretrial release. Even those changes eroded over time, the study showed, as judges returned to their previous habits.