Prison by Algorithm

A U.S. Senate bill aims to decrease recidivism rates, likely using statistical models. Results from this kind of effort have been mixed.

When the Sentencing Reform and Corrections Act of 2015 was introduced in the United States Congress last year, Republican and Democratic senators backed the ambitious bill. Experts complimented its call for changes to mandatory minimums and solitary confinement and its proposal to thin the federal prison population.

Lost in the praise, however, was a section that would radically change how the Bureau of Prisons tries to prevent recidivism. A proposed program instructs the U.S. attorney general to establish what the bill calls a “post-sentencing risk- and needs-assessment system” for federal prisoners, which would assign inmates a low, moderate, or high score based on their likelihood of recidivism. “Dynamic risk factors”—including “indicators of progress and improvement, and of regression, including newly acquired skills, attitude, and behavior changes over time”—would determine the ratings.

In theory, a prisoner’s score could define critical elements of his or her incarceration. The language of the bill indicates that an inmate’s score would affect housing assignments and telephone and visitation privileges, and would be consulted in assigning inmates to anti-recidivism programs like vocational training, faith-based programming, and drug- and alcohol-recovery classes. But it goes even further. The bill proposes effective sentence reductions for every 30 days of successful participation in an approved anti-recidivism activity: low-risk inmates who are eligible for reduced sentences would receive 10 days of credit for every 30 days of successful participation, while other inmates would receive only five days.*
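The credit scheme described in the bill amounts to simple arithmetic. As a rough sketch of how it would work in practice (the function and its structure are illustrative, not drawn from the bill's text):

```python
# Illustrative sketch of the bill's credit arithmetic (names are hypothetical):
# low-risk inmates earn 10 days of sentence credit per 30 days of successful
# program participation; all other inmates earn 5 days per 30-day period.

def earned_credit(days_participated: int, low_risk: bool) -> int:
    """Return total days of sentence credit earned."""
    completed_periods = days_participated // 30  # credit accrues only per full 30-day period
    per_period = 10 if low_risk else 5
    return completed_periods * per_period

# A low-risk inmate completing 180 days of programming:
print(earned_credit(180, low_risk=True))   # 6 periods x 10 = 60 days
# A higher-risk inmate with identical participation:
print(earned_credit(180, low_risk=False))  # 6 periods x 5 = 30 days
```

The gap this arithmetic produces is the heart of the Federal Public and Community Defenders' objection discussed below: identical effort yields half the reward for anyone not already classified as low-risk.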

In some places, predictive models have successfully helped reduce recidivism in prisons. In 2008, Rhode Island instituted a system similar to the proposed federal one. Senator Sheldon Whitehouse, one of the crime bill’s most vocal supporters on the Hill, reports that his state’s program has resulted in a 17-percent reduction in its prison population and a six-percent drop in recidivism over the last eight years.

Predictive strategies are also in vogue in some law-enforcement circles. Police are using software to mine data—including arrest records, census information, social-media posts, even weather forecasts—to anticipate crime before it happens. These tactics have been implemented in cities from Santa Cruz to New York to Miami. They have been used to dispatch officers to crime-ridden street corners, monitor high-risk individuals, predict police misconduct, set bail, and, in Pennsylvania, make sentencing decisions in court.

This combination of big data and traditional policing leads officers to focus on individuals flagged as possible future criminals and allegedly high-crime neighborhoods. Police still patrol streets and knock on doors, but computers, rather than humans, decide which streets and which doors. Essentially, this is crime forecasting: Police are trying to stay ahead of potential criminals, who may or may not be about to violate the law.

These strategies appear to be somewhat effective at reducing crime, but studies indicate mixed results. The Chicago Police Department’s much heralded predictive strategy is beginning to unravel as the city’s murder rate skyrockets. And evidence suggests that built-in biases against ethnic minorities and the poor affect many of these programs. When ProPublica analyzed an algorithm used to predict a defendant’s chances of recidivism in Broward County, Florida, it found that black defendants were wrongly labeled as potential future criminals at nearly twice the rate of white defendants.

Like its state-level cousins, the proposed federal program has issues. A report by the Federal Public and Community Defenders (FPCD) points out that the new system is a “novel and untested” experiment that may suffer from the same pitfalls as other predictive methods, including racial biases and blunt, inaccurate predictions. The report goes on to say that it is an idea premised on an “unfounded assumption” that prisoners can lower their classification through good works while in prison. Static factors, such as age and prior convictions, which are commonly given great weight in similar algorithms used in state penal systems, would simply overwhelm any dynamic factors like good behavior, they argue. Good behavior is accumulated over time. A day without a fight is only as good as the next day without a fight. A prisoner’s behavior and attitude cannot outweigh a long rap sheet without considerable time and effort. Besides, recidivism programs are theoretically best directed at the prisoners who have the greatest risk of reoffending, the report continues, not at those already deemed the least likely to return to prison. To remedy this, Congress should incentivize all inmates equally, it concludes.

The bill does offer some safeguards against bias and error. For example, it requires the Department of Justice to consult experts to ensure that the risk-assessment metric follows best practices. By law, inmates would be afforded an opportunity to improve their scores every few years, depending on the length of their sentence. The overall system would also be subject to review every three years or so.

But there are constitutional issues as well. Inmates would have no avenue to challenge or appeal their score, through a court or otherwise. As written, the periodic review seems less an opportunity for an inmate to plead his or her case than a chance to show evidence of progress that might reclassify him or her from “high-risk” to “moderate,” or “moderate” to “low.” As the FPCD report says, “The bill would violate the separation of powers, the due-process clause, and the Sixth Amendment by making all determinations and assessments against the inmate unreviewable in any forum.”

These issues have been largely ignored by the bill’s bipartisan supporters. At an introductory press conference last October, a handful of the bill’s 35 co-sponsors declared it a triumph—a sign that Congress was once again capable of working together. The Obama administration expressed support soon after. The Congressional Budget Office touted its cost savings in April. And a flurry of press releases followed, many calling for a vote.

Congress has not taken the bill up since the Senate Committee on the Judiciary voted to send it to the floor nearly a year ago. “The question will obviously be asked: Can you get this through the Senate?” Senator Dick Durbin said at a press conference last year. “The next question is, obviously, what’s going to happen in the House?” Durbin shook his head. “I don’t know,” he said over a chorus of nervous laughter from the senators who stood beside him. “I honestly don’t know.”

The deeper question, it seems, is about the future of law-enforcement practices. The very concept of predicting crime challenges the presumption of innocence, a central tenet of the American criminal-justice system. Forecasting is a blunt tool. Even the best software incorrectly predicts future crime at a high rate.

The proposed recidivism program in the bill threatens to deny some inmates the basic protections of American criminal law. When a person’s liberty is at stake, our legal system affords him or her the highest protection. If this bill passes, or if states continue to embrace predictive law-enforcement tools, prisoners will suffer the consequences of crimes they have not committed. They will do so based on a statistical model, rather than the decision of a judge or a jury of their peers.

* This article has been updated to clarify that although all inmates will be eligible to participate in anti-recidivism programs, the bill mandates the use of risk scores in assigning inmates to programs, and offers rewards for participation that are linked to risk scores.