Fortunately, the Department of Justice is acting on these findings and warning state governments to stop funding Scared Straight and similar programs. But Scared Straight is not the only government program that has been shown to cause harm. The federal government’s long-running after-school program, 21st Century Community Learning Centers, has shown no effect on academic outcomes for elementary-school students—and significant increases in school suspensions and incidents requiring other forms of discipline. The Bush administration attempted to reduce funding for the program. But following impassioned testimony on behalf of the program by Arnold Schwarzenegger, then a potential candidate for governor of California, congressional appropriators agreed to restore all funding. Today the program still gets more than $1 billion a year in federal funds.
What can we do to promote moneyball in government? The first (and easiest) step is simply collecting more information on what works and what doesn’t.
The Obama administration has already pushed federal agencies to bolster their analytic capabilities and to show how their funding priorities are evidence-based, particularly in their budget submissions. As a result, the administration’s 2014 budget proposal had an unprecedented focus on evidence and results.
A nonprofit organization that advocates for evidence-based decision making, called Results for America, has proposed a number of measures that would expand on these efforts. It is calling for reserving 1 percent of program spending for evaluation: for every $99 we spend on a program to improve education, reduce crime, or bolster health, we would spend $1 making sure the program actually works.
The Harvard economist Jeffrey Liebman has written that, based on his simple but convincing calculations, “spending a few hundred million dollars more a year on evaluations could save tens of billions of dollars by teaching us which programs work and generating lessons to improve programs that don’t.” Who wouldn’t want a 100-fold return on investment?
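The arithmetic behind these two claims can be made concrete with a short sketch. The dollar figures below are illustrative assumptions drawn loosely from the text (a "few hundred million" taken as $300 million, "tens of billions" as $30 billion), not precise estimates:

```python
# Illustrative arithmetic for the 1 percent evaluation set-aside
# and Liebman's rough return-on-investment claim.

def evaluation_set_aside(total_budget: float, share: float = 0.01) -> float:
    """Dollars reserved for evaluation under a fixed-percentage set-aside."""
    return total_budget * share

# Results for America's 1 percent rule: of every $100 in program
# spending, $1 goes to making sure the program actually works.
print(evaluation_set_aside(100))  # 1.0

# Liebman's back-of-the-envelope return: assumed figures only.
evaluation_cost = 300e6       # "a few hundred million dollars" (assumption)
projected_savings = 30e9      # "tens of billions of dollars" (assumption)
roi = projected_savings / evaluation_cost
print(f"Implied return: {roi:.0f}x")  # Implied return: 100x
```

Under those assumed figures, the implied return is the roughly 100-fold payoff the passage describes; the point of the sketch is the ratio, not the specific dollar amounts.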
The more evidence we gather, and the more systematically we present it, the harder it will be for lawmakers to ignore. Still, linking evaluation to program funding will be tough, as both of us have seen in practice, again and again.
Essential to a more results-driven government is holding politicians accountable for supporting failing programs. Interest groups regularly rate politicians on their adherence to a particular perspective. What if we had a Moneyball Index, easily accessible to voters and the media, that rated each member of Congress on his or her votes to fund programs that have been shown not to work?
Even absent such public shaming, the government is taking steps in the right direction. The Department of Education’s Investing in Innovation (i3) program for improving student achievement and educator effectiveness, for instance, gives priority to projects backed by rigorous evidence of success, while still allocating a portion of its funds for promising programs willing to build evidence over time. The program originated in the rush and jumble of the Recovery Act, so it bypassed some typical congressional hurdles. But the performance mandate now built into i3’s design provides a model for how the federal government can make decisions about programs based on impact. Liebman has put forward some good ideas about how to expand upon that model. He suggests that, to start, 5 percent of the dedicated funding that’s delivered each year by the federal government to state and local governments—which includes major programs like the Community Development Block Grant and the Community Mental Health Services Block Grant—be reserved for programs that have demonstrated their worth. That share could rise over time as the evidence base expands.