Jim Manzi

Reihan points me to Noam Scheiber’s TNR article (pdf) “Freaks and Geeks” from last year which takes on Steve Levitt and the tendency of many contemporary economists to focus on what Scheiber calls “clean identification” or “natural experiments”: situations in which it’s supposedly easy to discern the causal forces in play because only a single variable of interest is believed to have changed.  What this usually boils down to is that you have some accidental (and therefore roughly randomized) division of otherwise “identical” people into two groups.  One group (the test group) participates in some program, receives some treatment, gets some benefit or whatever, and a second group (the control group) does not.  Scheiber argues that this represents an inappropriate focus on cleverness rather than substance, because it leads economists to try to work forward from a methodology of clean identification to whatever “cute” problems can be addressed in this manner, instead of working backward from labor, income and other important problems to whatever methods are required to address them.
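The analytical core of such a study is simpler than the machinery around it: compare mean outcomes between the accidentally assigned groups. A minimal sketch, with made-up numbers of my own (none of this appears in Scheiber's article):

```python
# Illustrative sketch: the estimate at the heart of a natural-experiment
# study is a difference in mean outcomes between the test group and the
# control group. All data below are invented for illustration.

def difference_in_means(treated, control):
    """Estimated treatment effect: mean(treated) - mean(control)."""
    return sum(treated) / len(treated) - sum(control) / len(control)

# Hypothetical outcomes (say, earnings in $000s) for the two groups.
treated_outcomes = [31.0, 29.5, 33.2, 30.8]
control_outcomes = [28.1, 27.9, 29.0, 28.6]

effect = difference_in_means(treated_outcomes, control_outcomes)
print(f"estimated effect: {effect:.3f}")
```

The whole argument for "clean identification" is that, if the division into groups really was accidental, this simple comparison carries a causal interpretation.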

It seems to me that Scheiber walks all around the perimeter of a fundamental point, but doesn’t quite see it. 

An important goal (and in my view, the important goal) of any useful science, social or physical, is to develop reliable, non-obvious predictive rules of the form “If X, then Y”.  We can roughly describe such findings as operationally useful discoveries of causality.  As an example, in physics, you don’t get a lot of points for the law that says “If I let go of this pencil it is very likely to fall”.  You do get to be quite famous if you figure out and demonstrate that “If I let go of this pencil and this thousand-pound ball, they will fall at about the same rate, governed to good approximation by the simple equation d = gt²/2”.  This is the value-add of science.

The mathematical machinery of empirical economics and social science is largely an attempt to isolate causality in the complex world of human behavior.  The problem is that the data available to us almost never permits reliable isolation of causality.  Human society is so complex and our data sets are so limited that no amount of analytical sophistication is sufficient to crack this problem on a systematic basis.  Hence, to take an obvious example, nobody really knows if we are about to enter a recession in 2008.

The use of natural experiments represents an attempt to take this problem of causality seriously.  It is therefore exactly backwards to view this movement within economics as unserious.  The causality problem, however, runs deep – natural experiments are often insufficient to isolate causality because subtle differences exist between the test and control populations.  I have written a long blog post on why such effects make twin/adoption studies surprisingly unable to resolve the question of the relationship between race and IQ.  In a book review, I go into some detail about how Levitt’s “clean identification” of an abortion–crime linkage suffers from this same defect.  In my experience, it is very hard to find such analyses of natural experiments that don’t dissolve upon careful investigation.  Note that I’m not just saying that the findings from such studies are inexact, but I’m making the much stronger statement that we don’t reliably know whether the purported causal relationships exist at all.
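The mechanism behind those subtle differences is easy to exhibit. In the toy simulation below (my construction, not from the post), an unobserved trait both pushes people into the "treatment" group and independently raises the outcome; the naive comparison then reports a large effect even though the treatment, by construction, does nothing at all:

```python
# Toy simulation of a confounded "natural experiment": a hidden trait
# drives both selection into treatment and the outcome itself, so the
# naive difference in means is badly biased. All quantities are invented.
import random

random.seed(0)
true_effect = 0.0  # the treatment genuinely does nothing

treated, control = [], []
for _ in range(100_000):
    motivation = random.gauss(0.0, 1.0)   # unobserved confounder
    takes_treatment = motivation > 0.0    # confounder drives selection
    outcome = true_effect * takes_treatment + motivation + random.gauss(0.0, 1.0)
    (treated if takes_treatment else control).append(outcome)

naive_estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"naive estimate: {naive_estimate:.2f}  (true effect: {true_effect})")
```

Nothing in the two groups' outcome data alone warns you that the gap is selection rather than treatment; that is why such findings can "dissolve upon careful investigation."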

The only partial solution is to run actual experiments.  Much of the behavioral economics movement, in fact, can be seen as the effort to conduct true controlled experiments, even if they tend to be in very narrow circumstances that then raise the problem of how we can reliably generalize them to a sufficient degree of abstraction to support large-scale policy decisions.  Larger-scale experiments – more like clinical drug trials – in which substantial numbers of people are randomly assigned to treatment and control groups for long periods of time, are the next logical step, and in my view should become an increasingly important tool for economics and social science.  Even such experiments are imperfect, but they would represent the closest we can come to reliable, though provisional, knowledge.
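Re-running the same toy population with genuine random assignment (again my own illustration) shows why the experimental route is different in kind: a coin flip, unlike self-selection, balances the hidden trait across the two groups, and the simple difference in means now lands near the true effect:

```python
# Same confounded population as before, but with coin-flip assignment:
# randomization balances the unobserved trait across groups, so the
# difference in means recovers the true effect. All quantities invented.
import random

random.seed(1)
true_effect = 2.0  # this time the treatment really does something

treated, control = [], []
for _ in range(100_000):
    motivation = random.gauss(0.0, 1.0)   # still unobserved
    assigned = random.random() < 0.5      # random assignment, not selection
    outcome = true_effect * assigned + motivation + random.gauss(0.0, 1.0)
    (treated if assigned else control).append(outcome)

estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"randomized estimate: {estimate:.2f}  (true effect: {true_effect})")
```

Even here the knowledge is provisional in the article's sense: the estimate is noisy, holds for this population and this treatment, and generalizing it to policy is a further leap.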

This perspective calls upon us to take a very humble view of economics and social science.  We scratch and claw to find small, isolated and provisional causal insights in a sea of uncertainty.  The only honest answer to most important questions is usually “We don’t know”.  Said differently, the Hayekian critique of planning and scientism remains valid, and the Law of Unintended Consequences remains in force.

Without wanting to restate the (small l) libertarian program, the basic political conclusion is that we should be extremely skeptical of government-sponsored intrusions in market or social arrangements.  We need an open society with a high degree of freedom because its “waste” is required to discover what works.  Call it the Conservatism of Doubt.
