A commenter on one of Gabriel's posts made the point that it's hard for Megan's regular readers to have a sense of where each of the guest posters is coming from in general, and therefore how to see our various posts in context. This makes sense to me.
For my final guest post, I'll try to provide context. Rather than going through some long framework, however, I'll give a very quick summary, and then tell a story that I hope illustrates what I'm trying to say.
In his review of my book at The New Republic, Eric Posner made a great overarching point:
The book is less interested in the RFT than in the limits of empirical knowledge. Given these limits, what attitude should we take toward government?
Just so. I summarize the thesis of Uncontrolled in the Introduction as five points:
1. Nonexperimental social science currently is not capable of making useful, reliable, and nonobvious predictions for the effects of most proposed policy interventions.
2. Social science very likely can improve its practical utility by conducting many more experiments, and should do so.
3. Even with such improvement, it will not be able to adjudicate most important policy debates.
4. Recognition of this uncertainty calls for a heavy reliance on unstructured trial-and-error progress.
5. The limits to the use of trial and error are established predominantly by the need for strategy and long-term vision.
What follows is a practical example drawn from experience that illustrates why I think markets, democracy, and other forms of unstructured trial and error are so critical to improvement and growth. Adam Smith famously used a pin factory to illustrate his theories. For my much humbler task, I'll use a more contemporary example: the invention of Software-as-a-Service.
The traditional method of installing large-scale software for major corporations is a complicated and expensive process. Engineers come out to the customer's data center and load software onto computers. Large teams of people connect this to the rest of the company's information systems, and many other people maintain it. Amazingly, the cost of installation and support is often many times greater than the cost of the software itself.
It seems quaint in a world of cloud computing, but in the early 1990s it was only visionaries who believed software companies could operate their own data centers, and simply allow customers to access the software remotely via the Internet, much as consumers can access a web site. This was a modern version of the decades-old idea of "timesharing." The innovative idea was to exploit the public Internet infrastructure in order to make it much cheaper. A series of well-funded start-ups were launched to attempt this during the dot-com boom of the 1990s, but they generally failed because they were trying to force-fit both software and business methods that had evolved in the heritage environment of on-premise software into this new environment.
When some friends and I started a software company in 1999, we used current software development languages and tools that were designed to allow access via the Internet. This was entirely incidental to us, since we assumed that we would ultimately install our software in the traditional manner. When we delivered a prototype to an early customer, they didn't have IT people to install it, so we allowed our customer temporary access to our software via the Internet -- that is, they could simply access it much as they would access any web site.
As they used it, two things became increasingly clear. First, this software made their company a lot of money. Second, despite this, the IT group had its own priorities, and it would be very difficult to get sufficient attention to install our software any time soon. Our customer eventually floated the idea to us of continuing to use our software via the Internet, while paying us "rent" for it. We realized that we could continue this rental arrangement indefinitely, but this would mean less up-front revenue than if we sold the software. We were running low on money, and had few options.
With our backs to the wall, we theorized that eliminating the need for installation could radically reduce costs, if we designed our company around this business model in ways different from how traditional software companies were organized. Our engineering, customer support, and other costs could be much lower because we wouldn't have to support software that operated in many environments, just one. Sales and marketing could be done in a radically different, lower-cost way when selling a lower-commitment rental arrangement. We experimented with this approach with our first several customers. Eventually, we made it work, and we committed to this approach. But this decision was highly contingent: the product of chance, necessity and experimentation.
At about this same time, unbeknownst to us and others, a few dozen other disparate start-up companies were independently making the same discovery that this model could work after all. The key was to design new software that was intended from the start for this environment, and to design the business process of the software company -- how the salesforce was structured, how the product was priced, how customer support was delivered, and so on -- for this new environment as well. By about 2004, the delivery of software over the Internet, by then renamed Software-as-a-Service (SaaS) by industry analysts, was clearly a feasible business model.
The SaaS model is now seizing large-scale market share from traditional software delivery. Industry analysts estimate SaaS is growing six times faster than traditional software, and that 85 percent of new software firms coming to market are built around SaaS.
Many things about our company turned out differently than we had expected. Settling on the SaaS delivery method was just one example of this, and in fact was not even the most central -- it is just a simple one to explain. The Hayekian knowledge problem is not a mere abstraction. Our innovations that have driven the greatest economic value uniformly arose from iterative collaboration between ourselves and our customers to find new solutions to hard problems. Neither thinking through a chain of logic in a conference room, nor simply "listening to our customers," nor taking guidance from analysts distant from the actual problem ever did this. External analysis can be useful for rapidly coming up to speed on an unfamiliar topic, or for understanding a relatively static business environment. But at the creative frontier of the economy, and at the moment of innovation, insight is inseparable from action. Only later do analysts look back, observe what happened, and seek to collate this into categories, abstractions and patterns.
More generally, innovation appears to be built upon the kind of trial-and-error learning mediated by markets. It requires that we allow people to do things that seem stupid to most informed observers -- even though we know that most of these would-be innovators will in fact fail. This is premised on epistemic humility. We should not unduly restrain experimentation, because we are not sure we are right about very much. In order for such a system to avoid becoming completely undisciplined, however, we must not prop up failed experiments. And in order to induce people to take such risks, we must not unduly restrict huge rewards for success, even as we recognize that luck plays a role in success and failure.
This is why attempts to plan it out, control it, or channel it won't "work". That is, they might work to tame innovation and make it more palatable to the real human beings with whom we are in society, but the idea that we get a free lunch -- that we can plan the evolution of society so as to have less uncertainty and more growth -- is mostly a fantasy.
This sets up something I explore at length in the book: the fundamental tension between innovation and cohesion, which I see as the central trade-off that has undergirded most of the key political economy debates of at least the last 30 years. I make some limited suggestions that I believe could slightly alleviate it.