Why are neighborhoods so often segregated? Why did tense but stable situations in the Balkans and Rwanda suddenly tip into genocide? Why did crime rates across the country drop precipitously in the 1990s? These questions may seem unrelated, but insight into all of them has been garnered from the relatively new science of artificial societies. In "Seeing Around Corners," Atlantic correspondent Jonathan Rauch surveys the field, and argues that A-societies may become a powerful tool for peering into some historical mysteries and societal trends that have until now resisted explanation. As Rauch writes,
Researchers are creating cyber-models of ancient Indians of Colorado's Mesa Verde and Mexico's Oaxaca Valley; they are creating virtual Polynesian societies and digital mesolithic foragers; they are growing crime waves in artificial neighborhoods, price shocks in artificial financial markets, sudden changes in retirement trends among artificial Social Security recipients, and epidemics caused by bioterrorism.
What the creators of artificial societies have learned is that even by setting just a few simple rules for how human beings interact, they can create "societies" of great complexity—ones that in many ways mirror what's going on in the real world. These models imply that there are certain patterns into which human beings unconsciously arrange themselves—and the models help to identify what those patterns are. A-societies, of course, will not be able to tell us exactly when the next genocide will happen, or precisely when the next crime wave will crest. But, as Rauch points out, they may help us realize the sorts of targeted interventions that would be most effective.
Jonathan Rauch and I spoke by telephone on March 21.
In Thomas Schelling's simulation of a segregated neighborhood, people who may very well have no desire to live in a neighborhood that's all white or all black inevitably end up getting exactly that, because they want a few of their neighbors to be of the same background. In a way the study of artificial societies seems to imply that society, or certain aspects of it, is organized as much by certain immutable rules as it is by free will. Do these simulations imply that we have less control over how society organizes itself than we'd thought?
Yes, but the first thing to say is that what these models show is that our conventional either/or dichotomy between immutable rules and free will, as you put it so aptly, is wrong. In fact, there is an entire third realm in which society is shaped—in its large shapes—neither by free will nor by immutable rules. On the one hand, in Schelling's demonstrations people are making choices that they do not in any way regard as racist but that produce segregated outcomes. So plainly you don't have a free-will situation where people are trying to create a segregated society or even want to live in one. But on the other hand, it's not immutable rules either, because no two of these simulations are exactly alike. That's true even in as simple a model as Schelling's. Random events take over. Although you can generalize that the outcome will be segregated, you can never predict exactly where the neighborhoods will wind up or where particular people will be living. So it's neither immutable rules nor free will; it's something different from either. It's unpredictable, self-ordered complexity. To me, that's the real revelation of these kinds of models: we have much less control than we might expect over societies' behavior, but more power to predict large-scale patterns and outcomes than we might have thought.
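Schelling's demonstration is simple enough to reproduce in a few dozen lines. The sketch below is a minimal, illustrative version, not Schelling's original checkerboard procedure, and the grid size, vacancy rate, and 30-percent happiness threshold are assumptions chosen for illustration: two groups of agents occupy a grid, and any agent with fewer than 30 percent like-colored neighbors moves to a random vacant cell.

```python
import random

def run_schelling(size=20, vacancy=0.1, threshold=0.3, steps=50, seed=0):
    """Minimal Schelling-style segregation sketch (illustrative, not
    Schelling's original procedure). Agents of two colors occupy a grid
    that wraps at the edges; an agent is unhappy when fewer than
    `threshold` of its occupied neighbors share its color, and unhappy
    agents move to a random vacant cell. Returns the average fraction
    of like-colored neighbors after `steps` sweeps."""
    rng = random.Random(seed)
    cells = [1, 2] * int(size * size * (1 - vacancy) / 2)   # two groups
    cells += [0] * (size * size - len(cells))               # 0 = vacant
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def neighbors(r, c):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    yield grid[(r + dr) % size][(c + dc) % size]

    def like_fraction(r, c):
        occupied = [n for n in neighbors(r, c) if n]
        if not occupied:
            return 1.0  # no neighbors, nothing to be unhappy about
        return sum(n == grid[r][c] for n in occupied) / len(occupied)

    for _ in range(steps):
        vacant = [(r, c) for r in range(size) for c in range(size)
                  if not grid[r][c]]
        movers = [(r, c) for r in range(size) for c in range(size)
                  if grid[r][c] and like_fraction(r, c) < threshold]
        rng.shuffle(movers)
        for r, c in movers:
            vr, vc = vacant.pop(rng.randrange(len(vacant)))
            grid[vr][vc], grid[r][c] = grid[r][c], 0
            vacant.append((r, c))

    scores = [like_fraction(r, c) for r in range(size) for c in range(size)
              if grid[r][c]]
    return sum(scores) / len(scores)
```

On a freshly shuffled grid the average like-neighbor fraction starts near one half; after the unhappy agents settle, it typically climbs well above that, even though no individual ever demanded a majority of like neighbors. That gap between the mild individual rule and the strongly segregated outcome is the point of the demonstration.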
I would think it would be a big leap to go from replicating phenomena already known about the world to predicting what could happen in the future. How much potential do A-societies have in this regard? Are you aware of any interesting things the A-societies have predicted?
The answer to the second question is easy—No. There are no real-world future predictions based on these models. In fact, this whole science is so brand new that it's just beginning to cope with the difficult job of predicting the past. As we see from the Anasazi model, even predicting the past is very difficult. What these models show is that the only way to predict the future is actually to live through it. Each of these models is different. Each run is its own reality, and no matter how powerful your computer is, or how smart you are, or how good your intuition is, knowing the rules and the starting point of a model never allows you to predict the exact outcome, and in fact rarely allows you to foresee the surprises that are likely to happen. What the models prove, then, is that we will never be able to predict the future in social science with any exactitude. What they may do is give us some sense of the types of surprises, the unexpected phenomena, that are likely to come along. That doesn't mean we know when they'll happen, or even that they'll ever happen at all. But we'll have some sense for how we may be blindsided, and where the tipping points may be out there in which small interventions can reap large rewards. That's not predicting the future, but it's way better than anything we've got right now.
Are there examples of ways that artificial societies have challenged conventional ideas of how society organizes itself?
I think that the results of this stuff, in some sense, challenge every idea of how society organizes itself, in that we tend to have an intuitive belief that if we understand individual people and ourselves and our family, then we understand society. Social scientists have been casting doubt on that for years. But these models all take it a step further, and they say that society really does have a life of its own, a mind of its own, a biography of its own. It's something that individuals can't control. Nor can we intuit the outcomes. So I think that shakes the thinking of a lot of social science, which is based on equations and straight lines and curves and projections. In some ways it's worth pointing out that this also confirms a lot of existing social science. For example, we know from experience that broken-windows policing actually works, because it worked in New York, and it's worked in a number of other places. These models are going to help show us why it works—how it is that just getting a few key offenders at a few key moments can completely reverse the course of crime. Then it no longer looks like hocus pocus—we start to get some real understanding of what's going on out there in the world.
It does seem that artificial societies could be especially effective in shaping law enforcement. What are some other ways that you think the lessons could be applied to the real world?
I think law enforcement is going to be one important field, because here's an area where by definition you've got a small police force in a very large society trying to figure out how to use its influence to stop crime. That's a perfect situation for a model like this. Another is warfare. I've not done the reporting on this, but I understand the Pentagon is deeply interested in this kind of thing, because it helps lift the so-called fog of war. In real life you never seem to know how anything ever comes out on the battlefield. But if you have an actual war inside a computer, using agents that are different from each other and follow different rules and have different incentives and so on, you may be able to get a much better sense, for instance, of where to strike, what the key targets are, and when to strike in order to have a large effect with a small intervention.
One of the things that amazed everybody in the Afghan war is that it looked so terrible for so long. Weeks and weeks went by, we bombed and bombed, nothing happened, all the pundits started to say this is never going to work—we're going to have to do a large-scale ground invasion. Then all of a sudden we woke up, looked at the newspapers, and the other side had completely buckled. They'd just totally fallen apart. But it wasn't as though we had done something dramatically different—dropped a nuclear bomb on them or something. And everyone was sitting around going, That's a sudden reversal; it's completely unexpected. Well, in a linear world you're surprised by that. In a world of complexity, it looks very much like the sorts of avalanches that I write about in my article, where you drop sand on the table and the pile gets bigger and bigger, just as you expect, then all of a sudden, a few more grains of sand and the pile collapses—you get an avalanche. What's going on there? The news accounts suggested that we reached a point where every time the Taliban would mass in their pickup trucks for a charge, we would drop bombs on them and blow up their trucks one by one before they even reached the enemy lines. That only had to happen so many times before the Taliban said, Okay, we give up, we have no way to fight this. So it may be that this kind of modeling will be able to tell us where those interventions are that undermine our enemy's ability to fight. Also on the war front, I just learned the other day that Joshua Epstein and others at the Brookings Institution are now modeling a bioterrorist attack, a smallpox attack. They're still just starting to put it together, but even the very earliest models turn out to show that a step as simple as staying home if someone in your family gets smallpox can drastically reduce the spread. 
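The sand-on-the-table avalanche Rauch invokes is usually demonstrated with the Bak-Tang-Wiesenfeld sandpile model, and it too fits in a short program. The sketch below uses the standard toppling rule, with the grid size and number of grains chosen arbitrarily for illustration: grains drop one at a time on the center cell, any cell holding four or more grains topples and passes one grain to each neighbor, and grains that fall off the edge are lost.

```python
def drop_sand(size=11, grains=2000):
    """Minimal Bak-Tang-Wiesenfeld sandpile sketch. Grains land one at
    a time on the center cell; any cell holding 4 or more grains
    topples, sending one grain to each of its 4 neighbors (grains that
    fall off the edge are lost). Returns the number of topplings each
    dropped grain triggered -- the size of its avalanche."""
    grid = [[0] * size for _ in range(size)]
    mid = size // 2
    sizes = []
    for _ in range(grains):
        grid[mid][mid] += 1
        toppled = 0
        stack = [(mid, mid)] if grid[mid][mid] >= 4 else []
        while stack:
            r, c = stack.pop()
            if grid[r][c] < 4:
                continue  # already relaxed by an earlier topple
            grid[r][c] -= 4
            toppled += 1
            if grid[r][c] >= 4:
                stack.append((r, c))  # still unstable, topple again
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < size and 0 <= nc < size:
                    grid[nr][nc] += 1
                    if grid[nr][nc] >= 4:
                        stack.append((nr, nc))
        sizes.append(toppled)
    return sizes
```

Most grains cause no avalanche at all; every so often one grain, identical to all the others, sets off a cascade across much of the pile. The distribution of avalanche sizes is heavy-tailed, which is why a collapse can look sudden even though the rule never changed—the same shape of surprise as the Taliban's abrupt buckling.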
They've also found other surprising results that I think are likely to be helpful in figuring out the most effective and efficient way to counter an attack.
Are there certain fields that have embraced A-societies more than others? Are there other fields that could really benefit from this sort of thinking?
These methods can be applied in almost any branch of social science. Political scientists are trying to grow political parties. Anthropologists and archaeologists are growing ancient civilizations. Economists are looking at retirement behavior based on how you change people's pensions. Suppose you want people to retire early: what kinds of changes do you make in pension policy? It turns out you can have cascades and avalanches and that sort of thing in pretty much any field you can name. However, in each field, it's still a very small number of people who are aware of this kind of thinking. One person I interviewed said that in political science, this kind of thinking is just on the cusp of being the next big thing. I'm told indirectly that in many fields there is a lot of resistance from tradition-minded people who do equations on blackboards and multiple-regression analyses. They think that all of this is just creating toy worlds and wind-up societies. My feeling is that resistance is in some ways appropriate, because it's that kind of skepticism which will force this new kind of science to do the hard work to really nail down its case and make itself useful.
I can see why people might be skeptical at first. It seems like you can take the data and make it result in anything you want—you mimic how a neighborhood segregates itself, or how genocide happens. If you already know where the model should end, what does the process of creating that result really tell you?
This is one of the first questions I had about it, and it's a good skeptical question to ask. If you take the conclusion that you're looking for, and then you just arrange everything in the model to give you that conclusion, what have you done? The answer is that in this new world of agent-based modeling, you've done something important, and here's why. What you're looking for is the opposite of what traditional social science tries to do. Traditional social science tries to say, Let's look at all the causes that could create this outcome, add them all up, and see if we explain it. And then we'll do a regression analysis to try to figure out how much of the outcome is attributable to which cause. So basically, they are complexifying the world by accounting for as many different variables as they possibly can to reach an outcome. That's their model of an explanation. This new kind of thinking has a completely different model of what constitutes an explanation. Their idea is, you explain something not by calculating it, but by growing it. That is to say, you don't add variables, you subtract variables. You look for the simplest possible group of rules which create the sort of social patterns that you're looking for. And by finding what the simplest rules are, you're able to throw out all the other rules. You're able to say, Look, we have a little society here of artificial people, and all we have to do to make it segregated is give them a single rule. One rule produces this outcome. That turns out to be extremely informative, even though you already know the outcome you were looking for, because it allows you to say that mild ethnocentricity is enough to explain segregation. You don't need hypotheses about racism, you don't need to assume people are full of hate, you don't need to assume that they're oppressed or that it all has to do with prices or real estate or whatever. 
That doesn't mean you've got the right explanation, but you have shown that the explanation can be as simple as A, B, or C, even if the outcome looks very complex. So in a sense, this inverts the traditional thinking by saying, Here's a very complex outcome. Let's see how simple the rules can be that grow this outcome spontaneously.
You write that according to Joshua Epstein's simulation of artificial genocide, the killings are spurred not by mass hysteria, but by a series of individual decisions whereby certain people responding to local conditions turn violent. How do you separate this from a state-sponsored program of genocide? In the case of Rwanda, at least, people certainly did make these individual decisions to kill, but they were also spurred to kill by the government through radio broadcasts and other methods of mass communication. Is the point of these simulations that there are certain conditions that can quickly tip into genocide and it almost doesn't matter what the trigger is for the genocide as long as these conditions are in place?
I don't think they model that, so I don't know what the rigorous Epstein-type answer would be, except that you've got to go make the model, go create the society, and see what happens. My gut answer is that this kind of thinking does indeed provide some very deep insight into why something like Rwanda would happen. You're unquestionably right—Rwanda was a state-sponsored genocide. But here's the puzzle about Rwanda, or for that matter, Nazi Germany. I think we can assume that the people who were actually devoted to the idea of genocide in either society were a pretty small minority. Most people probably didn't want to commit genocide, as evidenced by the fact that for thousands of years they hadn't done it. So the odd thing is, How is it that this fairly small group of people can get the whole society to either participate in genocide or at least to look the other way during genocide? Of course, this is the great puzzle about Nazi Germany. One answer to Nazi Germany is Daniel Goldhagen's, which is, Most of them actually were genocidal. Most of them did hate the Jews. I've always had my doubts about that. It just doesn't accord with what I've read about human beings in general and about Germans in particular, even in the thirties. They weren't really all that different from Germans in the twenties or Germans in the fifties. Same thing in Rwanda. How did the government get people to do this? Well, what artificial societies are showing is that if you get things right, changing the behavior of a fairly small number of people in the society can tip the whole society into a radically different type of behavior. And that's not assuming the society is either civilized or uncivilized, democratic or non-democratic, religious or non-religious. It's just a bunch of artificial people running around in an artificial environment.
That suggests that even in a country like America, if you've got the right conditions and you figure out where these nodes of influence are and how to manipulate them, then it might be possible that as decent as we think of ourselves as being, you could start a genocide here.
The people who know about what actually happened in Rwanda say that it was an extremely centralized effort, in which for the most part people didn't want to commit genocide, but the people behind the genocide were willing to use force to do several things. First, they wiped out influential people who opposed the genocide. That had a huge amplification effect of intimidating anyone else who might want to publicly oppose the genocide. And then they went around from town to town, very systematically, showing up with militias and saying, "If you don't participate in the genocide, we're going to kill you." And then they had a propaganda machine that further amplified the message and convinced people that the outside world wouldn't intervene. When you add those things together, everyone's afraid for their own life if they don't kill somebody else, so pretty soon, everyone's killing somebody else, and before too long, you've got an enormous genocide on your hands. What would be really nice would be to figure out how to interrupt that kind of cycle—who you go after and how you go after them in order to shut down that chain reaction. The conventional wisdom at the time of the Rwandan genocide and just after is that we couldn't have stopped it without massive military intervention. Basically, put the whole country under lockdown. Well, that may be just wrong. It may be that if you can get to the right people at the right times and stop them from being able to do the things that tip the society, the society never tips. The genocide never happens.
From the models and the article, it seems that once one of these genocides has started, even putting in thousands and thousands of peacekeepers isn't going to do anything, unless perhaps those peacekeepers are very proactive in stopping the violence.
I think you just have to set up the artificial society and see what happens. But my guess is that you'd find that as is always the case with these societies, there's no one outcome—it all depends on the flow of events, because no two cases are alike. I don't think it would be very difficult to find the necessary variables to fairly reliably shut down the genocide. It doesn't mean you shut it down in every case, but in many cases if you target agents that behave in ways X, Y, and Z and put them in jail, you reduce the odds of genocide by, say, 90 percent.
You suggest toward the end of the article that the idea of society is both more concrete than conservatives might think and less malleable than liberals might hope. How might artificial societies play into liberal and conservative ideas of what government can or should do?
I think that the artificial societies require a broad overhaul of the way policymakers think, particularly liberal policymakers. Since at least the sixties, liberals have been, I think, too optimistic and too simplistic about social intervention. And they've given so-called social engineering a bad name, because they assume that societies are basically like giant people, like giant children. If you want it to do more of something, the thinking goes, you just give everybody money and say, "Do more of this," and everybody will do it, and there won't be any nasty surprises. Well, of course, the results of that have been pretty dismal. And the reason is that the intuitive model of what a society is, that it's like a giant human being, turns out to be wrong.
I think that when liberals revise their model to understand that society is complex and self-organizing, we then will have a shot at a much more effective kind of liberalism and a more effective kind of social engineering. But it requires thinking about society in a different way. It requires thinking less about heavy-handed, one-size-fits-all national interventions and thinking much harder about local, targeted interventions. On the conservative side, I think there is similar potential for radical overhaul in thinking, though perhaps not quite as deep.
The idea that we should be focused on finding the right targeted small intervention seems like it's a little more akin to conservative thinking.
Yes, there is a strain of conservative thinking that talks about the relatively greater effectiveness of localism, for example. And, in fact, one of the things that our country is based on is the Founders' understanding, right back to Jefferson and Madison, that governments that are closer to the people, meaning more local and more familiar with the local situation, are also likely to be more effective. That turns out to be true, and these models tend to back that up. Where this thinking challenges conservatives is that there is a large body of conservative thinking premised on the notion that essentially anything you do will backfire. But these models suggest that that's not necessarily true. It may be possible to get handles on things that are much less likely to backfire, and then the pragmatic case against government intervention becomes weaker. There will still, of course, be people who say the government shouldn't intervene because it violates our rights, or what have you. But the notion that government always gets it wrong will be more susceptible to question if we are able to use this kind of thinking to find interventions that are much more likely to succeed while having fewer untoward side effects.
As computing power and sophistication increase, what new directions do you expect artificial societies to be taken in?
I think one of the exciting things about writing about this is that the sky's really the limit here. Many new social science ideas are applicable in one or maybe two fields. This is a kind of thinking that is applicable in almost any field that studies any aspect of human society. It's just a question of imagination, figuring out how many ways you can apply it. Just in the course of this conversation, because of a question you asked, I thought, Katie has kind of a great idea there. Why not model an actual Rwandan genocide, instead of what Epstein does in the model in my article, where people want to kill each other and police are stopping them. Turn it upside down: People don't want to kill each other, but police are forcing them. Then see how much effort it actually takes on the part of the government to create a genocide. Then we might actually learn something about Rwanda. And that's just from our brief conversation.
Do you think about the world in a different way now that you've delved into this field? And if so, how has it changed your world view?
I've come to have much greater respect for the notion of society as an independent actor in human life. I've come to be much more suspicious of the notion that if I think I have good intuition about people, that I then have good intuitions about public policies and society. I have come to understand in a way I didn't before that it's possible at the same time for societies to be both more surprising and more orderly than I ever thought. To me a lot of what this kind of thinking has done is set out a third, unexplored continent between determinism and randomness, where in fact there are patterns in life and in society that we may now have a shot at finding. And to me, that's a real eye-opener.