It used to be simpler. According to the traditional view, a single, long-term-planning self—a you—battles against passions, compulsions, impulses, and addictions. We have no problem choosing, as individuals or as a society, who should win, because only one interest is at stake—one person is at war with his or her desires. And while knowing the right thing to do can be terribly difficult, the decision is still based on the rational thoughts of a rational being.
On this single-self view, our conflicting judgments must mean we are often simply mistaken about what makes us happy. Consider again what happens when we have children. Pretty much no matter how you test it, children make us less happy. The evidence isn’t just from diary studies; surveys of marital satisfaction show that couples tend to start off happy, get less happy when they have kids, and become happy again only once the kids leave the house. As the psychologist Daniel Gilbert puts it, “Despite what we read in the popular press, the only known symptom of ‘empty-nest syndrome’ is increased smiling.” So why do people believe that children give them so much pleasure? Gilbert sees it as an illusion, a failure of affective forecasting. Society’s needs are served when people believe that having children is a good thing, so we are deluged with images and stories about how wonderful kids are. We think they make us happy, though they actually don’t.
The theory of multiple selves offers a different perspective. If struggles over happiness involve clashes between distinct internal selves, we can no longer be so sure that our conflicting judgments over time reflect irrationality or error. There is no inconsistency between someone’s anxiously hiking through the Amazon wishing she were home in a warm bath and, weeks later, feeling good about being the sort of adventurous soul who goes into the rain forest. In an important sense, the person in the Amazon is not the same person as the one back home safely recalling the experience, just as the person who honestly believes that his children are the great joy in his life might not be the same person who finds them terribly annoying when he’s actually with them.
Even if each of us is a community, not all the members should get equal say. Some members are best thought of as small-minded children—and we don’t give 6-year-olds the right to vote. Just as in society, the adults within us have the right—indeed, the obligation—to rein in the children. In fact, talk of “children” versus “adults” within an individual isn’t only a metaphor; one reason to favor the longer-term self is that it really is older and more experienced. We typically spend more of our lives not wanting to snort coke, smoke, or overeat than we spend wanting to do these things; this means that the long-term self has more time to reflect. It is less selfish; it talks to other people, reads books, and so on. And it tries to control the short-term selves. It joins Alcoholics Anonymous, buys the runaway clock, and sees the therapist. As Jon Elster observes, the long-term, sober self is a truer self, because it tries to bind the short-term, drunk self. The long-term, sober self is the adult.
Governments and businesses, recognizing these tendencies, have started offering self-binding schemes. Thousands of compulsive gamblers in Missouri have chosen to sign contracts stating that if they ever enter a casino, anything they win will be confiscated by the state, and they could be arrested. Some of my colleagues at Yale have developed an online service whereby you set a goal and agree to put up a certain amount of money to try to ensure that you meet it. If you succeed, you pay nothing; if you fail, the money is given to charity—or, in a clever twist, to an organization you oppose. A liberal trying to lose a pound a week, for instance, can punish herself for missing her goal by having $100 donated to the George W. Bush Presidential Library.
The natural extension of this type of self-binding is what the economist Richard Thaler and the legal scholar Cass Sunstein describe as “libertarian paternalism”—a movement to engineer situations so that people retain their choices (the libertarian part), but in such a way that these choices are biased to favor people’s better selves (the paternalism part). For instance, many people fail to save enough money for the future; they find it too confusing or onerous to choose a retirement plan. Thaler and Sunstein suggest that the default be switched so that employees would automatically be enrolled in a savings plan, and would have to take action to opt out. A second example concerns the process of organ donation. When asked, most Americans say that they would wish to donate their organs if they were to become brain-dead from an accident—but only about half actually have their driver’s license marked for donation, or carry an organ-donor card. Thaler and Sunstein have discussed a different idea: people could easily opt out of being a donor, but if they do nothing, they are assumed to consent. Such proposals are not merely academic musings; they are starting to influence law and policy, and might do so increasingly in the future. Both Thaler and Sunstein act as advisers to politicians and policy makers, most notably Barack Obama.
So what’s not to like? There is a real appeal to anything that makes self-binding easier. As I write this article, I’m using a program that disables my network connections for a set period of time and does not allow me to switch them back on, thereby forcing me to actually write instead of checking my e-mail or reading blogs. A harsher (and more expensive) method, recommended by the author of one self-help book, is to remove your Internet cable and FedEx it to yourself—guaranteeing a day without online distractions. One can also chemically boost the long-term self through drugs such as Adderall, which improves concentration and focus. The journalist Joshua Foer describes how it enabled him to write for hour-long chunks, far longer than he was usually capable of: “The part of my brain that makes me curious about whether I have new e-mails in my inbox apparently shut down.”
It’s more controversial, of course, when someone else does the binding. I wouldn’t be very happy if my department chair forced me to take Adderall, or if the government fined me for being overweight and not trying to slim down (as Alabama is planning to do to some state employees). But some “other-binding” already exists—think of the mandatory waiting periods for getting a divorce or buying a gun. You are not prevented from eventually taking these actions, but you are forced to think them over, giving the contemplative self the chance to override the impulsive self. And since governments and businesses are constantly asking people to make choices (about precisely such things as whether to be an organ donor), they inevitably have to provide a default option. If decisions have to be made, why not structure them to be in individuals’ and society’s best interests?
The main problem with all of this is that the long-term self is not always right. Sometimes the short-term self should not be bound. Of course, most addictions are well worth getting rid of. When a mother becomes addicted to cocaine, the pleasure from the drug seems to hijack the neural system that would otherwise be devoted to bonding with her baby. It obviously makes sense here to bind the drug user, the short-term self. On the other hand, from a neural and psychological standpoint, a mother’s love for her baby can also be seen as an addiction. But here binding would be strange and immoral; this addiction is a good one. Someone who becomes morbidly obese needs to do more self-binding, but an obsessive dieter might need to do less. We think one way about someone who gives up Internet porn to spend time building houses for the poor, and another way entirely about someone who successfully thwarts his short-term desire to play with his children so that he can devote more energy to making his second million. The long-term, contemplative self should not always win.
This is particularly true when it comes to morality. Many cruel acts are perpetrated by people who can’t or don’t control their short-term impulses or who act in certain ways—such as getting drunk—that lead to a dampening of the contemplative self. But evil acts are also committed by smart people who adopt carefully thought-out belief systems that allow them to ignore their more morally astute gut feelings. Many slave owners were rational men who used their intelligence to defend slavery, arguing that the institution was in the best interests of those who were enslaved, and that it was grounded in scripture: Africans were the descendants of Ham, condemned by God to be “servants unto servants.” Terrorist acts such as suicide bombings are not typically carried out in an emotional frenzy; they are the consequences of deeply held belief systems and long-term deliberative planning. One of the grimmest examples of rationality gone bad can be found in the psychiatrist Robert Jay Lifton’s discussion of Nazi doctors. These men acted purposefully for years to distance themselves from their emotions, creating what Lifton describes as an “Auschwitz self” that enabled them to prevent any normal, unschooled human kindness from interfering with their jobs.
I wouldn’t want to live next door to someone whose behavior was dominated by his short-term selves, and I wouldn’t want to be such a person, either. But there is also something wrong with people who go too far in the other direction. We benefit, intellectually and personally, from the interplay between different selves, from the balance between long-term contemplation and short-term impulse. We should be wary about tipping the scales too far. The community of selves shouldn’t be a democracy, but it shouldn’t be a dictatorship, either.