The story of humanity is one of extraordinary cooperation but also terrible conflict. We come together to build cities, civilizations and cultures, but we also destroy these through violence against each other and degradation of our environment. Given that human nature is capable of both extremes, how can we design societies and institutions that help to bring out our better, more cooperative instincts?

This question is not limited to humans. Life’s domains are replete with many forms of cooperation, from microbes sharing helpful molecules to dolphins providing aid to the injured. This kind of “altruistic” behavior—helping others at one’s own expense—presents an evolutionary puzzle. As Charles Darwin put it in *The Descent of Man* (1871): “He who was ready to sacrifice his life ... rather than betray his comrades, would often leave no offspring to inherit his noble nature.” The question then becomes, what kinds of conditions lead to the evolution of cooperative behavior, when we would normally expect selfishness to prevail?

Ideas about evolution and human nature can be difficult to test in the laboratory. However, insight can come from a surprising place: mathematics. The idea is to create a mathematical model: a cartoon picture of the real world, drawn in the language of math. Mathematical analysis can then provide a kind of “instant experiment” to test an idea on its theoretical merits.

Of course, since any mathematical model excludes some features and oversimplifies others, we must be careful not to draw overly broad conclusions. History is littered with utopian ideas that looked great on paper but collapsed in practice. Still, mathematical modeling can be quite effective in separating promising ideas from those that are conceptually flawed.

Recently, I led a team of investigators to mathematically model how the structure of a society can encourage or suppress the evolution of cooperative behavior. We represented structure as a network, in which every individual is linked to a certain set of “neighbors.” Links can be strong, as in the case of a close friend or family member, or weak, as for a rarely seen acquaintance.

Individuals can cooperate, helping their neighbors at a cost to themselves, or not. This choice is an example of what game theory calls the “prisoner’s dilemma.” Each individual, if acting in pure self-interest, would choose not to cooperate. Yet cooperation by everyone leads to greater prosperity for all.
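
The dilemma is easy to see in miniature. Below is a sketch of the "donation game" version of the prisoner's dilemma; the numbers `b` and `c` are illustrative values, not figures from our model.

```python
# Donation game: a cooperator pays cost c to give benefit b to the other
# player. The values of b and c here are illustrative.
b, c = 3.0, 1.0

def payoff(me_cooperates, other_cooperates):
    """Payoff to one player, given each player's choice."""
    return (b if other_cooperates else 0.0) - (c if me_cooperates else 0.0)

# Whatever the other player does, not cooperating pays more ...
assert payoff(False, True) > payoff(True, True)
assert payoff(False, False) > payoff(True, False)
# ... and yet mutual cooperation beats mutual defection.
assert payoff(True, True) > payoff(False, False)
```

Pure self-interest thus points to defection, even though both players would be better off cooperating.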

The two strategies, cooperation and noncooperation, spread through the network as individuals imitate, or learn from, their neighbors. Individuals are more likely to imitate neighbors who do better in the prisoner’s dilemma. Over time, one strategy will win out: Society will converge to a state where either everyone cooperates or no one does.

An earlier study examined a simple case of this model, in which each individual has the same number of neighbors. Its authors found that, for cooperation to flourish, the benefit-cost ratio of cooperation must be greater than the number of neighbors per individual. For example, if everyone has exactly five neighbors, cooperation succeeds only if it provides more than five times as much benefit as the cost a cooperator pays. But while this is a beautiful result, its applicability is limited: In typical real-world networks, individuals differ widely in their number of neighbors, with some having a great many and others very few.
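
As code, the earlier rule is a one-line check (the function name and numbers are illustrative):

```python
# Earlier rule for networks in which everyone has exactly k neighbors:
# cooperation is favored only when the benefit-cost ratio exceeds k.
def favored_on_regular_network(benefit, cost, k):
    return benefit / cost > k

# The five-neighbor example from the text: the benefit must exceed
# five times the cost a cooperator pays.
assert favored_on_regular_network(benefit=5.5, cost=1.0, k=5)
assert not favored_on_regular_network(benefit=4.5, cost=1.0, k=5)
```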

We found a way to calculate whether cooperation is favored on *any* network. The key quantity is the *critical benefit-cost ratio*. If this ratio is three, for example, then any cooperative behavior providing more than three times as much benefit as it costs is favored. We showed how to calculate the critical benefit-cost ratio of any given network by solving a system of linear equations (a mathematically straightforward task). The smaller this ratio, the easier cooperation is to achieve. But for some networks, this ratio is negative, which means that cooperation is *never* favored to evolve, no matter how large the benefit.
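
For the curious, here is a sketch of what such a calculation can look like. The specific formulation below — expected meeting times of random walkers on the network, obtained from a linear system and combined as t₂/(t₃ − t₁) — is drawn from the published study rather than spelled out in this essay, so treat it as one concrete instance; the function name and test networks are illustrative.

```python
import numpy as np

def critical_ratio(A):
    """Sketch: critical benefit-cost ratio for a network with (weighted)
    adjacency matrix A, via coalescing random walks. tau[i, j] is the
    expected time for walkers started at nodes i and j to meet, when one
    walker (chosen at random) takes a step each unit of time."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    w = A.sum(axis=1)            # (weighted) number of neighbors
    P = A / w[:, None]           # one-step random-walk probabilities
    pi = w / w.sum()             # stationary distribution of the walk

    # Solve the linear system for the meeting times tau:
    #   tau[i, i] = 0
    #   tau[i, j] = 1 + (1/2) * sum_k (P[i,k] tau[k,j] + P[j,k] tau[i,k])
    idx = lambda i, j: i * n + j
    M = np.eye(n * n)
    rhs = np.zeros(n * n)
    for i in range(n):
        for j in range(n):
            if i != j:
                rhs[idx(i, j)] = 1.0
                for k in range(n):
                    M[idx(i, j), idx(k, j)] -= 0.5 * P[i, k]
                    M[idx(i, j), idx(i, k)] -= 0.5 * P[j, k]
    tau = np.linalg.solve(M, rhs).reshape(n, n)

    # t_m: expected meeting time between a randomly chosen node and the
    # endpoint of an m-step random walk starting from it.
    def t(m):
        Pm = np.linalg.matrix_power(P, m)
        return float(pi @ (Pm * tau).sum(axis=1))

    return t(2) / (t(3) - t(1))
```

Under these assumptions, a fully connected network of five individuals yields a ratio of −4 (cooperation is never favored), while on a large ring, where everyone has two neighbors, the ratio approaches 2 — recovering the earlier simple rule for that special case.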

So which networks are best for promoting cooperation? Cooperation flourishes best when each individual has strong, reciprocated connections to a small number of others. In this case, cooperation spreads locally, along these connections, leading to clusters of cooperators who share benefits with each other. In contrast, if all individuals are equally connected to all others, the benefits of cooperation become diluted in the sea of noncooperators, and the behavior cannot spread. Thus, for cooperation to thrive, a few strong ties are better than a myriad of weak ones.

Humanity faces a number of unprecedented challenges. To respond to crises such as climate change, we must cooperate on a global scale. Mathematical modeling can help us design structures and institutions to make this cooperation possible. According to our model, open forums such as Twitter, in which anyone can interact with anyone else, might be great for sharing information, but terrible at promoting cooperation between users. Institutions that encourage fewer, stronger connections might have a better shot at getting individuals to work together for their common good.

This work is one step in a larger research program to identify how structures, networks and interaction patterns can promote cooperation in biology and in society. Our model includes many simplifying assumptions that must be probed and tested to determine how widely our results apply. Much more work needs to be done—on paper, on computers, in the laboratory, and especially in the real world—to understand how we can design the networks that will best empower us to meet our collective challenges. Still, our simple, abstract model suggests a remarkably intuitive principle: The success of global cooperation depends on the strength of our *local* connections.

This post appears courtesy of Aeon.
