Considering that barely one-third of Americans can name their U.S. House representative, you’d think that the size, shape, and makeup of their voting district wouldn’t matter much to them. But the issue of gerrymandering—manipulating elections by redrawing district boundaries to favor one party over another—has suddenly become a political flash point. Cases from several states are winding through the courts, and a new mathematical tool designed to detect gerrymandering has provided real hope of stamping it out forever.

There’s just one problem: This tool has serious, perhaps insurmountable flaws.

Within the past few weeks, courts in both North Carolina and Pennsylvania have declared their states’ U.S. House districts to be unconstitutional as drawn, the first time in history that courts have struck down election maps for partisan gerrymandering. (Courts have struck down racial gerrymanders before.) In October 2017, the U.S. Supreme Court also heard a case on partisan gerrymandering for Wisconsin’s state-assembly districts, and will issue a potentially landmark ruling in the next few months.

Why the flurry of attention? Democrats are mad, for one thing. Although both sides take advantage of gerrymandering when they can, various analyses have concluded that the Republican Party gained between 20 and 28 seats in the House between 2012 and 2016 due solely to gerrymandering.

More importantly for the courts, a technical advance has given challenges to gerrymanders new life. When the Supreme Court considered partisan gerrymandering cases in 2004 and 2006, it declined to throw out the maps in question, in part because the majority found the arguments against the alleged gerrymanders too speculative and hypothetical. But Justice Anthony Kennedy suggested in 2004 that if someone could develop a “workable standard”—a simple tool to measure whether gerrymandering actually occurred—he might rule differently. Political scientists and lawyers have been scheming ever since, and have recently cooked up exactly what Kennedy asked for: a statistic called the efficiency gap.

The efficiency gap measures so-called wasted votes. From a party’s point of view, a vote is wasted in one of two cases. Democrats who vote in districts that lean heavily Republican are wasting votes, since they’re supporting losing candidates. But Democrats who vote in heavily Democratic districts are also wasting votes, since every vote beyond the one that clinches the majority doesn’t contribute to the victory. (The same goes for Republican voters.) A Machiavellian party would therefore try to waste as many votes from the other side as possible, either by “packing” votes (stuffing opposition voters into a few districts they win overwhelmingly) or “cracking” votes (spreading them thinly across many districts so they fall just short in each). The efficiency gap captures both packing and cracking in a single number, whose magnitude ranges from 0.0 (no partisan advantage) to 0.5 (the most lopsided advantage possible).
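To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. It assumes a straight two-party race, counts a winner’s surplus as every vote beyond half the district’s total, and uses invented vote counts purely for illustration:

```python
def efficiency_gap(districts):
    """Efficiency gap for a list of (dem_votes, rep_votes) pairs, one per district.

    Wasted votes are every vote cast for a losing candidate, plus every vote
    for a winner beyond the bare majority needed to carry the district.
    Returns (Democratic waste - Republican waste) / total votes, so a positive
    value means the map worked against Democrats.
    """
    wasted_dem = wasted_rep = total_votes = 0.0
    for dem, rep in districts:
        district_total = dem + rep
        needed_to_win = district_total / 2.0
        if dem > rep:                          # Democrats carry the district
            wasted_dem += dem - needed_to_win  # surplus winning votes
            wasted_rep += rep                  # all losing votes
        else:                                  # Republicans carry the district
            wasted_rep += rep - needed_to_win
            wasted_dem += dem
        total_votes += district_total
    return (wasted_dem - wasted_rep) / total_votes

# Hypothetical four-district state: Democrats packed into one district and
# cracked just below a majority in the other three.
example = [(90, 10), (45, 55), (45, 55), (45, 55)]
print(efficiency_gap(example))  # 0.375, a large gap favoring Republicans
```

In the toy example, Democrats win a majority of the votes statewide but only one of the four seats, and the gap registers that imbalance.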

As a measure of gerrymandering, the efficiency gap has several advantages. It’s intuitive and easy to calculate, requiring little more than arithmetic. It’s also based on actual election results, and can therefore provide evidence of real harm. Perhaps best of all, it boils gerrymandering—an unholy mix of geometry and demographics—down to a “single tidy number,” its inventors, Nicholas Stephanopoulos, a law professor at the University of Chicago, and Eric McGhee, a political scientist at the Public Policy Institute of California, have written. Specifically, they propose flagging a map as a gerrymander if the gap reaches 8 percent or more in state-legislative races, or if it translates into two or more seats in congressional races.
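The proposed test itself amounts to a few lines. The sketch below is only an illustration of the thresholds as described above; the function name is invented, and the conversion from gap to seats (gap times number of districts) follows the authors’ simplified accounting:

```python
def flags_as_gerrymander(gap, num_districts, plan_type):
    """Apply the proposed two-part threshold test to an efficiency-gap value.

    State-legislative plans are flagged when the gap's magnitude reaches
    8 percent; congressional plans are flagged when the gap, converted to
    seats by multiplying by the number of districts, is worth two or more.
    """
    if plan_type == "state":
        return abs(gap) >= 0.08
    if plan_type == "congressional":
        return abs(gap) * num_districts >= 2
    raise ValueError("plan_type must be 'state' or 'congressional'")

print(flags_as_gerrymander(0.10, 99, "state"))          # True: past the 8 percent line
print(flags_as_gerrymander(0.10, 13, "congressional"))  # False: worth only ~1.3 seats
```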

For the courts, the efficiency gap looks like a godsend—exactly what Kennedy pleaded for—and the idea has won widespread praise. “The creators of the [efficiency-gap] standard did about the best possible job of creating what the courts seemed to be demanding: a single judicially manageable indicator of partisan gerrymandering,” writes Moon Duchin, a mathematician at Tufts University who heads the Metric Geometry and Gerrymandering Group. No surprise, then, that supporters believe it could be the “holy grail of election-law jurisprudence.” Indeed, depending on the ruling in the Wisconsin case, where the challengers leaned heavily on the efficiency gap, the Supreme Court could well enshrine it as the standard by which to judge all partisan gerrymandering in the future, shaping election maps for generations to come.

But as Duchin and other mathematicians have shown in a flurry of recent papers, the efficiency gap is deeply flawed.

In some cases, it leads to unintuitive conclusions. For example, you’d think that a state where one party wins 60 percent of the vote and 60 percent of the seats did things right. Not so, according to the efficiency gap. If you do the math, that state would get flagged for extreme partisan gerrymandering—in favor of the losing party. Perversely, then, the easiest remedy might be to rig things so that the minority party gets even fewer seats.
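The math in question uses the commonly cited simplified form of the formula, which assumes equal turnout in every district:

```python
# Simplified efficiency gap under equal turnout:
#   gap = (seat share - 0.5) - 2 * (vote share - 0.5)
# A gap of zero requires the winning party's seat bonus to be double its vote
# margin, so 60 percent of the vote "should" yield 70 percent of the seats.
vote_share = 0.60
seat_share = 0.60
gap = (seat_share - 0.5) - 2 * (vote_share - 0.5)
print(round(gap, 2))  # -0.1: a 10-point gap, past the proposed 8 percent threshold,
                      # registered as favoring the party that lost the election
```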

Another problem is that the efficiency gap takes no account of political geography. In Wisconsin, most Democrats are concentrated in cities like Milwaukee, producing lopsided races there. To the efficiency gap, that could look like nefarious packing, when in reality it’s simple demographics. Similarly, if several nearby districts all swung toward one party in a close election year, that completely natural outcome could get flagged as cracking.

Other critiques of the efficiency gap get more technical. (Many were first posted on arXiv.org, a preprint server where mathematicians and physicists share new work.) But they all boil down to the same thing: Elections are complicated and volatile, and no one number can capture all that. As Duchin writes, “gerrymandering is a fundamentally multidimensional problem, so it is manifestly impossible to convert that into a single number without a loss of information that is bound to produce many false positives or false negatives for gerrymandering.”

Duchin and other critics don’t dismiss the efficiency gap as worthless; they just point out that it’s too simplistic to use by itself. And to be fair, when Stephanopoulos and other lawyers argued against the Wisconsin gerrymander, they laid out a far more nuanced case. Among other things, they addressed the political-geography objections. They also employed computer simulations that produced 200 random but realistic statewide maps, then determined how an election would play out in each case. According to this analysis, the current Wisconsin map favored Republicans far more heavily than any random map did, providing strong evidence of manipulation. (In response to a request for comment, Stephanopoulos pointed to a paper he and McGhee wrote that addresses criticism like Duchin’s.)
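The comparison step of that ensemble approach is simple to sketch, even though the hard part, actually drawing realistic random maps, is omitted here. Assuming each simulated plan is summarized as a list of district-level vote shares for one party (the function names below are hypothetical), the question reduces to how extreme the enacted plan’s seat count looks against the ensemble:

```python
def seats_won(district_vote_shares, threshold=0.5):
    """Count the districts a party carries, given its vote share in each one."""
    return sum(1 for share in district_vote_shares if share > threshold)

def outlier_fraction(enacted_plan, simulated_plans):
    """Fraction of simulated plans that deliver at least as many seats as the enacted map.

    A value near zero means almost no neutrally drawn map performs as well for
    the favored party, which is the kind of evidence offered in the Wisconsin case.
    """
    enacted_seats = seats_won(enacted_plan)
    matches = sum(1 for plan in simulated_plans if seats_won(plan) >= enacted_seats)
    return matches / len(simulated_plans)
```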

Overall, then, the people who study the efficiency gap know its limitations. The real question is whether the courts will also recognize those limits. The efficiency gap is a nice, novel tool. The danger isn’t the efficiency gap itself, but rather the temptation to look only at the efficiency gap, and make it the effective definition of partisan gerrymandering in the future. As Duchin and her colleague Mira Bernstein recently wrote, “a famous formula can take on a life of its own and this one will need to be watched closely.”

