But beyond calculating how quickly, skillfully, and creatively workers can do their jobs, the management approach known as Taylorism can be—and has been—“applied with equal force to all social activities,” as Taylor himself predicted. Today, Taylorism is nearly total. Increasingly sophisticated data-gathering technologies measure performance across very different domains, from how students score on high-stakes tests at school (or for that matter, how they behave in class), to what consumers purchase and for how much, to how dangerous a risk—or tempting a target—a prospective borrower is, based on whatever demographic and behavioral data the credit industry can hoover up.
In her illuminating new book, Weapons of Math Destruction, the data scientist Cathy O’Neil describes how companies, schools, and governments evaluate consumers, workers, and students based on ever more abundant data about their lives. She makes a convincing case that this reliance on algorithms has gone too far: Algorithms often fail to capture unquantifiable concepts such as workers’ motivation and care, and discriminate against the poor and others who can’t so easily game the metrics.
Basing decisions on impartial algorithms rather than subjective human appraisals would seem to guard against favoritism, nepotism, and other biases. But as O’Neil thoughtfully observes, statistical models that measure performance inherit the biases of their creators. As a result, algorithms are often unfair and sometimes harmful. “Models are opinions embedded in mathematics,” she writes.
One example O’Neil raises is the “value-added” model of teacher evaluations, used controversially in New York City schools and elsewhere, which decides whether teachers get to keep their jobs based in large part on what she calls a straightforward but poor and easily fudged proxy for their overall ability: the test scores of their students. According to data from New York City, she notes, the performance ratings of the same teachers teaching the same subjects often fluctuate wildly from year to year, suggesting that “the evaluation data is practically random.” Meanwhile, harder-to-quantify qualities such as how well teachers engage their students or manage their classrooms go largely ignored. As a result, these kinds of algorithms are “overly simple, sacrificing accuracy and insight for efficiency,” O’Neil writes.
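Why would ratings of the same teachers swing so wildly? A toy simulation (my own construction, not from the book, with made-up numbers) illustrates the statistical problem: when a teacher’s true effect on test scores is small relative to the per-student noise in any one class, a rating built on class-average scores barely correlates with itself from one year to the next.

```python
import random

def simulate_ratings(n_teachers=100, class_size=25, n_years=2, seed=0):
    """Hypothetical model: each teacher has a fixed 'true effect' on scores
    (std dev 1), but each student's score also carries noise (std dev 10).
    A year's rating is the class-average score, so it mixes signal and noise."""
    rng = random.Random(seed)
    true_effect = [rng.gauss(0, 1) for _ in range(n_teachers)]
    ratings = []
    for _ in range(n_years):
        year = []
        for t in range(n_teachers):
            # averaging over a small class shrinks the noise only modestly
            noise = sum(rng.gauss(0, 10) for _ in range(class_size)) / class_size
            year.append(true_effect[t] + noise)
        ratings.append(year)
    return ratings

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

y1, y2 = simulate_ratings()
r = correlation(y1, y2)
print(round(r, 2))  # weak year-to-year correlation under these assumptions
```

Under these (invented) parameters the noise in a 25-student class average swamps the spread of true teacher quality, so most of a teacher’s year-to-year rating change is chance, echoing O’Neil’s observation that the evaluation data looks “practically random.”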
How might these flaws be addressed? O’Neil’s argument boils down to a belief that some of these algorithms aren’t good or equitable enough—yet. Ultimately, she writes, they need to be refined with sounder statistical methods or tweaked to ensure fairness and protect people from disadvantaged backgrounds.
But as serious as their shortcomings are, the widespread use of decision-making algorithms points to an even bigger problem: Even if models could be perfected, what does it mean to live in a culture that defers to data, that sorts and judges with unrelenting, unforgiving precision? This is a mentality that stems from Americans’ abiding faith in meritocracy, the belief that the most talented and hardest working should, and will, rise to the top. But such a mindset comes with a number of tradeoffs, some of which undermine American culture’s most cherished egalitarian ideals.