What Is 'Evil' to Google?
Speculations on the company's contribution to moral philosophy
Last week, another distasteful use of your personal information by Google came to light: The company plans to attach your name and likeness to advertisements delivered across its products without your permission.
As happens every time the search giant does something unseemly, Google's plan to turn its users into unwitting endorsers has inspired a new round of jabs at Google's famous slogan "Don't be evil." While Google has deemphasized the motto over time, it remains prominent in the company's corporate code of conduct, and, as a cornerstone of its 2004 Founders' IPO Letter, the motto has become an inescapable component of the company's legacy.
Famous though the slogan might be, its meaning has never been clear. In the 2004 IPO letter, founders Larry Page and Sergey Brin clarify that Google will be "a company that does good things for the world even if we forgo some short term gains." But what counts as "good things," and who constitutes "the world"? The slogan's significance has likely changed over time, but today it seems clear that we're misunderstanding what "evil" means to the company. For today's Google, evil isn't tied to malevolence or moral corruption, the customary senses of the term. Rather, it's better to understand Google's sense of evil as the disruption of its brand of (computational) progress.
Of course, Google doesn't say so in as many words; the company never defines "evil" directly. But when its executives talk about evil, they leave us clues. In a 2003 Wired profile of the company, Josh McHugh noted that while other large companies maintain lengthy corporate codes of conduct, Google's entire policy was summarized by just those three words, "Don't be evil." While there's some disagreement about its origins, Gmail creator Paul Buchheit reportedly conceived of the slogan, calling it "kind of funny" and "a bit of a jab at a lot of the other companies, especially our competitors, who at the time, in our opinion, were kind of exploiting the users to some extent."
In critiques of Google's dubious fidelity to the motto, most assume that the company was once virtuous and has either fallen from grace or been forced to compromise its values for the market. Even ten years ago, McHugh attributed the situation to growth, describing how difficult it was for Google to maintain a Tron-style "fight for the users" ideal as an enormously influential global information company. Others see it as a foil. In his book The Googlization of Everything, Siva Vaidhyanathan observes that the "Don't be evil" slogan "distracts us from carefully examining the effects of Google's presence and activity in our lives." True, but the slogan itself also counts as one such activity. Understanding what evil means to Google might be central to grasping its role in contemporary culture.
In an NPR interview earlier this year, former CEO and executive chairman Eric Schmidt justified the policy with a paradigmatic example:
So what happens is, I'm sitting in this meeting, and we're having this debate about an advertising product. And one of the engineers pounds his fists on the table and says, that's evil. And then the whole conversation stops, everyone goes into conniptions, and eventually we stopped the project. So it did work.
Schmidt admits that he thought it was "the stupidest rule ever" upon his arrival at the company, "because there's no book about evil except maybe, you know, the Bible or something." The contrast between the holy scripture and the engineer's fist is almost allegorical: in place of a broadly construed set of sociocultural values, Google relies instead on the edict of the engineer. That Schmidt doesn't bother describing the purportedly evil project in question only further emphasizes the matter: Whatever the product did or didn't do is irrelevant; all that matters is that Google passed judgment upon it. The system worked. But on whose behalf? Buchheit had explained that early Googlers felt that their competitors were exploiting users, but exploitation is relative. Even back in the pre-IPO salad days of 2003, Schmidt explained "Don't be evil" via its founders' whim: "Evil is what Sergey says is evil."
All moral codes are grounded in something: a religious tradition, a philosophical doctrine, a cultural practice. Google's take on virtue doesn't reject such grounds so much as create a new one: the process of googlization itself. If anything, Google's motto seems to have largely succeeded at reframing "evil" to exclude all actions performed by Google.
Companies like Google actually embody a particular notion of progress rather than populism, one that involves advancing their technology solutions as universal ones. Evil is vicious because it inhibits this progress. If Google has made a contribution to moral philosophy, it amounts to a devout faith in its own ability to preside over virtue and vice through engineering. The unwitting result: We've not only outsourced our email hosting and office suite provisioning to Google, but also our information ethics. Practically speaking, isn't it just easier to let Google manage right and wrong?
We can already find signs of the spread of this lesser-known, engineer's sense of evil in Wiktionary, a crowdsourced dictionary run by the group that operates Wikipedia. There, the word "evil" is revealed to have acquired a domain specific meaning in computing:
evil (computing, programming, slang) undesirable; harmful; bad practice
Global variables are evil; storing processing context in object member variables allows those objects to be reused in a much more flexible way.
Wiktionary's entry is but one specimen, but it is exemplary of Google's seemingly incongruous moral behavior. Understood in the programmer's sense, "evil" practices are simply contraindicated ones: things that might seem reasonable in the moment but will create headaches down the line. This kind of evil is internally focused, selfish even: it's perpetrated against the actor rather than the public. Insofar as "bad practice" evils have victims, those victims are always members of the community of their perpetrators. Like the programmer's stock rejoinder "considered harmful," a phrase that traces back to Edsger Dijkstra's famous 1968 letter "Go To Statement Considered Harmful," a computational evil is one committed against engineering custom or convention.
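The distinction Wiktionary's example draws can be sketched in a few lines of Python (the names and the tokenizing task here are hypothetical, invented purely for illustration): the "evil" version stores its processing context in a global variable, so only one scan can be in flight at a time, while the member-variable version lets instances be reused and interleaved freely.

```python
# "Evil" in the programmer's sense: processing context lives in a
# module-level global, so a second scan clobbers the first one's state.
_offset = 0

def next_word_global(text):
    global _offset
    end = text.find(" ", _offset)
    if end == -1:
        end = len(text)
    word = text[_offset:end]
    _offset = end + 1
    return word

# The "virtuous" alternative: context is stored in object member
# variables, so independent scanners can run side by side.
class Scanner:
    def __init__(self, text):
        self.text = text
        self.offset = 0

    def next_word(self):
        end = self.text.find(" ", self.offset)
        if end == -1:
            end = len(self.text)
        word = self.text[self.offset:end]
        self.offset = end + 1
        return word

a = Scanner("don't be evil")
b = Scanner("good things")
print(a.next_word())  # don't
print(b.next_word())  # good
print(a.next_word())  # be
```

Note that nothing about the global version is wicked toward anyone outside the program; its "evil" is entirely an engineering liability, visited on whoever maintains the code next.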
This, perhaps, is the most helpful way to understand what Google means when it vows not to be evil. As both users of its products and citizens of the world it increasingly influences and alters, we would be wise to see Google's concern for evil as a pragmatic matter rather than an ethical one. It's a self-referential pragmatism, too: "Evils" like GOTO are evil insofar as they prevent a program from being effectively created and maintained, not because they make that program act wickedly. Under this understanding of evil, the virtuous actor is one who does not hinder future action.
It is a subtly different wickedness than the kind the political theorist Hannah Arendt famously called "the banality of evil." For Arendt, evildoers like Adolf Eichmann carry out heinous acts because they accept the premises of their enterprise without question. Banal evil is an evil of bureaucracy rather than fanaticism or sociopathy.
Admittedly, there's probably some bureaucratic banality at work in the Googleplex. No large organization can avoid it. And, contra Arendt, bureaucratic evil can still be individually sociopathic; just think of the stories we've recently read about NSA agents abusing the access to information granted by the government's surveillance system to spy on love interests.
But when you consider Google's bad behavior, the choices that strike many as low are neither banal evils nor sociopathic ones. They are conducted in plain sight, as official service offerings. They are presented through magnanimity rather than savagery. When those choices seem underhanded to us, at odds with the motto "Don't be evil," they do so not because of the policies they entail, such as using your activity on the web as unauthorized endorsements for paid advertising. Those acts are par for the course, alas. All companies, particularly public ones, exist to maximize their own benefit. Google never claimed otherwise; even in 2004 "Don't be evil" mostly clarified that the company wouldn't sprint to short-term gains.
Rather, our discomfort is an expression of the dissonance between our understanding of evil and Google's. Google has managed to pass off the pragmatic pursuit of its own ends as if it were the general avoidance of wickedness. It has invested those ends with virtue, and it has promoted the idea that anything good for Google is also good for society. This is a brazen move, and it's no wonder it takes us by surprise.
The dissonance arises from our failure to understand "evil" as a colloquialism rather than a moral harm. An evil is just a thing that will cause you trouble later on—an engineering impediment. These practical evils are also private ones. Google doesn't make immoral choices because moral choices are just choices made by Google. This conclusion is already anticipated in the 2004 IPO document, which glosses evil as the failure to do "good things." At least we're used to hearing "good" as an ambiguous term that can refer to capacity and validity as much as—and more often than—virtue.
Products and infrastructures eventually degrade, but ideas linger. This verbal frame shift might turn out to be one of Google's lasting legacies. Google Evil, you might call it: evil as counter-pragmatism, and as an official public policy. As a replacement for a moral compass.
This is what makes the whole matter seem so insidious: It's not that Google has announced its intention not to be vicious and failed to meet the bar. Nor is Google, Arendt-style, just manning its station, doing what's expected. No, through its motto Google has effectively redefined evil as a matter of unserviceability in general, and unserviceability among corporatized information services in particular. As for virtue, it's a non-issue: Google's acts are by their very nature righteous, a consequence of Google having done them. The company doesn't need to exercise any moral judgement other than whatever it will have done. The biggest risk—the greatest evil—lies in failing to engineer an effective implementation of its own vision. Don't be evil is the Silicon Valley version of Be true to yourself. It is both tautology and narcissism.
In specific matters like using your name and likeness to surreptitiously improve the company's advertising services, you can take comfort in the fact that Google has considered the matter carefully and adopted a solution on your behalf. Google already knows what's best for you better than you do anyway—it's got all your data to tell it so. And how do you thank Google for this service? By complaining about it like an ingrate, unable to see the bigger picture, even though a multitude of engineers have struck fists against tables in Mountain View to deliver desires so intimate that you can't even recognize them.
As deviant as this logic might seem, perhaps we should thank Google for being so frank about it. At least now we can ponder this strange new evil, roll it around in our heads rather than just Googling for its meaning. And after all, Google's logic is no different from that of other technology companies banging the techno-libertarian drum of freedom and progress through leveraged, privatized Internet services. The Internet industry is committed only to itself, to the belief that its principles should apply to everyone. "Don't be evil" is just another way of saying so.