It is a subtly different wickedness from the kind the political theorist Hannah Arendt famously called "the banality of evil." For Arendt, evildoers like Adolf Eichmann carry out heinous acts because they accept the premises of their enterprise without question. Banal evil is an evil of bureaucracy rather than of fanaticism or sociopathy.
Admittedly, there's probably some bureaucratic banality at work in the Googleplex. No large organization can avoid it. And, contra Arendt, bureaucratic evil can still be individually sociopathic; just think of the stories we've recently read about NSA agents abusing the access to information granted by the government's surveillance system to spy on love interests.
But when you consider Google's bad behavior, the choices that strike many as low are neither banal evils nor sociopathic ones. They are made in plain sight, as official service offerings. They are presented with magnanimity rather than savagery. When those choices seem underhanded to us, at odds with the motto "Don't be evil," they seem so not because of the policies they entail, such as using your activity on the web as unauthorized endorsements for paid advertising. Those acts are par for the course, alas. All companies, particularly public ones, exist to maximize their own benefit. Google never claimed otherwise; even in 2004, "Don't be evil" mostly clarified that the company wouldn't sprint toward short-term gains.
Rather, our discomfort is an expression of the dissonance between our understanding of evil and Google's. Google has managed to pass off the pragmatic pursuit of its own ends as if it were the general avoidance of wickedness. It has invested those ends with virtue, and it has promoted the idea that anything good for Google is also good for society. This is a brazen move, and it's no wonder it takes us by surprise.
The dissonance arises from our failure to hear "evil" as Google means it: a colloquialism rather than a name for moral harm. An evil is just a thing that will cause you trouble later on—an engineering impediment. These practical evils are also private ones. Google doesn't make immoral choices, because to Google a moral choice is simply whatever choice Google makes. This conclusion is already anticipated in the 2004 IPO document, which glosses evil as the failure to do "good things." At least we're used to hearing "good" as an ambiguous term that can refer to capacity and validity as much as—and more often than—virtue.
Products and infrastructures eventually degrade, but ideas linger. This verbal frame shift might turn out to be one of Google's lasting legacies. Google Evil, you might call it: evil as counter-pragmatism, and as an official public policy. As a replacement for a moral compass.
This is what makes the whole matter seem so insidious: It's not that Google has announced its intention not to be vicious and then failed to meet the bar. Nor is Google, Arendt-style, just manning its station, doing what's expected. No, through its motto Google has effectively redefined evil as a matter of unserviceability in general, and unserviceability among corporatized information services in particular. As for virtue, it's a non-issue: Google's acts are by their very nature righteous, a consequence of Google having done them. The company doesn't need to exercise any moral judgment other than the acts it has already carried out. The biggest risk—the greatest evil—lies in failing to engineer an effective implementation of its own vision. "Don't be evil" is the Silicon Valley version of "Be true to yourself." It is both tautology and narcissism.