Four years ago this July, The Atlantic marshaled neurological evidence to show how Google is making us stupid. Turns out, the Supreme Court wasn't spared either. That's the contention at least of an intriguing article in the Virginia Law Review by William & Mary law professor Allison Orr Larsen, who says the tendency of Supreme Court justices to search Google for facts to support their opinions is dumbing down the high court's deliberative process.
The counterintuitive thesis relies on an understanding of how legal cases are made in the high court — more specifically, the Supreme Court's "adversary system." The Boston Globe's Josh Rothman explains Larsen's point:
All legal cases ... rest to some degree on facts, and, traditionally, the courts have relied upon what's called the "adversary system" to deal with them. Either side can introduce factual evidence into argument; if the other side thinks the facts are wrong, they can dispute them in court. Judges try to work with facts which have been vetted by both sides.
But now this traditional process, which sounds slow and cumbersome, has been turned on its head by the growing tendency of Supreme Court justices to Google facts that support their views, which they then introduce to the court. This is troubling, says Larsen, for a couple of reasons. First, it allows justices to introduce source material that isn't vetted through the adversary system, which has resulted in justices relying on bogus, inaccurate information. Larsen writes:
The Justices were prone to rely on stories found in newspapers or magazines of general circulation. These include nationally circulated periodicals like the New York Times and the Wall Street Journal, but also stories from the Sacramento Bee, the Arkansas Gazette, and the Tampa Tribune, to name a few. The justices also independently found and relied on articles in magazines with more niche audiences: like, for example, Musicweek, Digital Entertainment, Mediaweek, Sporting News, and Golf Magazine....
Now, those sources don't sound so terrible to us, but Larsen found several instances in which inaccurate information drawn from them made its way into the courtroom. Those facts, Larsen writes, "do not get vetted by the litigants," so they inevitably meet a lower bar of scrutiny. The second reason this new phenomenon is troubling is the inherent problem with Googling something: the way Google filters our results to cater to our biases:
Liberal and conservative justices, like anyone else, tend to engage in "motivated reasoning," and will seek out information from sources which they know, in advance, will agree with them. Moreover, even the most objective justices can be biased unintentionally, since bias of some kind is the inevitable result of using search websites, which shape the results you find according to your preferences and tastes. Because of the way Google works, Larsen warns, searches "could produce different results for different chambers depending on, for example, the internet history (or Facebook profile!) of the users."
So is it a convincing thesis? Unlike Nicholas Carr's 2008 essay on Google, Larsen doesn't delve into how Google affects our attention span. But her article fits in the same genre of scare stories that acknowledge the unmistakable effect Google has had on our behavior and attach a serious societal problem to it. Larsen's solution is to either prohibit justices from introducing "in-house" research altogether or delegate the job to an independent body (much the way Congress has the Congressional Research Service dig up facts for it). Though it makes sense to subject all the facts to the same scrutiny, the argument that Google adds a layer of bias by filtering an individual's search results is a bit thin. Yes, as search expert Danny Sullivan at Search Engine Land points out, Google results are personalized:
Google has had personalized results since June 2005, results from across the web that are given a ranking boost because they are deemed especially of interest to someone, based on their personal behavior and interests. Without the boost, these results might not have made it into the top listings for a particular search.
But in our experience, the personalized results have never hidden facts we're looking for or turned up results in an ideologically predictable way. And that seems to be the conclusion of people who've looked thoroughly into the issue as well, as Slate's Jacob Weisberg found out last year. Weisberg ran his own test, giving a range of search queries to an ideologically diverse group and comparing the results:
There were only minor discrepancies in the screen shots they sent back for these queries... None of the minor variations aligned in any apparent way with anyone's political views ...
Google's response to this, when I asked for comment, was a statement about the need to balance personal relevance and diversity. "We actually have algorithms in place designed specifically to limit personalization and promote variety in the results page," a spokesman emailed me. Independent analysts aren't seeing a problem, either. Jonathan Zittrain, a professor of law and computer science at Harvard, who studies Web censorship, agrees that Google isn't doing what [critics say] it is. "In my experience, the effects of search personalization have been light," he told me.
The bottom line? Don't blame Google for the dumbing down of the Supreme Court. It's not the ideological reinforcer people think it is. But if justices are increasingly introducing information that isn't vetted before all relevant parties, yes, by all means, adjust the system.
This article is from the archive of our partner The Wire.