How a 14-Minute Video Can Trigger Violence Abroad

A perceived cozy relationship between the U.S. government and Internet companies doesn't help.

One of the more perplexing questions remaining about the protests gripping the Arab world this week is how, exactly, some shoddy, contrived, melodramatic footage disdainful of Muhammad came to stand as a proxy for U.S. opinion on Islam writ large.

Whether or not this Innocence of Muslims is simply a pawn, as seems quite possible, there's a striking theme emerging from the high-level debate over its existence: whether something could have, and should have, been done in the U.S. to get rid of it at some point as it was made, translated into Arabic, and distributed to the world via YouTube and other means.

Start with the Muslim Brotherhood, Egypt's ruling party. In a statement linked to from their Facebook page, the Brotherhood argues that films like the one at issue here "will continue to cause devout Muslims across the world to suspect and even loathe the West, especially the USA, for allowing their citizens to violate the sanctity of what they hold dear and holy." It goes on to contend, with great conviction, that "preparations for this abuse took place plainly, right under the noses of authorities in those countries -- over several months." Two phrases in particular jump out: allowing their citizens and under the noses of authorities. In other words, the powers-that-be in the United States let this happen, and they were in a position to know of it all along.

That's an understanding that Hillary Clinton just can't let stand. She's been trying this week to reframe the debate, such as it is. The film is "disgusting and reprehensible," said a somber Secretary of State in Washington yesterday. "Let me state clearly, and I hope that it's obvious, that the United States government had absolutely nothing to do with this video." It's difficult for people who operate in different speech environments to understand, she has also said, why objectionable content simply can't be disappeared. Never mind First Amendment protections -- and never mind the queasy feeling that such a disappearance might provoke for Americans. "In today's world," said Clinton, "with today's technologies, that is impossible."

But this gets us into some messy business. If the Muhammad video is a symbolic offense, there are gatekeepers who have the power to make it symbolically go away. Google responded this week by restricting access to the offending footage in Libya and Egypt, using a sort of geo-restriction regime that Twitter adopted in January for limiting the effects of global legal objections to specific tweets. Google, which owns YouTube, acted even as it asserted that the footage fell within its Community Guidelines. Those rules are imbued with a sort of Silicon Valley-ish sense of enlightenment: "We're not asking for the kind of respect reserved for nuns, the elderly, and brain surgeons. We mean don't abuse the site." But times are difficult in Libya and Egypt, said Google. Here, Community Guidelines are not enough. It's too risky, the thinking seemed to go, to allow the video to exist on YouTube in places that are already inflamed.

Google's choice taps into an ongoing debate in the U.S. over who governs online spaces, and how they govern them. The U.S. is unique in its speech laws, no doubt. That's well known. But less well known is that we're also unique in the laws that govern who's responsible for what online. One of the most forward-thinking decisions made by U.S. lawmakers in the early days of the Internet was not to require online publishers to police the platforms they offer up to the world. If YouTube had to vet every video that was uploaded to it, YouTube would likely be an impossibility. That approach helped the United States become the inarguable geographic center of the Internet. That approach has been the crux of the participatory web. That approach has allowed Google, YouTube, Twitter, Facebook, blogs, Instagram, you name it, to flourish.

But in the last few years, the execution of that principle has gotten pretty complicated. A couple of years back, you might remember, Sarah Palin was the victim of a Tumblr swarm that managed to trigger Facebook's automatic community deletion mechanism on a post she'd written about the possibility of a Muslim community center near the World Trade Center site. Conservative radio guy Alex Jones has raised the alarm over the blocking of his anti-Obama movies on YouTube. Then there's the case of Dove World Outreach Center, the Florida 'church' whose pastor, Terry Jones, has been implicated in the distribution of the Muhammad footage. Two Septembers ago, the Texas company Rackspace stopped serving up Dove World's website, saying that it was under no obligation to host hate.

Hosting companies are a dime a dozen. But Google runs much of the architecture that the modern web is built on. That gives the company enormous power. The idea that Google (or YouTube) would choose to block people from seeing a video that doesn't break the site's terms of service? That's upsetting to some watchers. "The expectation has been that as long as your content has not violated the law or the terms of service, you can put it up and it stays up," says Eva Galperin, the Electronic Frontier Foundation's International Freedom of Expression Coordinator, on a call. "It's what has allowed them to be such a strong platform for free expression."

Of course, let it be said that it's those who channel offense into violence, deadly violence, whose hands are dirty here. Full stop. But it's a short leap from YouTube's blocking of one offensive video to the idea that YouTube is giving its imprimatur to videos that do run on the site. In its statement, the Muslim Brotherhood also held that it certainly can't condone bloodshed. But it also can't help but notice that "these countries never made a move regarding the abuse until the strong reaction seen across the Muslim world." Make such a move, Google did. Such selective blocking, says EFF's Galperin, "is a slippery slope. If they do this once, they'll be expected to do it again in the future."

Worth mentioning is another, related debate caught up in this swirl: where does the Internet's architecture end and the U.S. government begin? The distinctions have been called into question -- inadvertently, but dangerously. Amazon, you'll remember, responded swiftly when Senator Joe Lieberman suggested that perhaps WikiLeaks shouldn't enjoy the benefits of the company's cloud hosting. One of the most salient concerns during last winter's debate over SOPA and PIPA was that some in Congress seemed willing to militarize the Internet's domain name system, one of the global network's most critical components. Meanwhile, Secretary Clinton and her State Department have embraced both an "Internet Freedom" agenda and a "21st Century Statecraft" approach that involve a certain closeness with major U.S. tech companies. Companies like Google, Facebook, and (to a lesser extent) Twitter are political players now. They sponsor political events, take part in policymaking, host White House productions, and more. There are those who see, in all that interaction, the U.S. government and U.S.-based Internet companies forming one big, networked, mutually validating community. The Internet doesn't quite seem the place apart it once did.

As always, those looking to the Internet for reasons to be offended are going to find them. It's not as if a more informed global conversation about the nuances of Section 230 of the U.S. Communications Decency Act is what's missing here. But there's plenty of fodder for rhetoric, and rhetoric is often plenty enough to get people riled up. That seems especially true when it comes to the online space. On the Internet, the old joke goes, no one knows you're a dog. On the Internet, it's also true, no one seems to know or care that you're just some obscure California filmmaker with terrible production values, a YouTube account, and a thing against Islam.