Technology companies have long had a simple answer to anyone who did not like what was happening on, in, or through them: Services like Facebook, YouTube, and Twitter were platforms, which merely provided the tools for free expression, and not publishers or broadcasters responsible for the content they distributed. It was in that spirit that the head of policy at Facebook, Monika Bickert, defended leaving up a misleadingly altered video of House Speaker Nancy Pelosi. “We don’t have a policy that stipulates that the information you post on Facebook must be true,” Bickert said.
In the same vein, YouTube initially defended the YouTuber Steven Crowder’s ability to post videos taunting Carlos Maza, a Vox video producer who is gay, with homophobic slurs. “As an open platform, it’s crucial for us to allow everyone—from creators to journalists to late-night TV hosts—to express their opinions w/in the scope of our policies. Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site,” YouTube’s official account tweeted. “Even if a video remains on our site, it doesn’t mean we endorse/support that viewpoint.”
Facebook stuck to its guns, while YouTube eventually “demonetized” Crowder’s account. Both decisions were mocked and defended, crystallizing just how disputed the terrain of content moderation on platforms has become. The idea of “a platform” doesn’t make sense anymore, and it’s being challenged from every direction.
Beyond the Maza-Crowder dispute, just in the past week, researchers demonstrated that YouTube’s algorithm seemed to lead users to sexualized videos of children, and The New York Times ran a front-page story about how a young man was radicalized by right-wing videos on the site.
To defend themselves, the so-called platforms have developed byzantine sets of rules. If they follow the guidelines they make up, they say, they are fulfilling their obligations to their various kinds of users. This week, YouTube’s CEO, Susan Wojcicki, tried to explain her company’s actions at the Code Conference. She mentioned the word policies 14 times. “We need to have consistent policies,” she said. “They need to be enforced in a consistent way. We have thousands of reviewers across the globe. We need to make sure that we’re providing consistency.” Of course, the policies are always changing and can be revisited at any time, and yet these ever-shifting rules will somehow be enforced consistently. It’s a mess.
Even simple brand-promotion moments have become complicated. YouTube’s annual “Rewind” video, which the brand uses as a showcase for its stars, has become a battleground about what YouTube is. When last year’s version left out old-school YouTubers such as the controversial PewDiePie, it became the most disliked video in the site’s history. Fans who saw themselves as part of the “real” YouTube community panned YouTube for catering to more advertising-friendly, professional creators. “The community, which was once celebrated by YouTube, no longer feels included in the culture YouTube wants to promote,” The Verge summarized.
Many disputes—about “community,” white-supremacist content, online harassment, or the supposed liberal bias of these services—devolve into the carefully massaged language of some policy team. Perhaps the labor model of content moderation is questioned, or the accuracy or ethics of particular algorithmic decisions.
Don’t let the minutiae distract from what’s really happening: An era-defining way of thinking about the internet—“the platform”—has become unstable.
There was a time when there were no “platforms” as we now know them. That time was, oh, about 2007. For decades, computing (video games included) had had this term “platform.” As the 2000s began, Tim O’Reilly and John Battelle proposed “the web as a platform,” primarily focusing on the ability of different services to connect to one another.
The venture capitalist Marc Andreessen, then the CEO of the also-ran social network Ning, blasted anyone who wanted to extend the definition. “A ‘platform’ is a system that can be programmed and therefore customized by outside developers,” he wrote. “The key term in the definition of platform is ‘programmed.’ If you can program it, then it’s a platform. If you can’t, then it’s not.” My colleague Ian Bogost, who co-created an MIT book series called Platform Studies, agreed, as did most people in the technical community. Platforms were about being able to run code in someone else’s system.
This was Facebook’s original definition of its product, Facebook Platform, which allowed outside developers to build widgets and games, and extend the core service. In the years before 2016, nearly all of Mark Zuckerberg’s public references to Facebook as a platform were technical, about connecting with developers. But every once in a while, he slipped in a more colloquial usage of the term. As far back as a 2008 interview with Sarah Lacy at SXSW, Zuckerberg said, “We think that we might have a chance here to build a platform that fundamentally changes the way that people can connect and communicate.” Later, after the company went public, Facebook executives primarily referred to their advertising platform. “We’re building,” Facebook’s COO, Sheryl Sandberg, said, “the world’s first ad platform that delivers personalized marketing at scale.”
If the concept of a platform sounds confused, that’s actually the power of the metaphor. In a brilliant, prescient 2010 paper, a Cornell University and Microsoft communications researcher, Tarleton Gillespie, tore open the emerging rhetoric of the platform, showing how useful and slippery this new invention could be. Platform could mean one thing to advertisers, another to professional content creators, and yet another to everyday users.
This evolution of the word platform drew both on the technical origin of the phrase, Gillespie argued, and on its deeper meanings, as seen in “political platform” or the architectural idea of a literal platform to stand on. “‘Platforms’ are ‘platforms’ not necessarily because they allow code to be written or run,” he wrote, “but because they afford an opportunity to communicate, interact or sell.”
Despite what more technical types thought, this is the definition that came to dominate in subsequent years. A platform was where you could be heard and get yours.
And there was something new to these all-encompassing internet companies. They had unprecedented scale and had grown like no other business in the world. This is just part of how they had to work, Nick Srnicek argues in his book Platform Capitalism. Platforms had a tendency to monopolize certain activities, benefiting from the network effects generated by large user bases. These users then generated data that could be used to make money and continue scaling the service.
This new rhetorical device wasn’t just for press releases, but also for ginning up business and creating a legal architecture. Advertisers were used to buying slots in tightly controlled video content—sitcom TV, morning shows—while YouTube was offering something much more motley with few of the formal or informal restrictions that broadcasters face. Facebook had to get advertisers ready for the idea that their ads would run next to Confederate-flag memes and FarmVille posts and divorce announcements. This was not an easy task, but these companies turned the risky nature of openness into a strength. It was downright virtuous to support this wild world of content. “Unlike Hollywood and the television networks, who could be painted as the big bad industries,” Gillespie noted, “online content seems an open world, where anyone can post, anything can be said.”
This was not true, of course. Some things could not be said. Some types of content were favored by advertisers and companies. The algorithms they used to sort and promote content had biases. But the platform claims looked reasonable if you squinted, especially since this was all new and people hadn’t yet figured out how to think about these massively successful enterprises that consumers seemed to like using.
Platforms might have been something new, but they sure did a lot of things that previous information intermediaries had. “Their choices about what can appear, how it is organized, how it is monetized, what can be removed and why, and what the technical architecture allows and prohibits, are all real and substantive interventions into the contours of public discourse,” Gillespie wrote.
Yet for years the internet platforms mostly denied that they were much of an intervention at all. When Senator Joe Lieberman tried to get YouTube to take down what he characterized as Islamist training videos in 2008, the YouTube team responded with free-speech bromides. “YouTube encourages free speech and defends everyone’s right to express unpopular points of view,” they wrote. “We believe that YouTube is a richer and more relevant platform for users precisely because it hosts a diverse range of views, and rather than stifle debate we allow our users to view all acceptable content and make up their own minds.”
Facebook drew on that sense of being “just a platform” after conservatives challenged what they saw as the company’s liberal bias in mid-2016. Zuckerberg began to use—at least in public—the line that Facebook was “a platform for all ideas.”
But that prompted many people to ask: What about awful, hateful ideas? Why, exactly, should Facebook host them, algorithmically serve them up, or lead users to groups filled with them?
These companies are continuing to make their platform arguments, but every day brings more conflicts that they seem unprepared to resolve. The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer, We’re platforms! But activists and legislators are now saying, So what? “I think they have proven—by not taking down something they know is false—that they were willing enablers of the Russian interference in our election,” Nancy Pelosi said after the altered-video fracas.
Given how powerful and flexible the rhetoric has been, the idea of the platform will not simply exit stage right. “The platform” once perfumed the naive, meretricious, or odious actions that allowed these companies to expand. But as the term rots, it has begun to stink, and anybody who catches a whiff of it might notice what had been masked. These companies are out to grow their businesses, and every other thing is a means to that end.
This article is part of “The Speech Wars,” a project supported by the Charles Koch Foundation, the Reporters Committee for the Freedom of the Press, and the Fetzer Institute.