Jay Stanley, a privacy expert at the American Civil Liberties Union, sees danger in steps toward censorship on social media. “We would ideally like to see companies that provide a forum in which people communicate with each other to be free-speech zones, especially companies that play important roles in our national discourse,” he said. “Once companies go down the path of engaging in censorship, line-drawing decisions are often impossible, inconsistent, capricious, or downright silly.”
But Andrew McLaughlin, the cofounder of Higher Ground Labs, a company that invests in technology to help progressive candidates, believes that platforms should suppress propaganda in ad space. “Despite their best intentions, tech companies have built systems that are so open to manipulation by bots and trolls and other techniques that they effectively reward propaganda,” he says. “Failing to tackle that problem means ceding the terrain to fraudsters, fake-news pushers, and other kinds of propagandists, who easily gain the upper hand.”
Susan Benesch, a faculty associate at Harvard’s Berkman Klein Center for Internet and Society and the founding director of the Dangerous Speech Project, likewise falls in this camp. “If you deceive people consistently and on a large scale, you are probably damaging their willingness to engage as citizens in our democracy,” she says. She believes the public should continue to pressure tech companies to create a mechanism for overseeing what content is taken offline.
Facebook’s ad policy already prohibits some forms of messaging, such as the use of politically or socially controversial material for commercial benefit. And on Wednesday, Facebook announced new guidelines for monetized content—including new steps to verify the authenticity of buyers, which could deter trolls and bots. One of its provisions is a warning against making money off some forms of deception: Users who “share clickbait or sensationalism, or post misinformation and false news, may be ineligible or may lose their eligibility to monetize.”
As for non-ad content, tech companies already censor certain disagreeable speech. Facebook, Twitter, and YouTube have removed ISIS-linked propaganda and accounts from their platforms. Following The Daily Stormer’s inflammatory coverage of Charlottesville, Google and the web-hosting company GoDaddy refused to provide service to the neo-Nazi website. Meanwhile, the internet company Cloudflare revoked the site’s DDoS-attack protections. And the chat app Discord banned other alt-right groups.
Eric Goldman, a codirector of Santa Clara University’s High-Tech Law Institute, sees these cases as “inextricably linked” to the recent controversy over Facebook advertisements. “On the one hand, I’m excited when I see social-media companies and other online services being thoughtful about what kind of content they want on their services,” he said. “On the other hand, whenever we see online services tinkering with political ads, we have the risk they might be adding their own biases into the mix.”