Now forget Trump and consider the new media landscape. Surely YouTube, Facebook, and Twitter all illustrate the awful things that some individuals will do in civic discourse when no gatekeepers intervene?
There is some truth to that. Digital communications, however conceived, can no more free themselves of people behaving badly than an actual public square can.
Linker believes that excessive or even utopian reverence for a “marketplace of ideas” is “based on a false (or at least overly simplified) notion of human nature and society, presuming that people are roughly equal in their cognitive and emotional capacities, and that the surrounding culture will automatically inculcate norms and habits that facilitate the acquisition of knowledge and the capacity to judge wisely.”
That individuals differ in their cognitive and emotional capacities is beyond dispute, as is the need for acculturation to make civic life function smoothly. And perhaps a frictionless marketplace of ideas, if it ever came to pass, really would be much worse than its advocates imagine.
Still, I’d argue that the worst aspects of YouTube, Facebook, and Twitter owe more to choices made by their architects than to insufficient paternalism. The platforms give the illusion of content neutrality beyond some basic rules. Yet what they show us is not a simple stream of what is uploaded, in reverse chronological order, filtered as each user prefers. Were that so, it would be easy enough to block bad actors as individuals. Instead, these advertising-driven corporations are engaged in a constant, high-stakes competition for our attention—and so they have designed platforms that manipulate what users are shown to increase “engagement,” not enjoyment or edification or empathy or civic knowledge.
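To make that distinction concrete, consider a minimal sketch, in Python, of the two feeds: the reverse-chronological one the platforms appear to offer, and the engagement-ranked one they actually serve. Every field name and weight here is an illustrative assumption, not any platform’s real code; what matters is what the score rewards and what it omits.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    created_at: datetime
    # Hypothetical engagement signals; real platforms track many more.
    clicks: int = 0
    replies: int = 0
    angry_reactions: int = 0


def chronological_feed(posts, blocked):
    """The feed the platforms appear to offer: everything uploaded,
    newest first, filtered only by each user's own block list."""
    return sorted(
        (p for p in posts if p.author not in blocked),
        key=lambda p: p.created_at,
        reverse=True,
    )


def engagement_feed(posts, blocked):
    """The feed they actually serve: ranked by a predicted-engagement
    score. Note what the score omits: there is no term for enjoyment,
    edification, empathy, or civic knowledge."""

    def score(p):
        # Illustrative weights: signals that provoke fast, impulsive
        # responses count for more than anything slow or considered.
        return 1.0 * p.clicks + 2.0 * p.replies + 4.0 * p.angry_reactions

    return sorted(
        (p for p in posts if p.author not in blocked),
        key=score,
        reverse=True,
    )
```

A block list works identically in both versions. What changes is who decides what rises to the top: the user, or a score that never asks whether attention was given gladly.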
On YouTube, those tweaks lead some users to ever more extreme content. On Facebook and Twitter, the results are complicated and usefully explored in a recent exchange between Ezra Klein and Jaron Lanier. They were discussing the latter’s book, Ten Arguments for Deleting Your Social Media Accounts Right Now. Klein resisted his interlocutor’s thesis, arguing that social media has allowed traditionally marginalized groups “to be heard” and “to influence conversations in ways that were very difficult before,” benefiting movements like Black Lives Matter by helping them to take off.
Lanier replied:
I worked very hard to try to make the internet possible on a technical level in the ’90s. I am still a believer that bringing people together is valuable and can create wonders. If I lost that faith, I don’t know what I’d do.
So I agree with that positivity. And my sense of what’s going on now is, that positive layer does exist—but it’s joined by an unnecessary, deeply unfortunate, even unsurvivable other thing: a machinery in the background that takes advantage of it and ruins it. So in the example of Black Lives Matter, I thought the Black Lives Matter movement was brilliant. I thought the framing of it was great, I thought it was remarkably generous and open considering the intensity. It was inviting, it was positive. I thought it was an amazing thing. And I think the general initial reaction to it was positive for most people, actually. And it’s absolutely true that it was accelerated by social media.
However, there’s this other behind-the-scenes machine that is working, and what that’s doing is taking all the posts and all the activities from people who like Black Lives Matter, and just as a matter of course, algorithmically testing it to see who else it might engage.
Of course, the way engagement is measured is with very rapid feedback, so the people who have the more impulsive rather than the more considered reactions tend to read more clearly to the feedback algorithms. And as it happens, the people who were irritated by it or who disagreed with it were the most engaged. And that is what always happens.
So these people that hated Black Lives Matter were not only identified by the algorithm but introduced to each other. And their annoyance was reinforced, and reinforced, and reinforced, not out of any ideological bent on the part of a company like Facebook, but rather just through the algorithmic seeking of engagement. Then they became like a red carpet rolled out for bad actors, which in this case were Putin's psychological warfare units, who suddenly had this population to target very clearly, as well as the original Black Lives Matter people.
So it’s this behind-the-scenes behavior modification and manipulation scheme that's been glommed onto the good stuff that ruins it.
What you see as a repeated pattern is people doing things that I find to be very positive and attractive and worthwhile—but then their energies get inverted by this machine in the background into something that’s the opposite, something horrible and destructive of society.
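Lanier’s “behind-the-scenes machine” is abstract in prose but simple as code. The toy simulation below is my gloss, not anything Lanier or any company has published: the population size, probabilities, and field names are all invented. What it preserves is the structure of the loop he describes, in which a post is tested on fresh audiences, only rapid reactions register as engagement, and whoever reacts fastest, for whatever reason, is folded into the audience for the next round.

```python
import random


def make_user(uid, rng):
    # A toy population. Each attribute is an assumption for
    # illustration: whether a user would react to the post at all,
    # whether they react quickly (impulsively), and whether their
    # reaction is one of irritation.
    return {
        "id": uid,
        "reacts": rng.random() < 0.3,
        "impulsive": rng.random() < 0.5,
        "annoyed": rng.random() < 0.5,
    }


def engagement_loop(users, rounds=5, sample_size=100, rng=None):
    """The loop Lanier describes, simplified: test the post on fresh
    audiences and keep whoever reacts within the fast-feedback
    window. The optimizer never asks why anyone engaged."""
    rng = rng or random.Random(0)
    untested = list(users)
    rng.shuffle(untested)
    audience = []
    for _ in range(rounds):
        sample, untested = untested[:sample_size], untested[sample_size:]
        # Only rapid reactions register, so impulsive users dominate
        # the signal; considered reactions arrive too late to count.
        audience += [u for u in sample if u["reacts"] and u["impulsive"]]
    # The annoyed users are not merely identified; by being grouped
    # into one audience, they are in effect introduced to each other.
    annoyed = [u for u in audience if u["annoyed"]]
    return audience, annoyed


rng = random.Random(1)
users = [make_user(i, rng) for i in range(10_000)]
audience, annoyed = engagement_loop(users, rng=random.Random(2))
print(len(audience), "engaged;", len(annoyed), "of them engaged out of irritation")
```

Note that nothing in the loop is ideological. The clustering of the irritated falls out of optimizing for speed of reaction alone, which is exactly Lanier’s point.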
In that telling, elites at Facebook and Twitter may well improve on an unconstrained marketplace of ideas when they ban someone from posting death threats or revenge porn. But they are also using their power as gatekeepers in ways that distort and degrade that marketplace.