A couple of months ago, I made a small tweak to my Twitter account that has changed my experience of the platform. It’s calmer. It’s slower. It’s less repetitive, and a little less filled with outrage. All of these improvements came about because I no longer see retweets.

When I joined Twitter, in late 2007, it was still a new medium—and a fun one. I felt as though we early users were discovering its potential, and creating its shared language. At its best, Twitter could feel like your “dream community,” as the writer Kathryn Schulz put it, filled with interesting people who were interested in the same things you were.

The retweet began as a user convention. People would write “Retweet” (or “RT”) and paste in another person’s post. This was cumbersome, but it also meant those words would go out next to your name and photograph. People were selective about what they chose to retweet. When Twitter introduced a retweet button, in 2009, suddenly one click could send a post careening through the network. The automatic retweet took Twitter’s natural tendency for amplification and cranked it up.

Somewhere along the line, the whole system started to go haywire. Twitter began to feel frenetic, unhinged, and—all too often—angry. Some people quit. Others, like Schulz, cut way back. I felt the same urge, but I wanted to do something less extreme, something that would allow me to keep the baby, even as I drained the bathwater. So I began to take note each time I experienced a little hit of outrage or condescension or envy during a Twitter session. What I found was that nearly every time I felt one of these negative emotions, it was triggered by a retweet.

Twitter has a tool that lets you turn off retweets from one person at a time. But I follow thousands of people, so my office mate, who happens to be a skilled programmer, wrote a script for me that turned off retweets from everybody. Retweets make up more than a quarter of all tweets. When they disappeared, my feed had less punch-the-button outrage. Fewer mean screenshots of somebody saying precisely the wrong thing. Less repetition of big, big news. Fewer memes I’d already seen a hundred times. Less breathlessness. And more of what the people I follow were actually thinking about, reading, and doing. It’s still not perfect, but it’s much better.
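My office mate’s script isn’t reproduced here, but the heart of any such script is a loop over everyone you follow, flipping the per-friend retweet setting that the one-at-a-time tool toggles. A minimal sketch, assuming Twitter’s v1.1 REST API (the `friendships/update` endpoint, which accepts a `retweets=false` parameter); to stay self-contained, this version only builds the requests rather than sending them through an authenticated session:

```python
def build_retweet_off_requests(screen_names):
    """For each account followed, build the parameters for a Twitter
    API v1.1 `POST friendships/update` call that turns that account's
    retweets off (retweets=false) while leaving the follow intact."""
    endpoint = "https://api.twitter.com/1.1/friendships/update.json"
    return [
        {"url": endpoint,
         "params": {"screen_name": name, "retweets": "false"}}
        for name in screen_names
    ]
```

A working script would fetch the list of followed accounts, then send each of these requests with OAuth credentials, pausing between calls to respect the API’s rate limits.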

This experiment got me thinking about how social platforms work. Both Twitter and Facebook rely on an ethos of sharing; both use complex algorithms to organize and rank the content you see, basing the rankings in part on how many people (in your social circles or who are similar to you) have “liked,” shared, or otherwise interacted with a post. The more people “like” something, the farther it travels.
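To make that ranking logic concrete, here is a toy scorer of my own devising, not either platform’s actual algorithm: each post is scored by interactions from your network, with shares weighted more heavily than likes because each share also re-broadcasts the post.

```python
def rank_feed(posts):
    """Order posts by a simple engagement score: likes plus shares
    from your network, with shares weighted 3x because each one also
    re-broadcasts the post. A toy illustration only."""
    def score(post):
        return post["likes"] + 3 * post["shares"]
    return sorted(posts, key=score, reverse=True)
```

Even this crude version shows the dynamic: a post with a handful of shares outranks one with many more likes, so the feed tilts toward whatever spreads.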

BuzzFeed’s Jonah Peretti famously used an equation from epidemiology to illustrate the “reproduction rate” of a post: R = βz, where z is how many people initially see the post, and β is how likely those people are to share it—how “viral” the post is. For a story to be seen by many people, it needs a decent-size z—the seed audience—but β is where most social-media companies have concentrated. As they’ve created algorithms to show people what they’re most likely to engage with, Twitter and Facebook have ended up promoting lots of high-β posts.

Every layer of the digital-media economy has been reconfigured to produce shareable stuff. Writers and video makers know they need β. Just about every high-profile, born-in-the-21st-century media company is optimized to generate viral content. Popular social-media users gain followers because they’re sensitive to the mysterious element of β.

Over time, this emphasis on shareability has created an enormous change in what all of us, not just social-media users, see on our screens. TV-news producers look to create viral segments. Benchwarmers on NBA teams plot shareable celebrations. Companies spend millions of dollars on advertisements they hope are weird enough to go viral. To take one telling example: The senior vice president of communications at Arby’s personally brought a bag of sandwiches (and a borrowed puppy) to the creator of the spoof Twitter account @nihilist_arbys. Why? The account is high-β. The story of the meeting was then written up by Business Insider—and that was highly shareable, too, garnering half a million page views.

But what if viral content isn’t the best content? Two Wharton professors have found that anger tops the list of shareable emotions in the social-media world, and a study of the Chinese internet service Weibo found that rage spreads faster than joy, sadness, and disgust. In general, emotional appeals work well, as everyone in media has come to discover. Fundamentally small stories that have no lasting import can dominate Twitter for days: a doctor being dragged off an airplane, the killing of Harambe the gorilla, something Lena Dunham said.

Twitter can destroy your perspective. “Every outrage was becoming the exact same size,” Mike Monteiro, a prominent web designer, wrote in a Medium post about quitting Twitter. “Whether it was a US president declaring war on a foreign nation, or an actor not wearing the proper shade of a designated color to an awards ceremony. On Twitter those problems become exactly the same size.”

This is disorienting, and the strangeness of the discourse can be exploited—as it has been by Russian agents and white supremacists. In that sense, the 2016 U.S. presidential election was merely a symptom of the deeper problems with social media. The neo-Nazis, too, are less aberrations than opportunists. (Race-baiting is viral gold!) Their provocations will be retweeted. The takedowns of their provocations will be retweeted. Everyone wins, if winning is being angry.

The social-media giants know the systems they created aren’t working very well, at least if the aim is social good. Facebook has acknowledged that its product can be bad for the well-being of people who use it to passively consume information, and a former Facebook executive recently told an audience at Stanford that he felt guilty for helping to create tools that are “ripping apart the social fabric.” Twitter executives have surely noticed how many people revel in self-loathing over their use of the service. Many Silicon Valley executives shield their own children from their creations.

But social-media platforms don’t have to be organized around shareability. Instagram, for instance, doesn’t allow links, except a single one in each user’s profile. This dampens self-promotion and slows down the spread of information from the rest of the internet on the platform. It doesn’t have native reposting tools, either. And it is, by pretty much all accounts, a nicer place to spend time online.

Or take Snapchat, the other social network that has fared well in recent years. The dominant modes of interaction on the service have little to do with virality. People use it to send messages or create posts (“stories”) that disappear. The private or semiprivate and ephemeral nature of most Snaps reduces their shareability. Yet the youngest social-media users have flocked to Snapchat and Instagram. Perhaps this is the way of the future.

In recent discussions of its News Feed software, Facebook announced that it would show you much more of what your friends and family share, rather than merely the most-viral posts. This is a huge change and the most significant sign yet that Silicon Valley has realized that high-β content has begun to corrode the pipes through which it runs.

Tech companies have designed their interfaces to maximize the spread of information, to amplify faster, to increase the β in the network. They could peel away those layers—increase the friction of posting, make it harder to amplify information with a single click, redesign user interfaces to encourage thoughtfulness. These things wouldn’t make the neo-Nazis go back into hiding or end vicious political dog piles, but my modest experiment has convinced me that a better social-media atmosphere could emerge, one that centers less on stoking outrage and more on … everything else.

This article appears in the April 2018 print edition with the headline “The Case Against Retweets.”