
Last week, the CEO of Twitter took to his own platform to make an unusual announcement. “We aren’t proud of how people have taken advantage of our service,” Jack Dorsey wrote. Twitter is well aware of the complaints users have about it—“troll armies” and “misinformation campaigns,” among other worries—and is going to attempt to reckon with the negative side effects of the platform it has created. Twitter’s moment of self-reflection follows recent acknowledgments by Facebook that its influence on democracy hasn’t always been positive.
Dorsey says Twitter is going to try to do better, in part by investigating potential metrics for the health of online conversations. But we wanted to explore other ways the big platforms need to adjust. In today’s issue, we’ll look at that question from a few different angles: a deceptively simple proposal from Alexis Madrigal, a conversation with a tech ethicist, and an assessment of The Atlantic’s track record on tech predictions.
Viral Content Is Not Necessarily the Best Content
Twitter may not know how to measure the health of its online conversations, but, Alexis Madrigal writes in a story out today, there’s something very simple the company can do to make its platform less toxic. Alexis conducted an experiment that removed one-click shares of others’ stories from the tweets he saw. “When they disappeared, my feed had less punch-the-button outrage.” His conclusion: Twitter should remove the “retweet” button.
Tech companies have designed their interfaces to maximize the spread of information, to amplify faster … They could peel away those layers—increase the friction of posting, make it harder to amplify information with a single click, redesign user interfaces to encourage thoughtfulness. These things wouldn’t make the neo-Nazis go back into hiding or end vicious political dog piles, but my modest experiment has convinced me that a better social-media atmosphere could emerge, one that centers less on stoking outrage and more on … everything else.
The Ethics Conversations Big Tech Needs to Have
“‘Ethics’ is not the first word that comes to mind when most people think of Silicon Valley or the tech industry,” wrote Irina Raicu last year. That’s something she wants to change. Raicu, the director of the Internet Ethics Program at Santa Clara University, has advocated for more robust ethics training in Silicon Valley. Abdallah Fayyad spoke with her about what’s missing in Silicon Valley’s ethical thinking. Here’s a condensed transcript of their conversation.
Abdallah Fayyad: Do big tech companies need to develop a code of ethics?
Irina Raicu: I actually think we need more than that. Codes of ethics are sort of broad statements of principle, but the bottom line is how you apply these codes in everyday decisions. It would involve people having to identify ethical dilemmas that require more ethical analyses and being able to reason through them. That’s why there are now calls for more trainings and classes in computer science ethics and data science ethics and so on.
Abdallah: What would happen in those ethical trainings?
Raicu: What I think works best is using case studies and looking at situations where dilemmas arose in the past. The most effective thing is to ask the technologists themselves to think about those things from an ethical perspective, rather than through legal compliance. What might this lead to? How does this impact people? Is it fair to everybody involved? Does it respect people’s rights? Does it impact the common good in some way? These are the kinds of questions posed by an ethical framework. A first step would be to have the top decision makers go through that kind of reasoning.
Abdallah: Jack Dorsey, the CEO of Twitter, recently announced that the company will be taking steps to figure out how they can hold themselves accountable for their role in our public discourse. What did you make of that?
Raicu: What seems to be new about that announcement is an admission of a problem that’s not immediately followed by the company saying that it knows the solution and will implement it. My colleague, Shannon Vallor, a philosophy professor here at Santa Clara, has written a book about the virtues that people need as we move into an increasingly technological society. She sees humility as a key virtue. You don’t usually hear that in the comments coming from the top, but you are hearing that more lately.
What struck me about [Dorsey’s announcement] is how quickly he jumped to metrics. Ethics training tells you that there are things that can’t be measured. If you look through a variety of ethical lenses and evaluate your decision in a qualitative way, you’re going to come up with a better decision than if you try to find a quantitative or technical solution to this.
Abdallah: If social media companies had better ethical training, could they have stopped Russia from using these platforms to interfere with the 2016 election?
Raicu: If they’d had some more thoughtful planning ahead of time, I think so. I’m not saying that ethical training means people don’t make bad decisions. Sometimes they’re trying to make an ethical decision and we may just not agree on what that decision is. I can make strong ethical arguments for freedom of speech, but I can also make strong ethical arguments for protecting the rights of people on the receiving end of hate speech.
When Facebook, for example, was accused of having a liberal bias, they decided to do even less curating of the news than they had done until that point, and they got rid of the human curators and turned completely to algorithms. That was a reactive decision and it was not done on the basis of thinking about what would be a more ethical outcome. I think that forethought and more planning could have avoided those things.
Did We Miss the Warning Signs on Big Tech?
Karen Yuan revisited The Atlantic’s early stories on tech developments that are roiling our present-day politics. She asked four writers how their stories have stood the test of time.
In 2007, it was clear Facebook was going to be a phenomenon, but what kind of phenomenon? Michael Hirschorn saw the growth potential of Mark Zuckerberg’s creation: “Facebook could become a transformational brand, altering the Webisphere around it rather than simply being a site du jour.” Facebook had the potential, he wrote, to become a space where “the majority of us” could project our identities. He glancingly noted that Facebook’s collection of user information could be dangerous—back then, Google was the behemoth that threatened privacy.
Hirschorn’s 2018 update: “I think I got Facebook basically right. We all needed an address in second life and Facebook provided a manicured space in the chaos of the early World Wide Web. What I didn't see, obviously, was that it would potentially crater western democracy … In 2007, the darker implications of social media were still being Vaselined over by a lot of Silicon Valley techno-utopianism.”
In 2008, even as Barack Obama's political operation conquered the internet, a rightward shift was building in online media. In October 2008, Reihan Salam wrote, “The inventive spirit behind a new spate of American innovations, from Google to YouTube to Facebook, is almost exclusively associated with the liberal left … Internet politics and liberal politics increasingly look like one and the same.” But, he added, the seeds were planted for an uprising from web-savvy conservative activists. As for what would kickstart that uprising—“to find out,” he wrote, “we may have to wait for an Obama administration.”
Salam’s 2018 update: “In the past, it was harder to know exactly what people you admire and identify with believed about a given controversy, and so adopting their views as your own wasn't quite as convenient. Today, in contrast, it's much easier to converge on the tastes and preferences of our idols, and to ostracize dissenters … I'm struck by the way social media can facilitate ideological coordination.”
In 2011, political bots were a theory waiting to be put into action. Andy Isaacson explained that bots could amplify the lobbying tactic of “astroturfing,” or the camouflaging of campaigns as grassroots efforts in order to manipulate people’s views about a cause. Bots could enable astroturfing on a massive scale, aided by “the details that people reveal about their lives, in freely searchable tweets and blogs, [that] offer bots a trove of personal information to work with.”
Isaacson’s 2018 update: “Bots have become a fact of social media life in recent years. We're seeing bots used as weapons in coordinated campaigns of misinformation and manipulation. In recent years, swarms of bots have been used to disrupt dissident activists in Mexico, Syria, Turkey, and other places, and dedicated Russian psy ops and cyberattacks, as we also know, played a role in the 2016 U.S. election. We could see hostile actors using bots to sow chaos—identifying people open to radicalization, for example.”
In 2012, Facebook’s threat to social relationships was coming into view. Stephen Marche wrote, “The real danger with Facebook is not that it allows us to isolate ourselves, but that, by mixing our appetite for isolation with our vanity, it threatens to alter the very nature of solitude.” The self-image that is the basis of a Facebook user's online personality isn't genuine, and neither are the connections users are making with each other's self-images.
Marche’s 2018 update: “What I didn't predict was the way social media would meld with the power of celebrity to become the dominant political and social force in the world. I don't think we have, collectively, a good grasp on that connection still. We just can't bring ourselves to take it seriously.”
Today’s Wrap-Up
- Question of the day: How would you fix the big tech companies? Write back and let us know.
- Your feedback: What did you think of this email? Tell us here.
- What’s coming: In our next email, we're writing about romance scams and the alarming number of victims who don't come forward after fraud.