This week, more than a dozen high-profile social scientists and legal scholars charged their profession to help fix democracy by studying the crisis of fake news.

Their call to action, published in Science, was notable for listing all that researchers still do not know about the phenomenon. How common is fake news, how does it work, and what can online platforms do to defang it? “There are surprisingly few scientific answers to these basic questions,” the authors write.

But just as notable as their admission was the language used to make it. I was surprised to find this group of scholars using the term fake news at all—even though they were calling for research into fake news.

That may sound odd. How can you study something and not call it by its name? Yet over the past year, academics and tech companies have increasingly shied away from the phrase. Facebook has pushed an alternative term, false news. And some scholars have worried that by using the term, they amplify President Trump’s penchant for calling all negative media coverage of himself “fake.”

The authors of the Science essay—who include Cass Sunstein, a Harvard Law School professor and former Obama administration official, and Duncan Watts, a social scientist at Microsoft Research—argue that avoiding the term distorts the issue. Fake news refers to a distinct phenomenon with a specific name, they say, and we should just use that name (fake news) to talk about that problem (fake news).

“We can’t shy away from phrases because they’ve been somehow weaponized. We have to stick to our guns and say there is a real phenomenon here,” said David Lazer, one of the authors of the essay and a professor of political science and computer science at Northeastern University.

“We think it’s a phrase that should sometimes be used,” he told me. “We define it in a very particular way. It’s content that is being put out there that has all the dressings of something that looks legitimate. It’s not just something that is false—it’s something that is manufactured to hide the fact that it is false.”

For instance, the infamous hoax report that Pope Francis had endorsed Donald Trump’s presidential candidacy was hosted on a website that had the appearance of being a local TV station, “WTOE 5 News.” There is no station called WTOE 5 in the United States, but the plausibility of the name allowed the falsehood to spread. (That one fake story had roughly three times more Facebook engagement—that is, likes, shares, and comments—than any New York Times story published in 2016.)

Facebook now almost exclusively uses the term false news to talk about fake news. First Draft, a nonprofit research group within Harvard University, also prefers false news, arguing that fake news fails to capture the scope of the misinformation problem online. (Claire Wardle, First Draft’s director of research, goes so far as to call it “f-asterisk-asterisk-asterisk news.”)

But Lazer rejected this phrase as imprecise. Not all false news, he said, is fake.

“I’m sure The Atlantic has sometimes gotten things wrong and published incorrect reporting,” he told me. “Those reports may be false, but I wouldn’t call them fake. For fake news, the incorrect nature of it is a feature, not a bug. Whereas when The Atlantic publishes something that’s incorrect, it’s a bug.”

“The term fake news, describing this problem, has been around for a long time,” he added. “There’s a wonderful Harper’s article about the role of fake news and how information technology is rapidly spreading fake news around the world. It used that term, and it was published in 1925.”

None of the political scientists endorsed President Trump’s tack of calling almost any news coverage he dislikes fake news. “We see that usage getting picked up by authoritarian types around the world,” Lazer said. But he does hope that by using the eye-grabbing term, scholars can reinforce the idea that there is something wrong with the information ecosystem, even though “it may not be the pathology that Donald Trump wants you to believe in.”

Just saying fake news won’t make the pathology go away, though. Nor is fake news the internet’s only truth affliction.

“I think there’s a whole menagerie of animals in the false-information zoo,” Lazer told me. They include rumors, hoaxes, outright lies, and disinformation from foreign governments or hostile entities. “It’s clearly the case that there was a coordinated Russian campaign around disinformation, but that’s another animal in the zoo,” he said.

What do researchers know about the whole kingdom? Some of it is startling: One in four Americans visited a fake-news website between October 7 and November 14, 2016, according to a recent study from researchers at Princeton University, Dartmouth College, and the University of Exeter. And a massive study released this week from scientists at MIT found that falsehoods travel faster, farther, and deeper than accurate information on Twitter.

Yet no research has pointed to effective ways of reducing the spread of falsehoods online. Some still-unpublished studies have suggested that labeling fake news as such on Facebook could cause more people to share it. The same goes for relying on fact-checking sites like Snopes and PolitiFact. “Despite the apparent elegance of fact checking, the science supporting its efficacy is, at best, mixed,” the authors write.

At times, seeing a fact-checked rumor may cause people to remember the rumor itself as true. “People tend to remember information, or how they feel about it, while forgetting the context within which they encountered it,” they write. “There is thus a risk that repeating false information, even in a fact-checking context, may increase an individual's likelihood of accepting it as true.”

“People are not going to fact-check every sort of information they come across online,” said Brendan Nyhan, a professor of government at Dartmouth College and one of the authors of the recent Science essay. “So we have to help them make better decisions and more accurately evaluate the information they encounter.”

The fight against misinformation is twofold, he told me. First, powerful individuals and popular Twitter users have to lead the fight against fake news and bad information.

“Research has found that people who are important nodes in the network play an important role in dissemination,” especially on Twitter, Nyhan told me. “Stories are being refracted through these big hubs. And I’m not a big hub, but I think it’s important to practice what I preach.”

Nyhan, who has about 65,000 Twitter followers, tries to correct incorrect information that he’s tweeted as quickly as possible, and he also tries to courteously notify other users when they’ve been tricked by unreliable information.

“We will all inadvertently share false or misleading information—that’s part of being online in 2018,” said Nyhan. “But I think we’ve seen people in public life be wildly irresponsible.” Users who repeatedly share bad information or fake news should suffer “reputational consequences,” he said.

He specifically criticized Laurence Tribe, a widely respected Harvard Law professor who has argued dozens of cases before the Supreme Court. Tribe also has more than 300,000 Twitter followers. “He’s one of the most important constitutional-law scholars in the country, but he has repeatedly retweeted the most dubious anti-Trump information,” said Nyhan. “He’s gotten better, but I think what he did was irresponsible.”

(In an email, Tribe responded: “I do my best to avoid retweeting or relying in any way on dubiously sourced material and assume that, with experience, I’m coming closer to my own ideal. But no source is infallible, and anyone who pretends to reach that goal is guilty of self-deception or worse.”)

But individuals can never fight fake news or bad information by themselves, Nyhan said. Which led him to his second point: that online platforms like Facebook, Google, and YouTube have to work with researchers and civil-society organizations to learn how to combat the spread of falsehood.

“There are lots of people in these companies trying to do their best, but they can’t solve the problem of our public debate for us, and we shouldn’t expect them to,” he told me.

“We need more research about what works and what doesn’t on the platforms so we can be sure they are intervening in an effective way—but also so we can make sure they’re not intervening in a destructive manner,” he said. “I don’t think people take seriously enough the risks of major public intervention by the platforms. I don’t think we want Twitter, Facebook, and Google deciding what kinds of news and information are shown to people.”

“This,” he said—meaning fake news, falsehood, and the entire debacle of unreliable information online—“is not strictly the fault of the platforms. Part of what it’s revealing are the limitations of human psychology. But human psychology is not going to change.”

So the institutions that buttress that psychology—the journalists and editors, the politicians and judges, the readers and consumers of news, and the programmers and executives who design the platforms themselves—must change to accommodate it. Abraham Lincoln once said that one of the great tasks of the United States was “to show to the world that freemen could be prosperous.” Now, Americans and people all over the world must show that they can use every technological blessing of that prosperity—and remain well informed, enlightened, and liberated from falsehood themselves.