Did you hear the thing about the Florida woman who implanted a third breast in order to be "unattractive to men"? The one who is filming "her daily life in Tampa to show the struggles she faces because of her surgery"?

She didn't, and she is not. The whole thing—for better or for worse—was a hoax.

The Internet moves quickly. Rumors emerge, intentionally and not; they spread, intentionally and not. There's a reason, of course, that "wildfire" is such a common metaphor when it comes to describing this stuff: Rumors, once sparked, don't just spread extremely quickly; they are also extremely difficult to contain. And on top of everything else, it is extremely hard to predict which direction they'll take as they spread.

* * *

Enter Emergent.info. The site, launched today after two months of testing and data-gathering, is hoping to change that by tracking rumors as they arise, in (pretty much) real time. As Craig Silverman, the rumor researcher who created the site, told me: "It's aiming to be a real-time monitoring of claims that are emerging in the press."

Emergent.info works through a combination of human and algorithmic processing. Silverman and a research assistant find rumors that are being reported in the mainstream media—most often, stories that bubble up through social media, get spotted by one outlet … and then, from there, get picked up by others. Then they search Google News, which aggregates various outlets' takes on the story. They gather those stories and enter them into their database, classifying each according to the outlet and to how that outlet is reporting it. Some outlets will report rumors not as rumors, but as simply true or simply false; the majority, however, merely report the fact that they have heard them. Often they'll hedge that repetition with caveats like "Sources:" or "Rumor:" or "Unconfirmed:" in headlines or text. Just as often, however, they'll be more subtle in their warnings. In the case of the Tri-Breasted Lady, many places simply repeated the rumor as fact, their main additional caveat being a well-placed "WTF."

Then their algorithm takes over. Emergent.info is essentially a web app, built on an API over its back-end database. (You can think of it as something of a data-driven version of Snopes—with a more expansive premise. "Snopes, they're amazing," Silverman says, "but they only do Snopes work; they don't aggregate what other people are doing. This site is able to identify claims and then actually see, okay, who's got the best information about it?") Every hour, Emergent.info's script crawls the tracked stories to see whether their text has changed. The system also checks share counts to monitor how stories are moving through the social media ecosystem.
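Emergent.info's actual code isn't public in this account, but the hourly change-detection step Silverman describes—re-fetch each tracked story, see whether its text has changed—can be sketched roughly like this. Everything here (function names, the URL, the stub pages) is hypothetical, for illustration only:

```python
import hashlib

def text_fingerprint(article_text):
    """Normalize whitespace and hash an article's body so edits can be detected."""
    normalized = " ".join(article_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def hourly_pass(stories, fetch_text):
    """One crawl pass: re-fetch each tracked story and flag changed text.

    `stories` maps URL -> last-seen fingerprint (None if never fetched);
    `fetch_text` is injected so the logic can run without the network.
    Returns the URLs whose article text changed since the previous pass.
    """
    changed = []
    for url, old_fp in stories.items():
        new_fp = text_fingerprint(fetch_text(url))
        if old_fp is not None and new_fp != old_fp:
            changed.append(url)
        stories[url] = new_fp  # remember for the next hourly pass
    return changed

# Demo with stub pages standing in for real fetches.
pages = {"example.com/rumor": "Sources: a Tampa woman implanted a third breast."}
stories = {"example.com/rumor": None}
print(hourly_pass(stories, pages.get))   # first pass just records fingerprints: []
pages["example.com/rumor"] = "UPDATE: the whole thing was a hoax."
print(hourly_pass(stories, pages.get))   # edit detected: ['example.com/rumor']
```

A real crawler would fetch over HTTP on a timer and also poll the social platforms' share-count endpoints; hashing normalized text is just one simple way to notice that an outlet has quietly updated a story.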

* * *

All of which, Silverman says, helps to answer questions that haven't been so systematically analyzed before. Among them: "What's the life cycle of a rumor in the press now? And how are news organizations dealing with things that are unconfirmed? And are they updating the stories, and are they sticking with it over time?"

Take the reports—false reports, it turns out—that Durex is making a pumpkin-spice condom: Through Emergent.info, you can see who repeated the rumor, who checked it, and who debunked it. Or take, more seriously, the stories that emerged last week claiming that a meteorite had landed in Nicaragua. (That rumor is still listed as unverified on Emergent.info, because no one has been able to prove that such a meteorite actually fell.) Or take, even more seriously, the claim made earlier this month that the ISIS leader Abu Bakr al-Baghdadi had been killed in a U.S. airstrike. He had not; Emergent.info lists the rumor as "confirmed false."

A challenge news organizations face when it comes to rumor-reporting in particular is the fact that rumors tend to be much more shareable—and much more clickable—than corrections. Take your friend and mine, Ms. Tampa Triple-Breast (self-given pseudonym: Jasmine Tridevil). One of the early stories about her, in the New York Post, got 40,000 shares. The Snopes article (mostly) debunking the initial story had 12,500 shares—a decent amount for a story that is, technically, a non-story.

For the most part, though, the articles debunking a rumor get very little attention. (For a more crystalline example of all that, you can look to BuzzFeed's coverage of the story. Its initial story got more than 30,000 shares; its debunking of that story got just over 1,000.) Which means that news organizations often have very little incentive—direct, commercial incentive, at least—to put their time and energy into debunkings. As a result, as Silverman puts it: The Total Recall rumor is "a story that, I would argue, the average person probably doesn't know is not true."

The larger problem with all that is that rumors, once they're put out there into the maw of the media, are notoriously hard to correct. There's the fact that "sorry, just kidding about that three-boobed lady thing" is nowhere near as shareable as the "whoa, three-boobed lady!" story was in the first place. But there's also the fact that there is very little uniformity among media outlets about how updates, corrections, retractions, and the like should be presented to readers. Most outlets will simply update a story that contains a debunked claim; a few will write new stories altogether, linking to the previous one in the process. That can leave readers, however, in a kind of epistemological limbo: You're never quite sure what has been verified and what has not. Trust is a precious resource in journalism; many outlets haven't fully figured out how to preserve it.

"So much of this stuff is public before news organizations get to it," Silverman points out. "So that's a very different dynamic from what used to happen. So if something is by default public, how do you decide when you're going to point at it in a way that's responsible? And then how do you deal with it as it sort of takes its life path to being true or false?" Bringing some data to bear on those questions, he's hoping, will help news outlets start to answer them.