This past winter was an exceptionally strange one across North America. Rain deluged California, while unseasonable warmth fanned across the Midwest and the Eastern Seaboard. In New York, sales of salt and snow shovels plunged; in Washington, some of the famous cherry trees bloomed too early and died.

When the weather gets weird, many people now think of climate change. And nearly as many people know that it’s hopeless to try to figure out whether it’s to blame. For decades, journalists have repeated that weather and climate are different—weather deals in specifics and climate in probabilities—and so no individual weather event can be attributed to climate change.

They may have to stop repeating that line soon. Less than a week after the warm spell ended, the non-profit Climate Central announced in the pages of The New York Times that anthropogenic global warming was likely to blame for the early warmth. A February warm patch is three times as likely in today’s climate as it was in the climate of 1900, the organization’s World Weather Attribution team found.

The technique behind this finding is called rapid attribution, and it works by comparing observed meteorological data against data from climate models. A study published this month in the Proceedings of the National Academy of Sciences attempts to offer a new standardized and conservative approach for tying individual weather events to human-caused global warming. In doing so, it also comes to some significant findings of its own.

Chief among these: Across 97 percent of the observed area, the hottest day of the summer and the hottest month of the summer are becoming hotter due to human-caused climate change. In the tropics, global warming has made a new hottest-month temperature record four times more likely to happen than it would otherwise. It has also made the tropics twice as likely to experience the driest year on local record.  
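Statements like “four times more likely” are typically expressed as a risk ratio: the probability of the event in today’s climate divided by its probability in a counterfactual climate without human influence. A minimal sketch of that arithmetic (the probabilities below are illustrative placeholders, not values from the study):

```python
def risk_ratio(p_current: float, p_counterfactual: float) -> float:
    """How much likelier an event is in the current climate than in a
    counterfactual climate without human-caused warming."""
    return p_current / p_counterfactual

# Illustrative only: a record-hot month that occurs with probability
# 0.04 per year in a preindustrial climate but 0.16 per year today
# is four times more likely in the changed climate.
print(risk_ratio(0.16, 0.04))
```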

“In most places, when you get a heat wave now, there is probably a human fingerprint,” says Daniel Swain, an author of the paper and a climate scientist at the University of California, Los Angeles.

“It’s the first time that this has been done at this large scale,” says Heidi Cullen, the chief scientist at Climate Central and a leader on the World Weather Attribution project. She was not involved in the study. “It helps us understand at this 50,000-foot level the way that climate change is already having an impact on temperature and precipitation.”

“The public discourse didn’t recognize that the scientific community has come up with these tools for individual events,” she says. “Our goal is to provide a first, best answer to the questions that the public is asking after an extreme event.”

Rapid attribution isn’t yet close to perfect, and it remains harder to attribute an extreme event to climate change when the event is more complex, Swain told me. The recent California drought is a good example. (Swain writes the California Weather Blog in addition to his published research.)

“The answer you get definitely depends on the question you ask,” he told me. For example: If researchers compare the drought to long-term trends in precipitation, then the drought doesn’t seem tied to human-caused climate change. But if they check the drought against precipitation trends and temperature extremes, then it suddenly seems so anomalous as to be warming-triggered. And if they also look at the high-pressure patterns that may have caused the drought in the first place, they get a third, more ambiguous answer about its relation to climate change.

“But the absence of evidence isn’t evidence of absence,” Swain says. “What we can now say for heat extremes is that that argument—that there’s an absence of evidence—goes out the window. For other sorts of events, in other regions, there is still an absence of evidence.”

Beyond the actual attributional work that it does, the paper’s more lasting contribution may be methodological: It proposes a multi-step process for attributing weather to climate change. First, scientists should identify whether there is an underlying trend in the historical weather data for a certain place. Then, they should determine how much that trend contributed to the “extremeness” of any one weather episode. Finally, they should compare that magnitude with two climate-model experiments: whether a changed climate, rich like ours in atmospheric carbon dioxide, would produce a similarly extreme trend; and whether a historically “normal” climate, similar to one from 1850 or 1900, would.
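The steps above can be sketched schematically. The code below uses synthetic data and illustrative function names—it is not the study’s actual code—but it follows the same logic: fit a trend to the observed record, measure its contribution to one event, then ask how often trends that steep appear in a “forced” model ensemble versus a counterfactual one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: is there a trend in the observed record?
# Synthetic stand-in for a station's summer-temperature anomalies (deg C).
years = np.arange(1950, 2017)
observed = 0.02 * (years - 1950) + rng.normal(0, 0.5, years.size)
slope, intercept = np.polyfit(years, observed, 1)  # deg C per year

# Step 2: how much did that trend contribute to one event's "extremeness"?
event_year = years[np.argmax(observed)]
event_temp = observed.max()
trend_component = slope * (event_year - years[0])
residual_extremeness = event_temp - trend_component

# Step 3: compare the observed trend against two model experiments:
# a "forced" ensemble (CO2-rich, like today's climate) and a
# counterfactual ensemble (preindustrial forcing). Both are stand-ins
# here for real climate-model output.
def ensemble_trends(forced: bool, n_members: int = 40) -> np.ndarray:
    warming_rate = 0.02 if forced else 0.0
    trends = []
    for _ in range(n_members):
        series = warming_rate * (years - 1950) + rng.normal(0, 0.5, years.size)
        trends.append(np.polyfit(years, series, 1)[0])
    return np.array(trends)

forced = ensemble_trends(forced=True)
counterfactual = ensemble_trends(forced=False)

# If the observed trend is common in forced runs but rare in
# counterfactual runs, it points to a human influence.
p_forced = np.mean(forced >= slope)
p_counter = np.mean(counterfactual >= slope)
print(f"observed trend: {slope:.3f} C/yr")
print(f"fraction of forced runs at least this steep: {p_forced:.2f}")
print(f"fraction of counterfactual runs at least this steep: {p_counter:.2f}")
```

The design mirrors the paper’s logic rather than its implementation: the counterfactual ensemble plays the role of the 1850–1900 “normal” climate, and the comparison of the two fractions is what licenses (or blocks) an attribution claim.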

Some limitations apply to the new paper’s findings but not to its methodology. The study used only one model—the Community Earth System Model from the National Center for Atmospheric Research—to test whether certain trends were climate-driven. Swain says that the authors hope to bring more models into the process soon; Cullen says that her team uses an ensemble of models when attributing an event.

Because the technique requires a wealth of data, it only works in places where there is a decades-long temperature record. It therefore focuses on well-populated and historically developed places—North America, Europe, Asia, and the South American coasts—and does not look at recent temperature trends over much of the oceans.

For the most populated parts of the globe (and certainly for the continental United States), it’s possible to imagine a world where estimates of climate-changedness become a part of the meteorological landscape. (Especially if machine-learning algorithms continue to improve at estimating the output of a climate model without requiring it to be fully run again.)

On some warm February day of the future, maybe your phone’s weather app will tell you not only how hot it will be—but how much likelier such a record-breaking day is in a climate-changed world.