It just got harder for fake-news websites to make money from ads. Within hours of each other on Monday evening, Facebook and Google both announced that sites that intentionally deceive or mislead visitors won’t be allowed to use the internet giants’ advertising platforms.

Google’s AdSense and the Facebook Audience Network allow websites to easily place digital ads on their pages. Instead of going out and selling ads on their own, websites can lean on Google and Facebook to do that legwork for them. The networks act as middlemen, allowing online advertisers to “bid” for space on websites that have signed up. Google and Facebook share some of the revenue from the ads with the websites that run them.

Google said Monday that it will no longer allow websites access to its ad network if they “misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose.” Pornography or hate-speech websites are already banned from using the AdSense platform.

Facebook, too, tweaked its policies for its Facebook Audience Network, although the company insisted the change was just a clarification. The company’s policy already banned apps and sites with “illegal, misleading, or deceptive” content, a spokesperson said. On Monday, the company updated the document to “explicitly clarify that this applies to fake news.” The spokesperson would not say whether any publishers would be removed from the platform because of the change.

Both companies’ announcements were first reported by The Wall Street Journal.

The changes may put the squeeze on fake-news websites that depend on Google or Facebook to sell ads; together, the two companies control a significant chunk of the digital advertising market. But the changes don’t address the chief criticism of the companies’ role in circulating and legitimizing misinformation.

Since the U.S. election, Facebook has been on the defensive about the volume of fake news shared on its site. Its CEO, Mark Zuckerberg, said it’s “pretty crazy” to think that fake news on Facebook influenced the election “in any way,” and wrote that more than 99 percent of content on Facebook is “authentic.” Critics noted that there’s no way of verifying that statistic.

The site is already quietly trying to suppress misinformation using machine learning. Its algorithms flag a link as fishy if Facebook users question its veracity in the comments, such as by posting links to fact-checking websites that debunk the story. Once a link is flagged, any post that includes it becomes less likely to appear in people’s news feeds. (I suggested last week that Facebook consider employing human fact-checkers to vet the most questionable links shared on its site.)

Google, which has largely avoided being dragged into the debate over fake news—its robust algorithms favor credible sites with links to other trusted domains—suddenly found itself in the same spotlight on Monday, when the top hit in a search for “election results” falsely claimed that Donald Trump had beaten Hillary Clinton in the popular vote.

“The goal of Search is to provide the most relevant and useful results for our users,” a Google spokesperson said. “In this case we clearly didn’t get it right, but we are continually working to improve our algorithms.”

The change the company announced to AdSense on Monday doesn’t affect the algorithms it uses to rank search results.