How much of a responsibility does the Internet’s gatekeeper have to protect the people who pass through its doors?
That’s the question that faces Google and other search engines, as data brokers increasingly generate detailed pictures of people based on their online activity—and predatory companies, in turn, use that information to target vulnerable consumers.
After I wrote about this issue yesterday, and specifically focused on lead generators and predatory lenders, Google offered to comment on how it is thinking about its role.
The problem is this: Lead generators sell consumer data, often to companies that want to issue short-term loans at high interest rates to people who google terms like “need money fast.” Google says it doesn’t give data about search queries directly to advertisers, but googling something like “can’t pay rent” might still trigger an ad that directs web users to a lead generator’s website, which asks visitors to share sensitive information in exchange for help.
Once a person voluntarily enters information this way, Google can’t control how data brokers use it. But the company is trying to do what it can, a spokesperson told me.
“In 2012 we instituted new policies on short-term loans and we work hard to remove ads or advertisers that violate these policies,” said Crystal Dahlen, a spokesperson for Google. “If we become aware of any ads that violate our policies we immediately take action.”
This sounds good in theory, but it's exceedingly difficult in practice. Google uses a robust mix of human and machine analysis to trawl the web for ads and websites that violate its policies, but those who are interested in breaking the rules are constantly adapting to evade detection. The even bigger challenge, and one that underscores the need for involvement from lawmakers and regulators, is that Google’s policies are based on state laws, which means Google responds to the same advertisements differently in various areas of the country.
It also means Google relies heavily on leadership from officials at the local level, for better and for worse.
In Vermont, for example, there are strict rules about lending, including an outright ban on payday loans. The attorney general there has made it a priority to crack down on illegal lenders. State officials have also identified lenders and payment processors who violated the law, and shared that information with search engines so they could strip illegal advertisements from the web.
But because laws differ state by state, Google might disallow advertising from those businesses only in the state of Vermont, while the same ads could still run elsewhere, as long as they complied with Google’s policies. Which means the ads delivered to a web user in Vermont would be different from what a person searching the same terms in another state might see. (This is on top of the fact that Google’s algorithm already serves up distinct results to individuals, even people in the same room googling the same thing, based on past browsing history, location, and other data.) There’s evidence, too, that this layer of protection doesn’t even work in the states with the toughest consumer-protection laws.
Consider this finding, from an October report by the tech-policy consulting firm Upturn:
To test how payday lead generators were using major ad platforms to advertise, we ran a series of search queries on Google and Bing (including, for example, “payday loan,” “need a loan fast,” and “need money to pay rent”) from internet protocol (IP) addresses originating in states with strong payday lending laws (including Pennsylvania, New York, and Vermont). In each jurisdiction, we saw many payday loan ads commissioned by lead generators.
And so, an already complicated problem becomes even more convoluted. That hasn’t stopped consumer advocates from seeking solutions. Last week, the Federal Trade Commission held a workshop about issues related to online lead generation. Many of those involved agree that regulators, lawmakers, and those who are buying and selling data must lead the way in protecting consumers.
In other words, it isn’t only up to Google. That’s in part because search engines can establish robust policies against bad advertising without having, and arguably without being reasonably expected to have, airtight enforcement mechanisms. This doesn’t mean Google isn’t responsible for the predatory ads it hosts. It is. But also: Google can’t erase them from the web by itself.
“We continue to be vigilant in our efforts to protect users against bad advertising practices,” said Dahlen, the Google spokesperson. “In 2014 we disabled more than 524 million bad ads and banned more than 214,000 advertisers.”
But are those efforts enough? Many consumer advocates say no. Having good intentions, and even taking action against hundreds of thousands of advertisers, isn't enough to protect people from widespread data abuse.
“It’s not a secret network of evil-doers,” said Michael Waller, an attorney in the enforcement division of the FTC’s Bureau of Consumer Protection, at the agency’s workshop last week. “The information is coming from all kinds of different sources. And so responsible players are likely in the chain.”
Waller and others say lawmakers and regulators should rethink their role in solving the problem, and consider a variety of bold options, including but not limited to extending the Fair Credit Reporting Act to encompass use of consumer data, and establishing a data tax for companies that buy and sell consumers’ information.
“The trouble here is the data is so potentially toxic and dangerous, so much can be done with it,” Waller said. “Once you've got the data, you're going to want to do something with it ... There is this pressure to monetize that data. And it’s a big problem.”