There Is No Such Thing as Private Data

If you need credit or a place to live, companies may try to persuade you to give up even the most intimate information in your social media accounts.

Rick Wilking / Reuters

Just a few years after social media began to permeate students’ high-school experiences, the terrifying rumors began to spread: a liberal-arts college had rescinded an admissions offer after coming across a misguided Facebook post; competitive applicants were sending “anonymous tips” to admissions committees with links to photos of a rival’s weekend shenanigans.

Students began diving into their Facebook privacy settings to scrub their walls, timelines, and photo albums of any remotely incriminating public content. Rather than sanitizing their profiles, some chose to change their names on Facebook to avoid scrutiny, or even to delete their accounts altogether, at least until after they got their last acceptance letter.

But admissions committees quickly got wise to the tricks. In an effort to take an unvarnished look at applicants’ private lives, admissions officers, college-sports coaches, and hiring managers started to demand to be let into candidates’ social media accounts.

Schools and employers used several techniques to gain access to applicants’ accounts, according to a 2012 report from MSNBC’s Bob Sullivan. The Maryland Department of Corrections required candidates to log into their Facebook accounts and click through their profiles as an interviewer watched over their shoulder. Some colleges required that applicants add recruiters as Facebook friends so that the recruiters could access protected content, and a police department in North Carolina included a space on a paper job application for the candidate’s Facebook username and password.

In the intervening years, groups like the ACLU pushed back on the practice of demanding passwords, and some states passed legislation banning it. Today, you probably won’t be asked to hand over your passwords when applying for college, a job, a loan, or a lease—but that doesn’t mean your private social data isn’t still fair game.

A wide range of companies have tried to come up with ways to assess a person’s reliability and creditworthiness using nontraditional data. For some, the goal is to extend the benefits of credit to those who don’t generally have access to it—perhaps because they don’t have an official credit score at all. Others aim to provide lenders, service providers, and hiring managers with a more complete picture of a person’s background before deciding whether to trust him or her.

Both are worthy goals, but the ways companies chase them have raised eyebrows.

Last year, Facebook was awarded a patent for a system that would scan the credit scores of a user’s friends in order to help a bank decide whether or not to award a loan to that person. The product never got off the ground, but it raised questions about trusting an algorithm that sniffs its way through a person’s social network to come up with a risk rating. People are likely to be friends with people from similar socioeconomic and ethnic backgrounds, so a network analysis that developed a credit score based on one’s peers could help reinforce class distinctions.

If Facebook had gone through with its plans, it could have landed in a potentially dangerous legal gray area. Consumer-reporting agencies like credit bureaus are subject to regulation under a set of laws that the Federal Trade Commission enforces. In 2012, a data broker called Spokeo paid an $800,000 fine and entered into a consent decree with the FTC in order to settle charges that it violated consumer-protection laws. The company assembled detailed personal information about individuals from credit agencies, social networks, and other sources, and sold the data to human-resources departments looking for a leg up in hiring.

Because of stringent credit-reporting laws in the U.S., alternative methods of developing credit have generally gotten more traction overseas. In parts of Africa and Latin America, companies are monitoring cell-phone use, social media, and even typing patterns to try to assess a person’s creditworthiness.

Most recently, a British startup has stirred the pot with a product called Tenant Assured. The service is aimed at landlords who want to check up on potential tenants before leasing out an apartment, and it works by connecting to applicants’ social networks: Facebook, Twitter, Instagram, and LinkedIn.

Once a candidate grants the app permission to access his or her accounts, an automated spider crawls through “millions of data points, both public and private,” according to the company. It can read both public posts and private messages, and uses a variety of “cutting-edge techniques” to deliver inside information about the individual’s personality traits, hobbies, and the risks he or she might pose to the property (like smoking or owning pets).

“If you’re living a normal life, then, frankly, you have nothing to worry about,” the site’s co-founder, Steve Thornhill, told The Washington Post’s Caitlin Dewey last week.

(Thornhill told The Verge that Tenant Assured has no plans to expand to the U.S.)

Tenant Assured’s website emphasizes the fact that its information-gathering is voluntary and requires a participant’s permission. “Your applicant is invited to connect from one to four of his/her social media accounts through our secure portal,” the website’s FAQ page reads.

But, as with nearly every approach to credit reporting, the choice is largely out of an individual’s hands. If a prospective tenant opts out of Tenant Assured, for example, he or she will likely not be considered alongside applicants who choose to participate. The same goes for any ostensibly voluntary system that uses surveillance to establish credit: Anyone who wants (or needs) access to a service gated by a credit-reporting program has little choice but to submit his or her private information for perusal.

Low-income people who don’t have a credit score are most likely to be cornered into participating in a system like this one—but they’re also the group that’s most likely to be harmed by big-data analysis that often gets crucial details wrong. (Tenant Assured, for example, reads every post and private message it’s allowed access to, and flags every instance of “risky” words like “loan” or “pregnant” that it finds, whether or not a user was talking about his or her own experience.) Still, the loss of control over private data—though unpalatable—may be a price worth paying for people who have few other gateways to essential goods and services.

And if alternative credit reporting takes off, social-media users of all stripes would be wise to remember that a post that’s private today may one day become fodder for a computer algorithm that decides whether or not they deserve a loan, a house, or a job.