Doc Searls, the founder of ProjectVRM at Harvard, which works on standards and protocols for technology, says that the model Facebook and some of its social-media brethren have built, one that mines users’ every interaction on a platform for data about who they are and what they are interested in, is increasingly appealing to advertisers but potentially problematic when it comes to protecting users’ rights.
The advertising these platforms offer is a significant departure from how marketing worked for a long time, Searls says. “An important thing about advertising of the traditional kind, the kind that Madison Avenue practiced for more than 100 years, is that it’s not personal. It’s aimed at large populations. If you want to reach black people, you go to Ebony back in the day. And if you wanted to reach camera people, you went to a camera magazine,” he told me. “The profiling was pretty minimal, and it was never personal.”
Prior to civil-rights laws, advertisers could be blatant about whom they were trying to attract or reject. They could, for instance, say that minorities weren’t allowed to move into a neighborhood, or that women weren’t invited to apply for jobs. That meant that minorities and women were left with less favorable options when it came to housing, loans, and jobs. The Fair Housing Act, enacted in 1968, and the Equal Credit Opportunity Act, enacted in 1974, made it illegal to withhold the promotion of housing or credit opportunities, or to differentiate offers, based on characteristics such as race, ethnicity, or sex.
These laws, along with the fact that many ads are never actually vetted by human eyes but instead run through an algorithm before posting, make the culpability of Facebook and other social-media platforms hard to determine in a legal sense. “The question of when, if ever, Facebook as the platform that carries those advertisements becomes legally complicit is complex,” says Rieke.
When it comes to assessing culpability in the realm of online discrimination, the Communications Decency Act (CDA) is often used to determine whether internet platforms are at fault for illegal content that appears on their sites. The law, passed in 1996, essentially says that platforms that host huge volumes of user-uploaded content, such as Facebook, YouTube, or Craigslist, generally can’t be held responsible when a user posts something discriminatory, according to Olivier Sylvain, a professor at Fordham Law School.
But posting paid advertising that violates anti-discrimination laws is different, Sylvain says: “They are on the hook when they contribute one way or another in their design and the way in which the information is elicited.” One example that helps to illustrate the limits of the CDA’s protections involved a website called Roommates.com. The platform, a forum to help individuals find roommates, was sued for violating the Fair Housing Act by allegedly enabling gender discrimination in housing. A court ruled that because the site’s design required users to fill in fields about gender in order to post, it couldn’t rely on the immunity offered by the CDA as a defense. Roommates.com ultimately won its lawsuit, but the platform now makes adding information about gender optional. (Roommates.com did not respond to a request for comment.)