In a city or town, a quick look around will tell you the racial makeup of the community you’re in. But on a webpage, there’s no easy way of telling who else is visiting. Some sites make it clear that they’re geared toward members of a certain race: The Root, for example, describes itself as a destination for “black news, opinions, politics, and culture.” Elsewhere, visitors have to guess a site’s target audience based on its content—or they may conclude that race doesn’t matter on most of the internet.
But that latter idea is one that a group of academic researchers who study race and the internet have been pushing back against for decades. Trained in different fields—sociology, media studies, internet culture—they contend that the internet is far from raceless; in fact, they say, most of the internet is targeted at one demographic in particular.
Because of its history as a product of technology companies that are staffed overwhelmingly by white employees, the internet is largely made by, and for, white people, the researchers argue. “Those with the most access and capital are more likely to control the culture of the internet and reproduce it in their interests,” said Safiya Noble, a professor of information studies at UCLA who has published research examining the role of race in social media and search engines. “The web is a white space and its sensibility otherizes non-whites.”
Internet scholars have been kicking around this idea since the early days of the World Wide Web, but it’s a particularly difficult one to test experimentally. Unlike studies that catalog how discrimination leads to generations of segregation in physical spaces—redlining in major American cities, for example—it’s not as easy to detect similar patterns on the web.
This year, Charlton McIlwain, a professor of media, culture, and communication at New York University, gave it a try. Applying theories that are usually used to study geographical segregation, McIlwain examined how people navigated through the internet to try to understand whether web traffic is segregated.
He approached the experiment with two questions. First, he wondered whether explicitly racial sites—sites that describe themselves in racial terms—link mostly to other racial sites, and whether non-racial sites link mostly to other non-racial sites. Second, he wanted to know how people moved between those sites, and whether they regularly hopped between the two categories.
He began with the 56 “Top Black Sites” as chosen by Alexa, a web analytics company that ranks pages based on traffic and popularity. He used a software program that examined the network of links emanating from those sites, and ended up with more than 3,000 pages with nearly 16,000 links between them. (Those connections can take the form of hyperlinks in a news story, for example, or on a personal blog post.) Then, he used another analytics service to categorize those sites as “racial” or “non-racial” based on the way they described themselves for search engines.
When McIlwain looked at how the more than 3,000 sites in his sample were connected to one another, he found that non-racial and racial sites linked to each other in relatively equal measure: Neither type of site had a significant bias toward similar sites.
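The kind of link-level measurement described above can be illustrated with a small sketch. This is not McIlwain’s actual code or data—the site names, labels, and links below are hypothetical—but it shows one standard way to quantify whether hyperlinks stay within a category: Newman’s attribute assortativity coefficient, which is +1 when links never leave their own category, 0 when links cross categories as often as chance would predict, and −1 when links always cross.

```python
# Illustrative sketch (not the study's pipeline): measure whether
# hyperlinks between "racial" and "non-racial" sites show a bias
# toward same-category links. All names and labels are hypothetical.

# Hypothetical category labels for a handful of sites.
category = {
    "siteA": "racial", "siteB": "racial",
    "siteC": "non-racial", "siteD": "non-racial",
}

# Hypothetical hyperlinks (source site, target site).
links = [
    ("siteA", "siteB"),  # racial -> racial
    ("siteC", "siteD"),  # non-racial -> non-racial
    ("siteA", "siteC"),  # racial -> non-racial
    ("siteD", "siteB"),  # non-racial -> racial
]

def attribute_assortativity(links, category):
    """Newman's attribute assortativity coefficient:
    +1 = links stay entirely within a category,
     0 = no category preference,
    -1 = links always cross categories."""
    cats = sorted(set(category.values()))
    idx = {c: i for i, c in enumerate(cats)}
    n = len(cats)
    # Normalized mixing matrix e[i][j]: fraction of all links that
    # go from category i to category j.
    e = [[0.0] * n for _ in range(n)]
    for src, dst in links:
        e[idx[category[src]]][idx[category[dst]]] += 1 / len(links)
    trace = sum(e[i][i] for i in range(n))
    # Sum of the entries of the matrix product e @ e.
    sq = sum(e[i][k] * e[k][j]
             for i in range(n) for j in range(n) for k in range(n))
    return (trace - sq) / (1 - sq)

# Half the links stay within category, half cross: no bias.
print(attribute_assortativity(links, category))  # -> 0.0
```

A coefficient near zero, as in this toy example, corresponds to the pattern McIlwain reported at the link level: neither type of site showed a significant preference for linking to its own kind.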
But divisions began to crop up when it came to how visitors actually navigated between the sites. McIlwain found that people who usually go to non-racial sites tend to visit other non-racial sites; similarly, visitors to racial sites preferred to click on other racial sites.
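The navigation finding is a claim about traffic rather than links, and it can be sketched the same way. Again, this is not the study’s code—the clickstream and labels are invented for illustration—but it shows how one might estimate, from a browsing log, the probability that a visitor on a site of one category moves next to a site of the same category.

```python
# Illustrative sketch (not from the study): estimate how often visitors
# stay within a site category, from a hypothetical browsing log.
from collections import Counter

# Hypothetical category labels.
category = {
    "siteA": "racial", "siteB": "racial",
    "siteC": "non-racial", "siteD": "non-racial",
}

# A hypothetical clickstream: pages one visitor viewed, in order.
clickstream = ["siteC", "siteD", "siteC", "siteA", "siteB", "siteA"]

def transition_probs(clickstream, category):
    """P(next category | current category), estimated from
    consecutive page views in the clickstream."""
    counts = Counter()   # (current category, next category) pairs
    totals = Counter()   # views per current category
    for cur, nxt in zip(clickstream, clickstream[1:]):
        counts[(category[cur], category[nxt])] += 1
        totals[category[cur]] += 1
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

probs = transition_probs(clickstream, category)
# In this toy log, visitors on non-racial pages usually stayed on
# non-racial pages:
print(round(probs[("non-racial", "non-racial")], 2))  # -> 0.67
```

In McIlwain’s data, the analogous same-category probabilities were high for both groups of sites: even though the links crossed categories freely, the traffic largely did not.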
I asked McIlwain what it means that internet users self-segregate as they browse the web. He rephrased my question in terms of geography: “Why, when there’s a pathway to a different neighborhood, don’t I go there?” The answer, he thinks, has to do with the quiet ways that any space, virtual or physical, signals to visitors about itself.
“One has to look for the subtle, perhaps unintentional ways that sites are projecting a message,” he said. “‘This is an exclusionary place; this is a place that is not really meant for you. Yes, you have access—there’s a highway to get here—but we really don’t want you here, and there’s nothing for you here, anyway.’”
“I, as a person of color, may say, ‘Look, I know what is “for me,” and those are a limited number of sites,’” McIlwain continued. “And that’s where I draw my boundaries.”
Search rankings also play an important role in segregating web traffic, McIlwain says. When I searched “news” on Google, I clicked through the first ten pages of results without seeing a single news site focused on race. Generally, search algorithms appear to favor non-racial sites, which researchers theorize are heavily skewed toward a white perspective.
Noble’s previous research has focused on the effects of search-engine algorithms, which are among the most influential factors in how a person experiences the internet. “People query the web in the context of their racial identities,” Noble told me. “But they are served websites that are often racially and gender-biased against women and people of color.”
Google serves up millions of search results every minute, and the finely tuned algorithms it uses to do so are the secret to its success. But every algorithm was, at some point, tuned by human hands. Just as big-data hiring and policing can be affected by hidden prejudices unknowingly baked into systems by human engineers, search systems reflect the values of the people who constructed them. Silicon Valley often takes a colorblind, technology-first view of the world—an outlook that, in attempting to erase the harms of discrimination, can perpetuate them—and those principles are apparent in its companies’ products.
“I think Dr. McIlwain’s research is critically important because it provides more evidence to dispel the notion that the internet is a democratic space,” Noble said. She, McIlwain, and their academic peers make the case that an open and equal internet should include different shades, voices, and backgrounds, rather than trying to wash out color altogether.