In India and Russia, FICO—the company behind the popular FICO credit scores—is partnering with startups like Lenddo to capture information about users from their cellphones. Lenddo uses locations reported by an applicant's phone to verify whether they really live and work where they say they do, then analyzes the applicant's network to determine "if they are in touch with other good borrowers—or with people with long histories of fooling lenders," Bloomberg reports.
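Neither FICO nor Lenddo publishes its models, but a toy sketch can show the shape of the location check Bloomberg describes. Everything in the snippet below is invented for illustration: the function names, the two-kilometer radius, and the sample coordinates are assumptions, not details of any real system.

```python
# Toy illustration only; not Lenddo's or FICO's actual method.
# Hypothetical check: do a phone's reported locations cluster near
# the home address the applicant claims?
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def location_consistency(pings, claimed, radius_km=2.0):
    """Fraction of phone pings that fall within radius_km of a claimed address."""
    near = sum(1 for lat, lon in pings
               if haversine_km(lat, lon, *claimed) <= radius_km)
    return near / len(pings)

# Hypothetical applicant: a stated home address in New Delhi,
# checked against a few days of phone pings (the last one is in Mumbai).
claimed_home = (28.6139, 77.2090)
pings = [(28.61, 77.21), (28.62, 77.20), (19.08, 72.88)]
print(location_consistency(pings, claimed_home))  # prints ~0.67: two of three pings match
```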
The push to consider more types of information can help people who lack credit scores, or who might not have the usual indicators of creditworthiness, access loans and bank accounts that might otherwise be closed off to them.
But the more complex and opaque these powerful algorithms get, the more ways there are for people to be disqualified from job searches and loan applications—and the harder it is to know why. What’s more, systems that take into account the actions of people’s family and friends risk assigning guilt by association, denying opportunities to someone because of who they’re connected to. They can decrease a person’s chance for upward mobility, based solely on the social group they find themselves in.
Someone living in a low-income community, for example, is likely to have friends and family with similar income levels. Someone in that person's extended network is more likely to have a poor repayment history than someone in the network of an upper-middle-class white-collar worker. If a scoring algorithm took that fact into account, it might lock out the low-income person based on nothing more than his or her social environment.
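To see how guilt by association can creep into a score, consider a deliberately simple hypothetical model, one no lender has confirmed using, in which an applicant's own repayment record is averaged with the records of their contacts. The 50/50 weighting and all the numbers are invented; the point is that two applicants with identical personal histories end up with different scores purely because of who they know.

```python
# Hypothetical network-based score: an illustration of the risk,
# not any real lender's model.

def network_score(own_rate, contact_rates, weight=0.5):
    """Blend an applicant's own repayment rate with their contacts' average.

    weight: how much the contacts' behavior counts (an invented 50/50 split).
    """
    network_avg = sum(contact_rates) / len(contact_rates)
    return (1 - weight) * own_rate + weight * network_avg

# Two applicants with identical personal repayment histories (95 percent)...
low_income_network   = [0.60, 0.70, 0.65, 0.80]  # contacts with shakier records
white_collar_network = [0.95, 0.90, 1.00, 0.92]  # contacts with strong records

print(network_score(0.95, low_income_network))    # ~0.82, which may fall below a cutoff
print(network_score(0.95, white_collar_network))  # ~0.95, which clears the same cutoff
```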
The Equal Credit Opportunity Act bars creditors in the United States from discriminating based on race, color, religion, national origin, sex, marital status, or age—but taking a person's network into account could let creditors make an end run around those restrictions.
A 2007 report from the Federal Reserve found that blacks and Hispanics had lower credit scores, on average, than whites and Asians, and that "residing in low-income or predominantly minority census tracts" is a predictor of low credit scores. Since people are likely to have friends and family who live nearby and are of the same race, using social networks to rate their creditworthiness could reintroduce factors that creditors aren't allowed to consider.
In an essay published in 2014 by New America’s Open Technology Institute, three privacy researchers—danah boyd, Karen Levy, and Alice Marwick—wrote about the potential for discrimination when algorithms examine people’s social connections:
The notion of a protected class remains a fundamental legal concept, but as individuals increasingly face technologically mediated discrimination based on their positions within networks, it may be incomplete. In the most visible examples of networked discrimination, it is easy to see inequities along the lines of race and class because these are often proxies for networked position. As a result, we see outcomes that disproportionately affect already marginalized people.
Preventing algorithmic discrimination is a challenge. It’s not easy to hold companies to the laws that would protect consumers from unfair credit practices, says Danielle Citron, a law professor at the University of Maryland. “We don’t have hard and fast rules. It’s the Wild West in some respects.”