It seemed straight out of the evil-tech-company playbook.
In August, Facebook secured an otherwise innocuous U.S. patent about how to analyze a user’s friend network to authorize what that user can do. Most of the patent discusses the fairly mundane technicalities of running a social network—until, last in a list of examples, there appeared the following paragraph:
When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual […]. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.
In other words: The patent would let a bank analyze your Facebook friends when you applied for a loan. If too many of your friends have poor credit histories, the bank could reject your loan application—even if your own credit was fine.
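The screening logic the patent describes is simple enough to sketch. All names and the cutoff below are hypothetical illustrations, not details from the patent or from any real lender:

```python
# A minimal sketch of the screening step the patent describes.
# The function name and the threshold are made up for illustration.

MIN_AVERAGE_SCORE = 620  # hypothetical cutoff, not a real lender's number

def continue_loan_application(friend_credit_scores: list[int]) -> bool:
    """Return True if the application proceeds, False if it is rejected.

    Per the patent's example, the lender averages the credit ratings of
    the applicant's connected friends and compares that average to a
    minimum credit score. Note the applicant's own score never appears.
    """
    if not friend_credit_scores:
        # The patent doesn't address an empty network; assume rejection.
        return False
    average = sum(friend_credit_scores) / len(friend_credit_scores)
    return average >= MIN_AVERAGE_SCORE

# The applicant's own credit can be fine and the loan still be rejected:
print(continue_loan_application([700, 580, 540]))  # average ~606.7 → False
print(continue_loan_application([710, 690, 650]))  # average ~683.3 → True
```

What makes the scheme contentious is visible right in the signature: the applicant’s own history is not an input at all.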
Some critics argue that this patent could resurrect historic discriminatory loan practices: “Facebook Wants to Redline Your Friends List,” said one headline. And it makes a certain sense. In 20th-century redlining, banks would deny mortgages to people because they lived in neighborhoods that were too black. But in redlining’s spiffy new 21st-century form, they argue, banks can do one better: They can deny loans to people not because of where they live, but because of whom they fraternize with. Since one’s friends so closely mirror one’s race and class—according to one study, nine out of 10 of the average white American’s friends are also white—the practice would effectively restore loan discrimination. (It’s worth adding that traditional redlining was encouraged, even dictated, by the U.S. government.)
Will our friends soon be shaping our credit profiles—and would such a scheme even be legal? After talking to Aaron Rieke, the director of tech policy at Upturn, I’m not so sure. Upturn consults civil-rights and social-justice groups on how to understand and use technology. Last year, Rieke wrote a report on new forms of financial scoring, including social-network-based ones like what Facebook is proposing.
In three words, Rieke’s message is: Don’t worry yet. Rieke thinks it’s very unlikely that Facebook would implement the ideas in the patent.
There are a few reasons for that. The first is practical: Right now, Facebook relies on users to voluntarily give it their data.
“Facebook makes its money by encouraging people to have large friend networks and create lots of content for it to show ads against,” he told me. “It would really surprise me if they decided to get into the credit-scoring business, just because I think that’s going to make people feel panicked and uncomfortable.”
But Facebook would also face significant legal obstacles. By selling information to lenders, Facebook could fall under the purview of the Fair Credit Reporting Act. Passed in 1970 to regulate a relatively new class of companies that consolidated consumer information—we know these companies now as the credit bureaus—the Fair Credit Reporting Act would substantially expand Facebook’s regulatory responsibilities. For example, Facebook might need to tell users much more than it does now about how it views them and their demographic profile.
Facebook, however, would not be on the hook—at least to consumers—if its credit-score modeling turned out to be discriminatory. If a bank used a Facebook-derived score to approve or reject loans, the bank would have to make sure those loans complied with the 1974 Equal Credit Opportunity Act. The ECOA prohibits discrimination based on a familiar collection of factors—race, gender, religion—and, judged on those factors alone, Facebook’s credit scoring looks straightforwardly fine. But the regulators who enforce ECOA believe that its prohibition extends to disparate-impact situations.
“A neutral policy or practice that disproportionately burdens a group of people on a prohibited basis is still illegal,” says Rieke. So if a bank tried to use Facebook’s friend-network-based credit scoring, he believes that bank would quickly face a disparate-impact suit.
“Someone would file a disparate-impact lawsuit against a creditor who used it, saying: I’m poor, my friends are poor; or, my friends are poor and therefore you denied me credit; or my friends are of a particular ethnic status and therefore I was declined; and there would be a disparate-impact lawsuit,” he said.
Then, “the creditor would have to show that this friend score actually predicted creditworthiness, and that there was no better way for them to score that person,” he said.
This would be a tough case for the bank, especially since many credit agencies already say there is no better predictor of someone’s ability to pay back a loan than their own credit history. And the mere prospect of a lawsuit would make implementing the patent difficult for Facebook, as any bank that used its new friend-scoring method would shoulder a lot of liability. More traditional credit bureaus already certify to lenders that their scores will not result in disparate-impact suits.
Which isn’t to say that social-network-based credit is an irreparably bad idea. In countries that do not have America’s financial system, friend scores can help extend credit to those who need it. In Mexico, Colombia, and the Philippines, a company called Lenddo already analyzes someone’s Facebook, LinkedIn, and Twitter accounts to gauge their creditworthiness.
And in the United States, where 20 percent of the population cannot access credit, Rieke believes we should not reject new approaches because they’re new.
“I don’t think that anyone who is interested in financial justice should just immediately, out-of-hand dismiss something that sounds new or different,” he told me. “However, it seems pretty unlikely to me that a scoring system rooted in the community you’re a member of is going to be helpful.”
This gets at just some of what I talked about with Rieke, who also delved into more of the regulatory and moral dimensions of new scoring schemes. I’ve included a transcript of our conversation below, edited and condensed for the sake of clarity.
I also reached out to Facebook about the patent but haven’t heard back yet.
Aaron Rieke: Let me just say that I think there’s three approaches to talking about this patent. There’s a legal answer, there’s a practical answer, and there’s a moral answer. And let me start with the practical answer, because I think that’s the shortest and the easiest. Which is, you know, Facebook makes its money by encouraging people to have large friend networks and create lots of content for it to show ads against. And given that that’s the primary profit driver for Facebook, as a practical matter, it would really surprise me if they decided to get into the credit-scoring business, just because I think that’s going to make people feel panicked and uncomfortable. If I were them, I would not be in a giant rush to do that.
So just from a purely practical standpoint of, ‘how does this company make its money and what are its interests?’, I think it’s relatively unlikely that we’re about to see a real-world public implementation of the idea. So take that for what it’s worth. A lot of the stories I’ve read have said, ‘now we finally see Facebook’s big scheme, and this is it,’ and—like I said—due to the legal ambiguities that I’ll go into in a moment, and the fact that Facebook depends heavily on the comfort and trust of its users to make money, I don’t see them wanting to get into the credit-scoring business, especially in a way like this.
Robinson Meyer: Would they have to—I mean, they would have to do some pretty considerable notifying of users, too, before they actually rolled something like this out into the wild, right?
Rieke: Well, so, the idea in the patent is: When you, Rob, go apply for a loan, a lender could pull in an average—a credit score that’s the average credit score of your friend network, in some shape or form. And that analysis of your associations would be an indicator of whether they should extend you credit. As for the legal question—there are two prongs to the legal question.
The first is, could a creditor legally use that kind of hypothetical friend credit score? Let’s set Facebook aside for just a moment. I’m a lender. There’s a law in the United States that makes it unlawful for a creditor to discriminate against an applicant on the basis of race, religion, national origin, sex, etc. Now at first blush, an average of my friends’ credit scores is not any of those prohibited factors, right? At first blush, a creditor would say, well, a friend credit score, that’s not race, that’s not sex, that’s not marital status, it’s none of those things. But that’s not the end of the story, because the regulators that enforce the Equal Credit Opportunity Act claim that the disparate-impact doctrine works for credit. And disparate impact says, hey, a seemingly neutral policy or practice that disproportionately burdens a group of people on a prohibited basis is still illegal.
So what I envision would happen if such a credit score existed is that, very quickly, someone would file a disparate impact lawsuit against a creditor who used it, saying: I’m poor, my friends are poor; or, my friends are poor and therefore you denied me credit; or my friends are of a particular ethnic status and therefore I was declined; and there would be a disparate-impact lawsuit, I would guess.
And then the question would go back to the creditor, and the creditor would have to show that this friend score actually predicted creditworthiness, and that there was no better way for them to score that person. So the burden on a creditor who used a friend score like this, if a disparate impact was shown, would be to: (a) demonstrate that the score was actually predictive, and then (b) show that there’s not a better, less discriminatory way to do this. That’s how the story would play out. And I think if I were a creditor, I wouldn’t be confident about either of those things. So that’s kind of the pure legal analysis for the creditor.
Facebook itself has its own set of legal things to think about, if it were to start doing this. Whereas the Equal Credit Opportunity Act applies to the creditor, Facebook would be thinking about the Fair Credit Reporting Act. The Fair Credit Reporting Act is a law passed in 1970. In 1970, these credit bureaus like Experian, Equifax, and TransUnion were just starting to grow up. Congress saw that these credit bureaus were getting huge and said, these companies are collecting lots of information about consumers and then selling that information to be used to make credit decisions and employment decisions and things like that—they should really have some rules.
So the Fair Credit Reporting Act regulates companies that are classified as consumer reporting agencies, so if you’re a company that collects data about consumers for the purpose of selling that data for eligibility decisions, you have a responsibility to make sure that data’s accurate, up to date, accessible to consumers, disclosed in only limited circumstances. You have these data-management requirements that fall upon you, accuracy being one of the most important.
If I were a Facebook lawyer, and I was thinking about implementing this patent, I would be thinking: this makes me look a lot like a consumer-reporting agency. That is, I’m selling data about consumers for the purpose of credit decisions, and do we as Facebook suddenly have to take on new regulatory compliance requirements?
That’s the short story—the long story is that the FCRA, being a 40-plus-year-old law, doesn’t map super cleanly onto a model where a business like Facebook is getting most of its data first-hand from you and me and our friends. So it’s not crystal-clear how or whether the FCRA would apply to Facebook. That’s just to say that if Facebook got into the business of selling data about individual people for credit-determination purposes, that would make them look a lot like a credit bureau, and they would have to think a lot about how new legal and regulatory requirements might apply to them.
Meyer: Cool. Cool, cool. This is a question I should know, but I want to make sure it matches your understanding. Is Facebook selling their data—I guess their data qua data—to other companies right now? My understanding is they’re not even doing that. You can advertise with Facebook, and that is how you tap into their reams of user data, but there’s no way to go get that data unless you approach Facebook from an advertiser perspective.
Rieke: To my knowledge, that’s correct. I think of both Google and Facebook as the huge, first-party Internet companies, and I think both have the policy of, we don’t sell our users’ data to others. The way you get to the users is submitting advertising requests and saying which segments of people you want to reach, and then they do that.
Meyer: Cool, that was my understanding as well, but I realized I had been treating it as ground for so long that I hadn’t thought about it as figure in a while.
Rieke: Yeah, and I think that goes to an important point that you made earlier, which is: In a world where Facebook suddenly wanted to sell data about its users, even if just to say, hey, the average credit score of Rob’s friend network is 700, that would be a bit of a departure from how they currently manage user data.
I want to add the complexity that, in other countries, there are companies that do credit scoring based on Facebook data. And the way they do that is to say, hey, Rob, you want a loan? We’re gonna ask you, Rob, to share your Facebook data with us. And because you’ve given us that data, we’re gonna run some algorithm on it to generate a credit score.
Meyer: What countries are those?
Rieke: I think the Philippines. There are two companies—one is Lenddo, and the other is Kreditech. Both of these companies are credit-scoring companies. They operate for credit-scoring purposes exclusively abroad, in countries that don’t have the same kind of financial regime that the U.S. has. They do purely Facebook-based credit scores. And it’s just important to point out: These guys don’t practice in the U.S.—(a), I think because of regulatory risk that we’ve already talked about, and (b), because other countries don’t have the robust credit-reporting industry that we have in the United States. I’ll put an asterisk next to that only because there’s a lot of [Americans] who don’t have data in the credit bureaus. But we have a lot of people who do have data in the credit bureaus, whereas in other countries that’s not true.
Meyer: And are they using techniques similar to these, where they’re assessing your network?
Rieke: That’s a good question. The answer is yes, they’re also assessing your network in some way, shape, or form. I think that’s one of the hardest things to capture here as a nuance. The patent, to me, sounds like Facebook would itself be offering some kind of friend-based credit-score service. But another way to read the patent is that someone could—with the consent of you, Rob—access your Facebook data because that’s something you could offer them.
Meyer: I mean, is there a possibility here—and this is a real guess—my understanding is American intellectual-property regimes go further than our credit-scoring regulation. If they patented it, could they then use their monopoly on that patent to actually prevent it being developed elsewhere?
Rieke: Yes, if Facebook were of a mind to. I don’t know how well they could enforce this patent overseas—I don’t know the answer to that off the top of my head. But yeah, if you’ve been granted a patent for a practice like this, and you’re feeling litigious, and a startup company was doing this kind of thing, Facebook could say: Hey, no, sorry, that’s our invention. You shouldn’t do that without a license first. If you started a startup company tomorrow that did what this patent describes by asking users for access to their Facebook accounts, Facebook might send you a mean letter.
Meyer: You were going to get to the moral angle, and I think I blocked you from it.
Rieke: So the moral answer here is, two things. Number one, even though I mentioned that we have a pretty robust credit-reporting industry in the United States, there’s about 20 percent of the U.S. population that either don’t have credit files at the major U.S. credit bureaus or have credit files that are so small that they can’t get a credit score. And so the CFPB put out a report called “Credit Invisibles” that ran these numbers. So that’s a full 20 percent of the U.S. population that doesn’t have access to, like, a FICO score or a VantageScore or some kind of partial credit score, which makes it very hard for those people to access credit. Almost fully half of the individuals living in low-income neighborhoods are unscorable. And of course blacks and Latinos are far more likely than whites or Asians to have credit files that won’t give them a score. So that all forms this foundation—and a need—to say, well, how do we get access to credit to people who are just invisible to the system?
For that reason, I don’t think that anyone who is interested in financial justice should just immediately, out-of-hand dismiss something that sounds new or different. However, it seems pretty unlikely to me that a scoring system rooted in the community you’re a member of is going to be helpful to the people we’re trying to help. It seems pretty unlikely that someone who doesn’t have a credit score or doesn’t have a credit file and is poor is going to be helped if they have friends who are also in similar circumstances. So I have some skepticism about using your friends as a proxy for your creditworthiness to solve that problem. I think the nearer-term solutions are things like, hey, most people pay a cellphone bill every month, most people pay a rental bill every month, most people pay for plumbing and power every month—is there a way to capture those regular repayment behaviors, which are for the most part not currently included in a credit file? I think that’s where most of the near-term progress will be, not in kind of associational reporting systems.
And the other less nitty-gritty piece of this is that I think people are very used to the idea of a credit score based on their personal credit history. I think people are very used to the idea that, ‘I paid this loan on-time, I was a little late on that loan, so my credit score is X.’ I think the idea that you’re gonna be judged based on the behavior of those around you is going to feel really weird to people—and probably feel really weird to regulators, the first time that happens. That’s not to say that it’s de facto illegal or anything like that. I just think that, if that were to come into practice, that would come under scrutiny just because it feels weird. And I think in many cases feels kind of like the redlining practices—it feels a little like the practices that say you’re gonna get this benefit or not get this benefit based on the community of people you communicate with. Which I don’t think feels very good.
Meyer: It makes explicit an effect which is real but thoroughly implicit. Though that’s probably a talk for another time. Something else here is that this is the fourth example in a patent which is really not about this. It’s two or three pages of text, and one paragraph touches on the credit-scoring part.
Rieke: And it fits really oddly in this patent, too, honestly. Like the patent as a whole is quote-unquote an invention about how to authorize users based on a social network. Most of the patent is thinking about authorization and authentication and spam prevention. So this idea of screening people out of a loan-application process on the one hand fits, in that it’s kind of the same shape as these other things, but on the other hand, it’s a way more serious decision-making context. So, yeah, it feels a little weird and… I mean, it’s in there. And it’s very clear. But it’s not like, we found Facebook’s secret patent on credit scoring.
Meyer: Could there be a technical resolution to disparate impact where Facebook—I’m trying to figure out how this would work as I propose it—where it says, this person belongs to this ethnic group, they live in this neighborhood, and therefore we’re essentially going to handicap the score we hand off to the credit bureaus to specifically account for disparate impact—or does that then get into a whole other can of worms, because how do you quantify that?
Rieke: Yeah. That’s a really hard question. I apologize because this is gonna get super, super, super into the weeds, and this probably doesn’t belong in a story that you want normal people to read. There’s a regulation called Regulation B. So you have the Equal Credit Opportunity Act, which is the statute that Congress passed. Regulation B implements a lot of the requirements of the Equal Credit Opportunity Act in more detail. And I’m pretty sure Regulation B prohibits any consideration of race in any scoring system, period. So I’m not even sure you could take race into account in a strategy like that. But I think a strategy like you proposed may honestly be part of the toolkit of how we deal with this type of thing. I don’t think that that’s a solution that is gonna happen right away, but I can certainly imagine a future where, for these types of scoring systems, if you want to avoid disparate impact, you actually explicitly consider the factors that are important and, as you say, handicap them in a way that leads to different outcomes.
It’s actually really interesting—if you look back into the Congressional record, back to when Congress was passing ECOA, what you see is FICO arguing in front of Congress not to strip that data out of the decision. You see FICO saying, ‘if you want an outcome, just tell us the outcome that you want, and we’ll help you get there. Don’t take away the data. That’s not the way to do this.’
But that’s kind of far down the rabbit hole. If I were to make a prediction, I’d say 20 years down the road, we may be at a different place in how that data is actually used.
Meyer: Could I ask that you do get into the prediction game, because it sounds like the immediate outcome of this patent and technologies like it is that it would be very legally difficult for Facebook—and commercially difficult, and not really in their interest—to get into this game right now. And that the next steps for financial justice would be to consider more timely, regular payments that people are making. But when you look at 15, 20, 25 years, is this the kind of thing that you see play into people’s financial lives? And if so, does it have the possibility of being something stranger than just another consideration banks take into account when they look at mortgages and credit cards and student loans? That’s probably a larger and broader question than you feel comfortable answering.
Rieke: The question being: Is a broader array of data, such as the things we post on Facebook, will that go into the hopper of data that goes into a model to make these decisions in the future? I wouldn’t bet against it. In the long term, it’s hard for me to imagine a future in which we’re not using more types of data to make more types of decisions. I think right now the public imagination is a little bit ahead of where we are in the real world. When you talk to the quantitative-model builders at FICO and VantageScore, they’re very, very clear that your repayment-history information is far and away the most predictive thing they can find, to predict how you’re gonna do with future bill payments and how you’re gonna handle future credit. And this other stuff, even when they have a really good data set, like a perfect data set of your shopping history, it’s just not nearly as good to predict this behavior.