When Welfare Decisions Are Left to Algorithms

The political scientist Virginia Eubanks worries that technology is providing “the emotional distance that’s necessary to make what are inhuman decisions.”

Homeless people sleep in the Skid Row area of downtown Los Angeles. (Jae C. Hong / AP)

Fifty years ago next month, Dr. Martin Luther King Jr. spoke at the National Cathedral in Washington, D.C., at what turned out to be his last Sunday sermon. He talked about the perils and promises of a “triple revolution,” as he called it, consisting of automation, the emergence of nuclear weaponry, and the global fight for human rights. Regarding that first prong, he noted at the time, “Through our scientific and technological genius, we have made of this world a neighborhood and yet we have not had the ethical commitment to make of it a brotherhood.”

It’s this speech that Virginia Eubanks, an associate professor of political science at the University at Albany, SUNY, comes back to at the end of her new book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. In it, Eubanks studies some of the seemingly neutral—and even well-meaning—technologies that promise to streamline the U.S. welfare apparatus. Automated systems that gauge eligibility for Medicaid and food stamps, databases that match homeless people to resources, and statistical tools that detect cases of child abuse all hold the potential to revolutionize welfare programs. Eubanks examines these technologies, detailing the ways they can sometimes compromise the rights of the very people they supposedly help.

I recently spoke with Eubanks about some of the main themes in her book. The conversation that follows has been edited for length and clarity.

Tanvi Misra: In your book, you lay out the troublesome history of poverty-management systems: from hellish “poorhouses” to the scientific charity movement, and from the New Deal welfare apparatus to the automation of welfare. What is the common thread?

Virginia Eubanks: Often when we talk about new technologies, we talk about them as “disruptors”—things that shake up the system that we're in right now. One of my big arguments in the book is that the tools that I’m talking about are more evolution than revolution. So that history really, really matters.

I start the book with a brick-and-mortar poorhouse because it was the most innovative poverty-regulation system of its time, in the 1800s. It rose out of a huge economic catastrophe—the 1819 depression—and the social movements organized by folks to protect themselves and their families. What’s really important about the poorhouse—and this is the thread that goes throughout all of the things I talk about in the book—is that it was based on this distinction between what at the time were called the “impotent” and the “able” poor. The “impotent” poor were folks who, by reason of physical disability or age or infirmity, just couldn’t work. The “able” poor were those folks who moral regulators at the time believed were probably able to work, but might just be shirking.

That distinction between the impotent and the able poor, which today we would talk about as “deserving” and “undeserving” poor, created a public-assistance system that was more of a moral thermometer than a floor that was under everybody protecting their basic human rights.

I think of that as the deep social programming of all of the administrative public-assistance systems that serve poor working-class communities. That social programming often shows up in invisible assumptions that drive the kind of automation of inequality that I talk about in the book.

Misra: You write about the “digital poorhouse” as an extension of these previous systems. What is that and when did it originate?

Eubanks: One of the most important historical moments that I talk about in the book is the rise of what I call the “digital poorhouse,” which is really the shift from quite sophisticated but analog systems of control to digital and integrated systems of control.

When I first started doing this book, I actually began in the New York State archives, looking for the technical documents of when the poverty-management system started to be computerized. I had just assumed that that would have happened in the 1980s with the widespread uptake of personal computers, or in the 1990s when the actual policy change happened around welfare reform, which required that local welfare offices computerize some of their processes.

But actually, the move to computers in the administration of public assistance happened in the late 1960s and early 1970s. That was really surprising to me. What I learned was that right at that moment, there was a very successful national welfare-rights movement that was challenging discriminatory eligibility rules in public-assistance programs. It was succeeding in opening up the welfare rolls to those folks who had been unjustly barred in the past, especially women of color and never-married mothers. As a result, the rolls expanded very quickly. Though it’s important to understand that public assistance has never reached even as many as 50 percent of people under the poverty line, right around 1970, it got close to 50 percent. Four-fifths of children living under the poverty line were receiving public assistance of some kind. At the same time, there’s a backlash against the Civil Rights movement, especially black power, and there’s a recession.

That is the moment that the “digital poorhouse” arises, that these new technologies come into play. And if you look at the size of the rolls, they basically start to drop off right at that moment and continue in a downward trajectory until today—with less than 10 percent of people under the poverty line receiving cash assistance.

Misra: One of the case studies in your book was of the “coordinated entry system” in L.A., which started in 2013. It’s based on the housing-first approach, which first aims to get a roof over people’s heads, and then helps them in other ways. The coordinated-entry system itself consists of a survey, which gathers information about people, and plugs it into a database. Then, an algorithm ranks the cases on a “vulnerability index” so that the most vulnerable ones can be helped first.

That seems pretty positive at first glance.

Eubanks: The housing-first approach is clearly a really positive approach to the housing crisis. And I think there’s a definite argument to be made for prioritization. There are 58,000 unhoused people in Los Angeles County alone, and there are not currently enough housing resources for everyone. So, I understand the impulse.

But one of the things that I did in this book that might be a little different is that I started from the point of view of unhoused folks themselves, who are the targets of this system. What really stood out to me from their stories is the difficult choices they have to make in how they interact with the system. Because that survey I talked about? It asks deeply private or even incriminating questions about personal behavior. It asks if you are having sex without protection, if you’re trading sex for money or drugs, if you’re thinking of harming yourself or others, if you’re running drugs for someone else, if there’s an open warrant on you. And if you answer “yes” to these questions, you potentially get a higher score on the vulnerability index, which prioritizes you for housing.

Under existing federal data standards, the information that’s stored in this Homeless Management Information System database can be accessed by law enforcement on the basis of only an oral request. So you don’t need a warrant—you don’t even need a written request. So to many people I spoke to, it was unclear where the line was between this system and the criminal-justice system.

I want to be really fair; there were definitely some people who said, “Coordinated entry was a gift from God. It is the best thing that ever happened to me because it helped me get housed.” I will also say that even the people who had success with it had moments of reflection about it: “It’s strange that I should get housed when so many other people I know who are going through similar things to me didn’t get housed. That doesn’t seem right.”

But for the folks who haven't had success being housed, folks like Gary Boatwright, this idea that the unhoused community was being assessed on a spectrum of deservingness for housing really, deeply troubled them. Gary was 64 at the time I spoke to him. He had been unhoused and living on the street for almost 10 years off and on. He said to me, “This is just another way of kicking the can down the road.” The problem is not scoring people—the problem is really that there’s just not enough housing for the 58,000 people in Los Angeles.

And exactly what people were afraid of really happened to Gary. It wasn’t directly attributable to the coordinated-entry system, but he was on the street long enough that the everyday behaviors of being unhoused—sleeping on the sidewalk, leaving your stuff on the sidewalk, public urination—are often treated as crimes, leaving people open to criminalization. As far as I could tell, he got really upset one day around public transportation and was arrested for attacking a bus. He spent close to nine months in jail. He is out now, and doing well.

Misra: So, it’s similar to the argument Khiara M. Bridges makes in her book about privacy rights and mothers on welfare: that it isn’t that folks choose to exchange their privacy for a benefit, but that they don’t really have a meaningful choice.

Eubanks: Yes, and this question of consent is important here. In Los Angeles, the folks who are given this survey do sign an extensive informed-consent document. But it seems to me that you are stretching the boundaries of informed consent if access to a basic human need like housing is in any way contingent on you filling out this form.

Part of the consent form says that the information gets shared with a lot of other agencies and to know more about that you have to request another form. Folks who go through the process of requesting the second form get a list of 168 agencies this information is shared with. You can ask to be expunged from the database, but the process by which you do so is really unclear—and some of your information stays in the database. The consent lasts for seven years and you have to actively stop it—by writing in and saying, “I withdraw my consent.”

So, it’s legitimate that people can have fears about how that information is being used and shared.

Misra: After writing this book, what conclusions have you come to about the way the U.S. addresses inequality?

Eubanks: One issue is the conversation that’s happening at this moment around inequality in this country—not just economic inequality but inequality writ large. What I want people to take from this book is that though we often talk about these systems as disruptors or as equalizers, at least in the cases that I research, they really act more like intensifiers or amplifiers of the system we already have.

One of the things I most fear about these systems is they allow us the emotional distance that’s necessary to make what are inhuman decisions. I do not want to be the caseworker looking at the 58,000 people in Los Angeles and having just a handful of resources and deciding who gets them. That is an incredibly difficult decision to make. My fear is that sometimes these systems act as empathy overrides—that we are allowing these machines to make decisions that are too difficult for us to make as human beings. That’s something that we really need to pay attention to because in the long run that means that we're giving up on the shared goal of caring for each other.

This post appears courtesy of CityLab.