Google Street View images, similar to this one, were used by a computer-vision algorithm to predict income levels. (Google Street View)

Economists really like playing around with data, but traditionally, their explorations have been limited by the type of data that’s out there—usually, aggregate statistics or self-reported surveys. But now, the digitization of nearly everything is unlocking a trove of new data for them to poke and prod. One team of researchers, for instance, has looked at how cellphone data can be used to estimate broad economic changes in real time.

What else can be done with all the new data that’s out there? A new National Bureau of Economic Research paper looks at how images from Google Street View might offer a surprisingly accurate means of predicting household income in urban areas. In their paper, Ed Glaeser, Scott Kominers, Michael Luca, and Nikhil Naik explain how they trained a computer to spot patterns in New York and Boston city blocks, so that it could guess the pictured households’ income more accurately than if it had extrapolated from data on residents’ education or race.

First, the researchers trained the computer on what it was looking for. “We obtain a set of image features based on the textures, colors, and shapes present in the images,” explains Naik, a Ph.D. student at MIT’s Media Lab. Then, he explains, they used a standard machine-learning algorithm to map the relationship between those visual elements and income. After that, the computer was fed Street View images it hadn’t yet seen—12,200 of New York City and 3,600 of Boston—and generated an income prediction for each one.
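The pipeline Naik describes—turn each image into a feature vector, then fit a regression from features to income—can be sketched in a few lines. This is only an illustration, not the paper's method: the toy color-histogram features and the ridge-regression fit below are stand-ins for the richer texture/color/shape features and the "standard machine-learning algorithm" the researchers actually used.

```python
import numpy as np

def color_histogram_features(image, bins=8):
    """Toy stand-in for the paper's image features: a normalized
    per-channel color histogram, flattened into one vector.
    `image` is an H x W x 3 uint8 array."""
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(image[:, :, channel], bins=bins, range=(0, 256))
        feats.append(hist / image[:, :, channel].size)  # each channel sums to 1
    return np.concatenate(feats)  # length = 3 * bins

def fit_income_model(features, incomes, ridge=1.0):
    """Ridge regression mapping feature vectors to block-level income.
    `features` is (n_images, n_features); `incomes` is (n_images,)."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias column
    # Closed-form ridge solution: (X'X + rI)^-1 X'y
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ incomes)

def predict_income(weights, features):
    """Apply the fitted model to unseen images' feature vectors."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ weights
```

In this sketch the model is trained on blocks with known Census incomes, then `predict_income` is applied to feature vectors from images the model has not seen, mirroring the held-out New York and Boston evaluation described above.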

Those predictions ended up matching the incomes reported in the Census’s American Community Survey quite well. “The computer-vision algorithm explains 77 percent of the variation in income at the block-group level, while race and education combined predict only 25 percent, which means that the visual appearance of street blocks is able to predict income better than education and race,” says Naik. He says he’s not sure exactly what patterns the program was recognizing in the images, but some of his previous work suggested that the type of buildings in the picture mattered most.
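The “explains 77 percent of the variation” figure is the standard coefficient of determination, R²: the fraction of the variance in actual incomes that the model's predictions account for. A minimal sketch of how such a number is computed (the function name here is illustrative, not from the paper):

```python
import numpy as np

def r_squared(actual, predicted):
    """Fraction of variance in `actual` explained by `predicted`:
    R^2 = 1 - (residual sum of squares / total sum of squares)."""
    ss_res = np.sum((np.asarray(actual) - np.asarray(predicted)) ** 2)
    ss_tot = np.sum((np.asarray(actual) - np.mean(actual)) ** 2)
    return 1.0 - ss_res / ss_tot
```

Comparing the two figures just means computing this statistic once for the vision model's predictions (0.77) and once for a regression on education and race alone (0.25).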

While that’s promising, Naik notes that it’s not necessarily the case that this method will work as well in other cities, which might have different underlying patterns that a computer could pick up on (or none at all). Still, he considers it a proof of concept, and wonders if Street View images could be used to understand patterns of poverty and wealth around the world. One obstacle, of course, is that neat city blocks probably make for better data than the scattering of houses that characterizes many more-rural areas. So Street View images of slums might be a lot harder for computers to parse.
