Elizabeth Pisani explains (pdf) why large amounts of data collected by organizations like Google and Facebook could change science for the better, and how it already has. Here she recounts the work of John Graunt from the 17th century:

Graunt collected mortality rolls and other parish records and, in effect, threw them at the wall, looking for patterns in births, deaths, weather and commerce. ... He scraped parish rolls for insights in the same way as today’s data miners transmute the dross of our Twitter feeds into gold for marketing departments. Graunt made observations on everything from polygamy to traffic congestion in London, concluding: “That the old Streets are unfit for the present frequency of Coaches… That the opinions of Plagues accompanying the Entrance of Kings, is false and seditious; That London, the Metropolis of England, is perhaps a Head too big for the Body, and possibly too strong.”

She concludes:

A big advantage of Big Data research is that algorithms, scraping, mining and mashing are usually low cost, once you’ve paid the nerds’ salaries. And the data itself is often droppings produced by an existing activity. “You may as well just let the boffins go at it. They’re not going to hurt anyone, and they may just come up with something useful,” said [Joe] Cain.

We still measure impact and dole out funding on the basis of papers published in peer-reviewed journals. It’s a system which works well for thought-bubble experiments but is ill-suited to the Big Data world. We need new ways of sorting the wheat from the chaff, and of rewarding collaborative, speculative science.