The web lets us read lots of articles from lots of publications. Scientists do the same thing now, too.
We don't read whole newspapers or magazines anymore. We hear about what our friends are reading, or we search the web for what we're interested in.
Turns out that scientists are doing exactly the same thing.
Universities produce an enormous amount of knowledge every year, mostly in the form of papers, which get bound into expensive journals. But universities also have limited funds, so their librarians must make decisions about which journals to buy.
To help them decide, Thomson Reuters assigns each journal an "Impact Factor," an enigmatic index supposedly based on how often a journal's papers are cited. The more its papers are cited, the higher a journal's Impact Factor (again, supposedly) rises -- and the more often it's purchased. And since, for most of the twentieth century, journals were just that -- physical books -- the more copies were purchased, the wider the papers inside were distributed, and the more each was cited.
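The conventional two-year Impact Factor is simple arithmetic: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those years. A minimal sketch, with the journal and its numbers invented for illustration:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year Impact Factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 480 citations in 2012 to papers from 2010-11,
# which together published 200 citable items.
print(impact_factor(480, 200))  # 2.4
```

Part of the enigma is what counts as a "citable item" in the denominator -- a judgment Thomson Reuters makes, not the formula.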
A recent study, conducted by George Lozano, Vincent Larivière and Yves Gingras, reinforces that picture. From 1902 to the late 1990s, the correlation between a journal's Impact Factor and how often its papers were cited kept increasing. Ignore the equations hanging around the chart and just look at the trend: among natural and medical science papers, the line representing how closely a journal's Impact Factor tracks citations to its papers climbs steadily.
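The quantity the study tracks over time -- how tightly journals' Impact Factors line up with citations to their individual papers -- is a correlation coefficient. A toy sketch with invented numbers, not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Made-up journals: each pair is (Impact Factor, citations per paper).
impact_factors = [1.2, 2.5, 4.1, 8.3, 15.0]
citations_per_paper = [1.5, 2.1, 4.8, 7.9, 13.2]
print(pearson_r(impact_factors, citations_per_paper))
```

When that number falls -- as the study finds it does after the late 1990s -- a journal's Impact Factor tells you less and less about how much any paper inside it actually gets read and cited.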
And then scientists start using the web. In the late 1990s -- as electronic searching replaced reading physical journals -- the correlation plummets. Write the authors:
Now that scientific information is disseminated electronically, researchers are less likely to read entire journals; instead they conduct electronic literature searches on particular topics and find specific articles from a wide variety of journals.
This effect appears at smaller scales, too. Physics, as a discipline, embraced digital journals earlier than the rest of science: in the early 1990s. And it appears on the graph:
This research reinforces the enigma of the Impact Factor, which -- like so much else in academic publishing -- is assessed by a corporation, apart from the open rigor of academia. (As of late 2007, no one had managed to reproduce the Impact Factor independently.) Perhaps scientists will move away from it -- and that could mean that institutions, papers and individual scientists might no longer be judged on the hazy Impact of their work. Snort the study's authors:
This should force a return to direct assessments of paper quality, by actually reading them.