Like other cultural industries, publishing is founded on hits. Yet the business of predicting best sellers remains an enigmatic art—the province chiefly of gut instinct and educated guess. Sometimes these faculties serve the industry well; other times not so much, especially when it comes to first-time authors. J. K. Rowling and John Grisham endured serial rejection before landing the deals that brought their work to the masses. E. L. James’ Fifty Shades of Grey found a traditional publisher only after it had been self-published.
A computer algorithm able to identify best-selling texts with at least 80 percent success sounds like science fiction. But the "bestseller-ometer" is emphatically non-fictional. It is the subject of an upcoming tome, The Bestseller Code: Anatomy of the Blockbuster Novel, by Jodie Archer, a former research lead on literature at Apple, and Matthew L. Jockers, an associate professor of English at the University of Nebraska-Lincoln. The algorithm's claimed efficacy rests on its track record "predicting" New York Times best sellers when applied retrospectively to novels from the past 30 years.
Several years in the making and the product of the processing power of thousands of computers, the bestseller-ometer is an attempt to identify the characteristics of best-selling fiction at scale by interrogating a massive body of literature (more than 20,000 novels). By seeking to put the traits that set best sellers apart from lesser-selling work on something approaching a scientific footing, the project provides a data-driven check on received wisdom about the "secrets" behind top-selling fiction. It also presages a possible future in which publishers turn to technology to help cut through the vagaries of picking prospective best sellers.
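Archer and Jockers have not published their model, so the following is only a sketch of the general shape of the task rather than their method: train a text classifier on a labeled corpus of novels, then measure how well it "predicts" bestseller status on held-out books. The corpus, the labels, and the simple word-frequency features below are all hypothetical stand-ins, written in Python with scikit-learn.

    # A minimal, hypothetical sketch of retrospective bestseller
    # classification. NOT the authors' actual (unpublished) model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def train_bestseller_model(texts, labels):
        """texts: full novel texts; labels: 1 if a NYT best seller, else 0."""
        # Hold out 20% of the corpus so "predictions" are scored on
        # novels the model has never seen.
        X_train, X_test, y_train, y_test = train_test_split(
            texts, labels, test_size=0.2, random_state=0, stratify=labels)

        # Represent each novel by the weighted frequency of its words and
        # word pairs: a crude proxy for topic and style.
        vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
        X_train_vec = vectorizer.fit_transform(X_train)
        X_test_vec = vectorizer.transform(X_test)

        # A simple linear classifier over those features.
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train_vec, y_train)

        preds = model.predict(X_test_vec)
        print(f"Held-out accuracy: {accuracy_score(y_test, preds):.2f}")
        return vectorizer, model

A real system would presumably engineer far richer features (theme, plot trajectory, style markers) than raw word counts, but the evaluation logic, classifying unseen novels and comparing the results against historical bestseller lists, is the same in spirit.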