In 2010, industry publications reported that production on Battleship, a new blockbuster starring A-list actors including Liam Neeson and Alexander Skarsgård, had begun. The punch line, of course, was that the film was “loosely based” on the popular Hasbro board game of the same name. With this news, the blogosphere exploded with incredulity over commercial cinema’s seeming inability to come up with original ideas for motion pictures. Cinema Blend’s Eric Eisenberg lamented, “More than just being about the quality of Battleship, a big part of the reason people were turned off of the movie was because the idea of a film based on a board game is ridiculous. I understand that Hasbro loved all of the money they made from the Transformers movies, but seriously, we don’t need a Hungry, Hungry Hippos adaptation.”
Although Battleship went on to receive fairly positive reviews (many critics expressed surprise over how “not bad” the film was), the film underperformed at the box office, grossing only about $300 million worldwide against a $200 million production budget, too little to turn a profit once marketing costs and theaters’ share of ticket sales are factored in. Nevertheless, shortly after its release, it was announced that several other board games would receive the big screen treatment, including Monopoly, Action Man, and yes, even Hungry Hungry Hippos. It isn’t strictly accurate to call these films “adaptations” of board games—Hungry Hungry Hippos and Battleship are brands, names, and images, but they’re not narratives or characters. And this seems to be precisely Eisenberg’s point: The problem with contemporary commercial cinema (epitomized by Hollywood) is that it’s more concerned with making money than it is with telling quality stories or creating indelible characters.
As demonstrated by the strong negative reaction to Battleship, commercial cinema’s attachment to the processes of repetition, replication, sequelization, and rebooting—to films that appear in multiplicities—is generally understood in a negative light in the popular press. Every announcement of a new sequel or reboot is met with responses decrying the loss of creative innovation and the steady decay of the cinema. In a 2012 story in Vulture, Claude Brodesser-Akner describes the critical reaction to Universal’s multimillion-dollar deal with Hasbro toys this way: “The reaction from many in the creative community was scorn, followed by resignation: Had it come to this? A latex rubber doll filled with gelled corn syrup was now what passed for intellectual property?” There’s a generalized sense that commercial cinema is losing its ability to come up with new ideas and, in its drive for profits, is finally scraping the bottom of the story-property barrel.
But most of these reactions confuse the need to make action-heavy, dialogue-light tentpole films that do well internationally with the drive to make films out of known story properties, something now called “preawareness.” These are two different production strategies that just happen to work well together. Hollywood’s dysfunctional love affair with blockbusters—movies that can potentially make or break a studio—is a relatively modern phenomenon.
Throughout the 1960s, the major U.S. studios lost money at the box office for several interrelated reasons, including the rising popularity and widespread availability of television, “white flight” to the suburbs, and the slow dissolution of the studio system form of production.
To recoup their losses, studios began investing production money in fewer, very expensive “event” pictures. This new mode of production proved problematic when these pictures failed to recover their production costs at the box office. Famous flops like Cleopatra (1963), Star! (1968), and Hello, Dolly! (1969) put major studios such as 20th Century Fox, Warner Bros., and United Artists on the brink of financial ruin. After a brief flirtation with making films for urban African American audiences, leading to the creation of the blaxploitation cycle of the 1970s, studios discovered that fans would go to see movies like Jaws (1975) and Star Wars (1977) over and over, thus ushering in the era of the modern blockbuster.
Ironically, it was the directors of these first blockbusters, Steven Spielberg and George Lucas, who recently made headlines when they predicted disaster for the film industry. “There’s going to be an implosion where three or four or maybe even a half-dozen megabudget movies are going to go crashing into the ground, and that’s going to change the paradigm,” Spielberg reportedly said at the opening of a University of Southern California media center. In these formulations, multiplicities (or at least big-budget tentpole films, which are almost always part of a multiplicity) threaten not simply American cinema’s ability to be seen as art, but its very ability to exist. And so, as is often the case, history winds up repeating itself.
Although the U.S. blockbuster didn’t emerge until the 1970s, the practices of basing films on pop-cultural ephemera like popular board games and duplicating familiar story properties and characters have been common in filmmaking since its origins. In 1905, Thomas Edison released The Whole Dam Family and the Dam Dog, a film based on a popular souvenir postcard image. The movie’s appeal was predicated on its ability to create a moving, breathing version of a popular postcard series featuring a humorously named family. It isn’t a big leap from a successful film based on a souvenir postcard to a successful film based on a board game. The drive to exploit audience interests in comic strips, magic lantern shows, vaudeville, popular songs, and other films and then to replicate those successful formulas over and over until they cease to make money is foundational to the origins and success of filmmaking worldwide.
* * *
It isn’t just the film industry that relies on multiplicities to generate sure profits. Television also relies on the replication and repetition of successful formulas as a central part of its production strategies. Although this process of creative theft is central to capitalism itself, it is, as in the case of cinema, a generally denigrated process, as if the entry of capitalism somehow precludes the possibility of art. Unoriginal art, in critical parlance, is an oxymoron.
In the 1950s, during the early days of television ownership, most American TV owners were upscale and urban. They were what we now call “early adopters,” and they had the money to invest in a new and untested form of home entertainment. The industry was based in New York, and for a variety of reasons, including sponsors’ ownership of time blocks, live televised theater was one of its dominant forms.
These teleplays featured adaptations of works by the nation’s most notable authors (Tennessee Williams, Ernest Hemingway, F. Scott Fitzgerald). Television playwrights, especially Rod Serling and Paddy Chayefsky, were national figures. This early programming was modernist in its insistence on the unique, isolated text, and hence was distinct from the forms of multiplicity—especially the situation comedy and the continuing dramatic series—that soon came to dominate. But the world of television changed as new forms of financing evolved and more Americans acquired sets. TV subsequently became understood as a lowbrow, commercial mass medium that could be experienced by anyone.
The scholars Michael Newman and Elana Levine argue that beginning in the 1970s and 1980s, when the concept of “quality television” increased the possibilities of targeting programming to desirable (read: affluent) audiences, television began to once again aspire to become a highbrow medium. Other technological changes, such as the practice of putting entire seasons on DVD, as well as the amount of serious writing that critics began to devote to their favorite shows, created the sense that American television was finally being appreciated as an art form. According to Levine and Newman, “Legitimation always works by selection and exclusion; TV becomes respectable through the elevation of one concept of the medium at the expense of the other.” Indeed, HBO’s famous ad campaign from the 1990s, “It’s not TV. It’s HBO,” is a good example of how television can be legitimated as art only if it’s distanced from the medium of television itself.
As the current demonization of reality-television stars reveals, we are inherently suspicious of the popular. Furthermore, the gatekeepers of the world of media appreciation, who are paid to dispense their good taste—film and television critics, but also media scholars—reify and limit the sphere of what constitutes good taste. So it makes sense that most of the texts that fit under the broad umbrella of multiplicities, such as film sequels and cycles, television remakes and spinoffs, are most frequently discussed as “guilty pleasures” (when they’re not dismissed outright as harbingers of the end times of cultural production).
Since television is a younger medium than film, the field of television studies is currently grappling with the same conversations that film-studies scholars were having in the 1970s. Only recently has television itself reached the stage where it’s able to “legitimize” itself and make claims for its status as art. Some critics feel that the current golden age of television (which many date to the premiere of HBO’s The Sopranos in 1999), like the “golden age” of cinema in the 1970s, is in a state of decline due to its insistence on repetition. But as Vox’s Todd VanDerWerff noted in a 2013 essay for A.V. Club, “The dirty little secret here is that essentially every decade except the 1960s has been proclaimed the ‘golden age of TV’ at one time or another.”
In other words, critics and historians of television, much like their counterparts in cinema studies, are constantly searching for that ideal moment when the art form they love was considered to be at its purest, to be reaching its richest potential. But to dismiss movies or TV shows because they’re inspired by, or part of, a preexisting franchise or series is to ignore the entire history of the moving image. Cinema has always been rooted in the idea of multiplicities—that is, in texts that consciously repeat and exploit images, narratives, or characters found in previous texts. Self-cannibalizing cycles and sequels, and even the practice of making films out of toys and board games, are filmmaking strategies dating back to the industry’s first decade, not a symptom of contemporary culture’s inability to create anything new.
This article has been adapted from Amanda Ann Klein and R. Barton Palmer’s book, Cycles, Sequels, Spinoffs, Remakes, and Reboots: Multiplicities in Film & Television.