In 1935, a film critic for the Cincinnati Times-Star named H.T. Jordan took issue with the depiction of a Swedish king in Fox Studios’ biopic Cardinal Richelieu. In a piece called “Bad History,” Jordan wrote that the portrayal of the 17th-century monarch Gustavus Adolphus dragged down what was otherwise a handsome production with a gorgeous setting:

It is realized that a writer of motion picture plots is permitted some freedom with historical facts, but when one of the greatest characters in human history is introduced wantonly and unnecessarily into the plot structure and in a manner derogatory and despicable, the occasion gives rise to protest.

Jordan’s critique dovetailed with a period-specific hullabaloo: The Swedish government protested the portrayal—the king, they said, was not so attached to the bottle, nor would he have sold out himself and his cronies—and persuaded the MPPDA to censure Fox Studios, which eventually edited out the offending scenes. That sort of intervention hardly seems like it would happen today—yet Jordan’s words nevertheless feel eerily familiar.

This past awards season, nearly every film with any historical grounding has been the subject of such critiques. Selma, The Imitation Game, Foxcatcher, Big Eyes, and Unbroken have all been pinned with a scarlet letter for historical inaccuracies or omissions, and pundits, interested parties, and disinterested journalists alike have written in to detail their factual faults. This mode of critique is getting a lot of attention, even though publicly fact-checking docudramas goes back to films about Watergate and Vietnam, and has been a regular feature of Oscar smear campaigns since 1999’s The Hurricane. Yet it has never been a determining factor in which films win Oscars. In fact, several Best Picture winners—A Beautiful Mind, Argo, and The Hurt Locker among them—suffered their own “controversies” before taking home the gold.

What’s the point? The annual fact-checking cycle hasn’t yet persuaded Hollywood to transfigure all its biopics and historical dramas into documentaries. Writers theorize that publicly disputing the period details may contribute to a film’s failure to pick up a win or even a nomination, but a lack of screeners and late release dates could just as easily be factors when it comes to surmising the motivations of the primarily old, male Academy members who vote. If anything, the sheer volume of complaints is drowning out the question of when these inaccuracies matter and when they don’t. This isn’t to say that complicating one person’s version of history should stop—it’s to say the practice should be challenged, made better, more helpful.

Some suggestions have already been offered. At The Washington Post, Ann Hornaday proposed new rules for watching biopics. Under her regulations, the audience would cultivate a “third eye” that balances a consideration of the facts with an appreciation for fiction. This is democratic—but it also puts the onus entirely on the viewer. Fact-checking opinion pieces originate with viewers taking a film a tad too seriously, sure—but if universally poor movie-watching form were to blame, we might be besieged by such pieces year-round, given the healthy number of films based on real events that roll in, to less fanfare, throughout the year.

There have been some proposals for systematic reform, too. Last year, a piece in USA Today suggested movie studios might benefit from “couching” their films in more fictional terms. That’s fair, but the quoted analyst’s solutions were limiting—he suggested not using real names, telling only really old stories, and writing happy endings.

But the problem in 2015 lies with the Oscars, not with viewers or teary-eyed endings. It’s the politicized campaigns that occupy late fall and mid-winter that bring out the mobs with history books—the practice came back into vogue, after all, in 2001, when a Miramax Oscar consultant sent reporters a blog post attacking A Beautiful Mind for scrubbing the perceived homosexuality of its subject, John Nash. As Jason Bailey thoroughly pointed out in his explanation of Selma’s “controversy” in Flavorwire in December, respected news outlets now routinely time their searing editorials for late December and early January. In other words, their pieces land just as Academy members are receiving their first Oscar ballots, and as the rest of the world is ringing in the New Year.

If the origin of fact-checking for fact-checking’s sake lies with the Oscars, then so too must the solution. The impulse to question a movie’s interpretation is, in any case, a good one—it forces us to evaluate history and to review the ebb and flow of cultural values that makes an old story appeal to a new audience—but right now it’s unfocused. What better call to arms for the awards ceremony that annually highlights the industry’s achievements according to specific categories? This is a job for the Academy of Motion Picture Arts and Sciences, originally a public-relations organization, now best known for an annual awards ceremony with questionable hosting choices. So here’s a crazy proposal for its consideration: an Oscar category for Best Dramatic Research.

Imagine a category that rewards excellence in blending fact and fiction in an original manner (this is distinct from Best Adapted Screenplay, which celebrates the adaptation of a previous, separate work). One that nominates the little-known, little-championed group that brings a mass of information together into a story—the screenwriters, historical consultants, technical advisers, scholars, and directors who work in collaboration, and in tension, on some productions to prepare a true story for the big screen. One that emphasizes the overall quality of the synthesis of fact and fiction—how a film fares under Ann Hornaday’s “third eye.” This is a category that doesn’t nitpick the details, but appreciates a film’s attempt to bring together the realms of fiction and nonfiction. As with all categories, the winners would be chosen by a process that is gloriously, frustratingly subjective.

Judging films on the quality of their docudrama would be insanely complicated, and that’s okay. In one respect, such a category would add a little transparency: “For Your Consideration” campaigns would distinguish films that want to be scrutinized as docudrama from those that don’t. This would protect certain films from left-field critiques: The Master, which gave its characters fictional names, probably shouldn’t be judged on its faithfulness to L. Ron Hubbard. It would also hold others accountable: The Imitation Game, with its “Honor the Man. Honor the Film” marketing campaign, could probably use the extra scrutiny.

But in another respect, a category for dramatic research would complicate viewers’ notions of history—for the better. In recent months, the well-intentioned writers who champion the even-keeled, it’s-both-fact-and-fiction approach to docudramas have occasionally couched their discontent with gratuitous nitpicking in preachy, film-theory-primer terms about “how to watch a movie.” The proposed category would accomplish more or less the same goal—but with an eye toward embracing, rather than censuring, the general public. When the Academy of Motion Picture Arts and Sciences formed in 1927, its participating studio heads were joining up to combat the industry’s bad reputation for scandal and divorce. Awards added some necessary panache—the movie stars, the radio broadcast that began with the second ceremony—and the advent of television, color, and E! helped along the way.

Ever since, the ceremony's popularity has defined what it means to make film art, for the industry and the public alike. The categories highlight which elements of a production to venerate, in a way that's both digestible and didactic for the layman viewer, who may, like me, have to Google the difference between sound editing and sound mixing every year. Come for the dresses, stay for a primer in movies to name-drop and add to your Netflix queue in the coming months. The Academy has always self-consciously influenced the public conversation about taste, and it used to be quite transparent about it, back when there was an award for "Best Unique and Artistic Quality of Production." (It was later deemed redundant alongside "Best Picture.")

Of course, it’s rare for categories to come and go at the Oscars. Since the Academy’s founding in May 1927, the awards have added three hours and 12 categories. They’ve also introduced and lost seven (RIP, Best Title Writing). To be sure, three of the seven—Best Unique and Artistic Quality of Production, Best Engineering Effects, and Best Title Writing—were phased out immediately after the first Academy Awards in 1929, when the Academy was still defining what should be rewarded and what should be left out. Originally, too, the Shorts category was divided into three separate genres (Color, One-Reel, Two-Reel), which folded with changing technologies—the use of color and the consolidation of shorts into one reel. Best Dance Direction was awarded from 1935 to 1937 to keep up with all the musicals, as was a second, short-lived category for Best Score (Comedy or Musical, then Adaptation or Treatment).

But the Academy has proved pliant to audience outcry over snubs. The Academy Juvenile Award was given on and off from 1934 to 1960, after Jackie Cooper, then nine years old, was perceived to have been robbed of Best Actor for Skippy (1931). More recently, the Academy expanded the Best Picture pool in 2009 from five nominees to 10 to remedy the snub of The Dark Knight (it turned out to be a not-so-good idea: the rules changed again in 2011 to make the number more flexible). It revoked Music Branch members’ ability to contact other Music Branch members in the wake of a dirty insider campaign for “Alone Yet Not Alone.” Just this past year, the Academy passed a rule to credit overlooked visual-effects production designers and another to allow more producers to be considered for Best Picture (if they consider themselves a producing team). It’s not unthinkable that a perceived fact-based smear like the one surrounding the lack of Oscar nominations for Selma could prompt another change to the rules.

Especially since the gray areas are only getting more significant. Consider the many genres that currently base, and market, their films on real research. Science fiction has taken the lead in the past few years. Shamed though it later was by an astrophysicist and a NASA task group manager over the direction of its space debris and the gravitational heft of Sandra Bullock’s hair, Gravity took its central idea from the Kessler Syndrome, in consultation with the award-winning scientist and NASA alum Kevin Grazier. Whatever you think of the Tesseract and the impossibility of traveling back in time, Interstellar was written with the help of the theoretical physicist Kip Thorne, who helped the movie accurately represent a black hole and envision a tidally locked planet.

Most movies seek out consultants out of pure necessity—when they’re period efforts or use a lot of old-timey ammunition, for instance. But some seek out accuracy where, perhaps, they don’t even have to: David Fincher, who has never before shown creative block when it comes to torture scenes, took pains to employ a consultant to determine the visual content of Gone Girl’s most grisly scene. There are now almost as many articles about movies that accurately predict the future (the creepily prescient Minority Report, by the way, employed 15 experts) as there are about movies that get the past and present wrong. The upshot? There are more than enough movies to fill out the customary five slots.

Could it really happen? I’m not naïve enough to think so: In spite of the ideal that such an Oscar category would inspire more productive conversation, the reality is that the conversation would probably shift to the political, social, and ethical leanings of the secretive Academy. Besides, there are significant caveats. In keeping with the ceremony’s traditions, a Research Branch of the Academy would have to be convened, made up of members of the historical and technical consulting and advisory community, to vote on the category. The show would have to devote three and a half minutes of its already overstuffed three-plus hours to thanking the team that made the feat of interweaving a movie’s narrative demands with its historical grounding possible, and to giving some screen time to the names of the men and women who did the nitty-gritty work in the archives. And the Academy members involved would likely have to do some heftier reading than the 90-to-120-page scripts they usually cover.

But it’s important, anyway, for the Oscars—and the viewers whose outcry has the power to influence them—to think about adapting to the changing landscape of film criticism, at both the expert and the popular level. For years, publications have sought out experts’ opinions on the latest Hollywood movies. And just a month ago, Michelle Obama, Bradley Cooper, and a veterans’ advocacy group launched a certification program for accurate portrayals of U.S. veterans. These stamps of “authenticity,” whatever that means, may continue to affect the goings-on of an awards season dedicated to the quality of artistic projects. Winning an Oscar is, for better or worse, still the authoritative seal of approval. The Academy might want to start reimagining, and recalibrating, to keep it that way.