Guy Williams and his fellow visual-effects artists have spent so much time staring at Will Smith’s face, they’ve practically memorized his every pore.
“We joke sometimes that we probably know his face better than his wife does,” Williams told me in September, laughing. “I can tell you exactly how he forms a smile. I can even tell you the 12 different flavors of Will Smith’s smile and the subtleties of each one. It gets pretty obnoxious.”
Becoming this intimate with the actor’s visage was an occupational hazard for Williams, a visual-effects supervisor for Gemini Man, Ang Lee’s sci-fi thriller about a retiring hitman battling his younger clone. Williams and his team at the digital-effects studio Weta were tasked with helping to create a version of Smith in his early 20s that could believably interact with his 51-year-old self on-screen. Together with another visual-effects supervisor, Bill Westenhofer, Williams and hundreds of artists tracked the actor’s movements on set and studied his previous work to build Junior, a digital human who resembles Bad Boys–era Smith. Westenhofer told me to think of Junior as Gemini Man’s version of Gollum from Lord of the Rings, the type of CGI creature molded by digital artists but rooted in an actor’s work. “The essence of what you see is what Will brought to the set,” he explained. “We created the digital person, but the choices [Junior] made were Will’s.”
Still, that’s an oversimplification of the process, which took about 500 artists and two years to perfect. When it comes to de-aging, visual-effects artists aim not to re-create or copy the image of an actor’s younger self, but to interpret the character being played. In Smith’s case, his clone in Gemini Man was trained as an assassin, so he couldn’t have the lanky build of Smith from The Fresh Prince of Bel-Air. On top of that, effects of this caliber require compromise: As much as dark lighting, long takes, and clever costuming help hide flaws in Junior’s presentation, the film couldn’t plausibly lean on those tricks in every scene.
Lee admitted that the end result isn’t perfect—the final, daylight-drenched scene looked “goofy,” he told me—but the film operated “almost like a guinea pig” for the cutting-edge technology he wanted to implement. As a director, Lee often pushes filmmaking boundaries, so the idea of furthering de-aging by building a full digital human sounded appealing. “We are in a digital era,” he explained, “so to me it’s only logical to [de-age an actor] right in front of your eyes through digital effort.”
Indeed, de-aging actors digitally is becoming the new normal in Hollywood. Though the practice of manipulating an actor’s look rather than casting age-appropriate performers has been around since the mid-2000s—The Curious Case of Benjamin Button being a prime example—visual-effects artists have worked overtime on it this year. In March, the ’90s-set Captain Marvel toyed with Samuel L. Jackson’s appearance, erasing decades off his face. Marvel deployed the technique again in April, incorporating an estimated 200 aging and de-aging shots of various actors throughout Avengers: Endgame. Five months later, the horror sequel It Chapter Two de-aged its young cast members so that they would match their preteen looks from the first film. (Thanks, puberty!) And Martin Scorsese’s gangster epic The Irishman, released at the end of November, dialed back the ages of its stars, Robert De Niro, Al Pacino, and Joe Pesci—all in their 70s—to portray the lives of mobsters across entire lifetimes.
At a time when Hollywood’s population of box-office-busting movie stars is dwindling, de-aging allows existing ones to be reborn—or, at the very least, to ensure their longevity. For an industry that relies on rebooting franchises, it’s only logical that filmmakers would want to do the same to its boldface names. Consider the news that James Dean could be digitally resurrected to star in a new movie; taken to its bleakest, most Black Mirror–esque extreme, the notion of re-creating deceased actors via visual effects could pave the way for a new era of moviemaking.
“It’s a form of immortality, if you think about it,” Olcun Tan, a visual-effects supervisor based in Los Angeles, told me. He pointed to Mickey Mouse as the optimal version of a “movie star” with staying power, a type of fictional character turned brand. To achieve everlasting fame, stars would go through the reverse, from being a household name to becoming a digitally reusable character, another tool in a filmmaker’s toolbox. “I’m not saying this is what the future will bring, because this is a little dark, but if you can imagine it, there is a likelihood it can happen,” Tan said. “Because if the film industry is trying to reverse current actors’ ages because it makes them money, you have to consider there is a likelihood they’ll license their appearance at some point, even after those people are gone.”
Tan’s familiar with what’s possible; his studio, Gradient Effects, regularly works on aging and de-aging shots. Most recently, it de-aged the actor John Goodman on The Righteous Gemstones—the HBO black comedy about a dysfunctional family running a megachurch—for an episode told through flashbacks about Goodman’s character. Tan and his fellow artists use an artificial-intelligence-assisted tool developed in-house called Shapeshifter, which can map an actor’s facial movements. “You can ask it to focus on a certain section [of a performer’s face] and then track stretching or deforming geometry, and it will create a 3-D version of that,” Tan said as he demonstrated the tool inside Gradient’s offices in August. “From there, we become able to manipulate it any way we want.”
This isn’t the typical method for de-aging an actor’s appearance. According to Westenhofer, the method used in Marvel films—in which makeup and lighting are used to help a performer look as youthful as possible, while markers are placed on their face and body to capture their movements—is generally accepted as the industry standard. In such films, during postproduction, visual-effects artists erase the markers and apply patches of “skin” onto the photographed performance, calibrating them frame by frame. (Think of it as using an expensive, hyperrealistic Snapchat filter that can be adjusted and tweaked.)
But depending on the project, this method isn’t always applicable. Because Lee shot Gemini Man at a high frame rate, which would amplify any inconsistencies in its crisp, blur-free clarity, the visual-effects teams couldn’t depend on makeup and, more important, had to repeat every shot until Junior escaped the “uncanny valley.” On The Irishman, the visual-effects studio Industrial Light & Magic, the house behind filmmaking wizardry in franchises such as Star Wars, had to eliminate tracking markers entirely because the film’s stars preferred acting without accessories such as dots and skin-tight bodysuits covered in ping-pong balls. Instead, they were touched up with infrared, reflective makeup, which helped a special camera rig scan their faces. “For the actors to be on set acting with other actors is the best way to get the best performance,” the film’s visual-effects supervisor, Pablo Helman, told me when we spoke in November. “The actors are checking each other continuously, checking each other’s performance … That makes it look more believable and natural.”
None of these practices is perfect—yet. Building the right tools lengthens the postproduction schedule and bloats a film’s budget; The Irishman, at a reported cost of $173 million to $200 million, is among Netflix’s most expensive ventures. The visual-effects artists also play a constant guessing game with their end results. Everyone, it seemed to Lee, had differing opinions on how young Smith should look in Gemini Man; they knew how he used to appear in the ’90s, but they had conflicting reads on how closely Junior resembled their memories of Smith and the footage they consulted. It’s a lot easier to make a CGI creature—say, a dinosaur or an alien—look menacing enough to viewers than it is to calibrate facial expressions that match everyone’s impressions of an actual person. “You can have five heads of departments watching [footage] and they’ll all find different things [to change],” Lee said. “Some will say, ‘This is great,’ and I’ll say, ‘No, no, no.’ … It’s mind-boggling.”
In the end, though, de-aging relies on the performance. “If you take a picture of somebody, it’s just a picture,” Helman pointed out. “It’s about how that person arrives from a frown to a smile that makes him who that person is. Since you are working with very iconic actors, you want to make sure that the behavioral likeness is there, or else it takes you out of the movie.”
During my visit to the Gradient Effects offices, Tan pulled up two clips of 2018’s Halloween in which Jamie Lee Curtis’s character, Laurie Strode, faces her attacker. The first clip was from the film itself, showing present-day Curtis. The second, however, had been run through Shapeshifter, and the result was the same clip but with the younger Laurie—wrinkles smoothed, face altered. The effect looked seamless.
Seamless to the untrained eye, anyway. To me, Shapeshifter’s work on a few frames from Halloween looked similar to any deepfake—the AI-fueled videos in which a person’s face can be stitched onto someone else’s—I’d seen floating around the internet. Yet there are crucial differences between those deepfakes and creative visual effects: artistry and intention. Though a machine can be taught to make the same alterations that human technicians do, the visual-effects community directly shapes the end result. “The work that’s done with traditional de-aging is a very tedious thing, where you use existing imagery and you modify it using idealistic imagery. It takes a lot of artistic effort,” Williams explained. When it comes to deepfakes, however, “you basically are training [a computer],” he said. “You don’t have total control over the situation … A good analogy in visual effects is animation versus simulation.”
More important, AI technology still can’t come close to eliminating the line between fiction and reality with people’s faces. The uncanny valley, Westenhofer noted, remains a colossal challenge, and exists only because humans know human faces best. “Pulling off something that is completely real to the eye is orders of magnitude harder than what the deepfake software is able to do,” Westenhofer told me. “I look at the deepfake stuff, and it’s cute, but it looks totally fake to me … All of us, even people who know nothing about visual effects, have evolved over 2 million years to become experts, particularly in the human face, you know?”
All the same, as digital manipulation becomes easier to accomplish—free apps such as FaceApp can de-age anyone, albeit crudely—it becomes harder to tell what’s been altered. Technological improvements help visual-effects artists immerse audiences further into a story, but the mainstream tools threaten to turn the public into an easy mark for hoaxes. “I think we’re toying with things that we don’t have the maturity as a society to understand properly,” Williams said, adding that faked videos prove especially dangerous online. “I don’t think we understand the depth of what we’re dealing with when we’re dealing with the internet.”
A more pressing issue for Hollywood, though, involves the question of ownership for performers and their de-aged, or even resurrected, selves. Junior, the younger version of Smith built for Gemini Man, “physically sits on a series of hard drives in Weta right now as a bunch of numbers and files and different programs,” Westenhofer explained. Technically, Junior could appear on-screen again, depending on who gets to clear his use. But it remains an open question whether it matters more to control how Junior is used or to own the ones and zeroes that make up Junior himself. “We do have to discuss actors’ rights and how they deal with their data … I know Will has the rights for [Junior’s] use, but the actual data itself?” Westenhofer said. “I don’t know where that lies.”
For now, Junior represents the next step in de-aging’s continuing evolution. “I think we’re all pushing for the digital human to be completely seamless,” Helman said, but as with any technique in the field, he added, its use has to be for an artistic reason. And though late actors have been revived on-screen—Rogue One re-creating Peter Cushing*, for instance—Helman urged caution in using the technique, especially if it’s irrelevant to the project’s creative intent. “Something that happens all the time in visual effects is, [you have to ask the question:] Because you can do it, should you do it?” he said. In The Irishman’s case, the story was about ruminating over one’s lifetime, so de-aging made sense. In Gemini Man’s, having Smith fight someone who didn’t so closely resemble him would have undercut the disturbing nature of the premise.
And besides, Helman continued, if you have a cast of heavyweights, you should aim to preserve their talent. Deepfake technology could be used to stitch a younger De Niro’s face onto anyone else’s, but “no computer will come in and act like Robert De Niro,” he said. Williams agreed: “We could make a digital version of every living actor and have a library that you’d call upon, but you’re not going to get their performance.”
If anything, familiarizing themselves so closely with the subjects they’re de-aging made these visual-effects artists appreciate the actors’ work even more. “I can tell you, [De Niro] has an incredible command of moving his eyebrows in completely opposite directions, left and right at the same time,” Helman said. “That was something he wasn’t aware of! I talked to Bob and said, ‘When you do this and that, do you actually work at it? Do you look at yourself in the mirror?’”
De Niro had a nonchalant, un-actorly answer. “He said, ‘No, no, no,’” Helman recalled. “‘I just do it.’” Try replicating that on a computer.
*This article originally misstated that Carrie Fisher’s likeness had been re-created on-screen in Rogue One. It was Peter Cushing’s.