Donald Trump, improbably, recovered from the Access Hollywood tape, in which he bragged about sexually assaulting women, but that tape aroused the public’s passions and conscience like nothing else in the 2016 presidential race. Video has likewise provided the proximate trigger for many other recent social conflagrations. It took extended surveillance footage of the NFL running back Ray Rice dragging his unconscious wife from a hotel elevator to elicit a meaningful response to domestic violence from the league, despite a long history of abuse by players. Then there was the 2016 killing of Philando Castile by a Minnesota police officer, streamed to Facebook by his girlfriend. All the reports in the world, no matter the overwhelming statistics and shattering anecdotes, had failed to provoke outrage over police brutality. But the terrifying broadcast of his dying moments in his Oldsmobile shook the public and led politicians, and even a few hard-line conservative commentators, to finally acknowledge the sort of abuse they had long neglected.
That all takes us to the nub of the problem. It’s natural to trust one’s own senses, to believe what one sees—a hardwired tendency that the coming age of manipulated video will exploit. Consider recent flash points in what the University of Michigan’s Aviv Ovadya calls the “infopocalypse”—and imagine just how much worse they would have been with manipulated video. Take Pizzagate, and then add concocted footage of John Podesta leering at a child, or worse. Falsehoods will suddenly acquire a whole new, explosive emotional intensity.
But the problem isn’t just the proliferation of falsehoods. Fabricated videos will create new and understandable suspicions about everything we watch. Politicians and publicists will exploit those doubts. When captured in a moment of wrongdoing, a culprit will simply declare the visual evidence a malicious concoction. The president, reportedly, has already pioneered this tactic: Even though he initially conceded the authenticity of the Access Hollywood video, he now privately casts doubt on whether the voice on the tape is his own.
In other words, manipulated video will ultimately destroy faith in our strongest remaining tether to the idea of common reality. As Ian Goodfellow, a scientist at Google, told MIT Technology Review, “It’s been a little bit of a fluke, historically, that we’re able to rely on videos as evidence that something really happened.”
The collapse of reality isn’t an unintended consequence of artificial intelligence. It’s long been an objective—or at least a dalliance—of some of technology’s most storied architects. In many ways, Silicon Valley’s narrative begins in the early 1960s with the International Foundation for Advanced Study, not far from the legendary engineering labs clumped around Stanford. The foundation specialized in experiments with LSD. Some of the techies working in the neighborhood couldn’t resist taking a mind-bending trip themselves, undoubtedly in the name of science. These developers wanted to create machines that could transform consciousness in much the same way that drugs did. Computers would also rip a hole in reality, leading humanity away from the quotidian, gray-flannel banality of Leave It to Beaver America and toward a far groovier, more holistic state of mind. Steve Jobs described LSD as “one of the two or three most important” experiences of his life.