Our senses take in a vast array of sights and sounds every day, sparking activity in our neural circuits in corresponding patterns. Now, scientists are developing ways to decode these patterns and recreate the original audiovisual stimuli -- not unlike the video playback of a VHS tape. This episode of Science Bytes, produced by Kikim Media for PBS and the Public Library of Science, explores some of the research in this field. Last year, scientists at U.C. Berkeley demonstrated that they could reconstruct the YouTube videos their research subjects were watching by analyzing fMRI scans of their brain activity. Jack Gallant, an associate professor of psychology at Berkeley's Vision Science lab, describes how they did it. Later in the segment, Bob Knight, a professor of neuroscience and psychology also at Berkeley, describes how his team reconstructed the audio that patients heard during brain surgery for epilepsy. Using electrodes that measure electrical activity directly on the cortex, the team decoded that activity and played back the audio the patient had heard -- with spooky accuracy.
The episode doesn't linger on the stimuli and the resulting videos Gallant's lab recreated, but the shot-by-shot comparison is amazing (below). It's not hard to imagine a cyborg future in which we can "make movies with our brains," to quote music video directors Daniel Scheinert and Daniel Kwan.
On their YouTube page, the researchers describe how they built a library of random YouTube videos and then mapped them to the responses in their subjects' brains:
The procedure is as follows:
1. Record brain activity while the subject watches several hours of movie trailers.
2. Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured. (For experts: The real advance of this study was the construction of a movie-to-brain activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)
3. Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.
4. Build a random library of ~18,000,000 seconds (5,000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet.) Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.
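The fit-predict-rank-average procedure above can be sketched in a few lines of NumPy. Everything here is a synthetic stand-in: random vectors play the role of the movies' motion-energy features and of the fMRI voxel responses, the array sizes are illustrative rather than the study's, and ridge regression is one common choice for fitting such encoding models, not necessarily the lab's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in data (real inputs would be motion-energy features
# of movie clips and measured fMRI voxel responses) ---
n_voxels = 200      # points at which brain activity is measured
n_features = 50     # visual features (shapes, edges, motion)
n_train = 1000      # training time points (the hours of movie trailers)
n_library = 5000    # stand-in for the ~18-million-second YouTube library

X_train = rng.normal(size=(n_train, n_features))
W_true = rng.normal(size=(n_features, n_voxels))          # hidden "ground truth"
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))

# Step 2: fit the "dictionaries" -- a regression from movie features to the
# response at each voxel (ridge-regularized least squares, all voxels at once)
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ Y_train)

# Step 3: observed brain activity for a new, held-out test clip
x_test = rng.normal(size=n_features)
y_observed = x_test @ W_true + 0.5 * rng.normal(size=n_voxels)

# Step 4: predict activity for every library clip, rank by similarity
library_features = rng.normal(size=(n_library, n_features))
predicted = library_features @ W                          # (n_library, n_voxels)

# correlation of each predicted activity pattern with the observed pattern
pz = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
oz = (y_observed - y_observed.mean()) / y_observed.std()
similarity = pz @ oz / n_voxels

top100 = np.argsort(similarity)[-100:]                    # best-matching clips
reconstruction = library_features[top100].mean(axis=0)    # average them together
```

In the actual study the final averaging happens in pixel space (the selected clips' frames are blended), which is why the published reconstructions look like blurry composites of many similar videos; here the average is taken over the stand-in feature vectors for brevity.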
For more videos from Kikim Media, visit http://www.kikim.com.