When a set of online teasers for a new camera called the Lytro appeared earlier this year, you could have been forgiven for seeing the invention as just another gimmick. The camera’s attention-grabbing feature is a kind of after-the-fact autofocus: with a click, any blurry portion in a picture can be snapped into sharpness—another step in the march of idiot-proof photography.
In fact, such image correction is merely a side effect of what is genuinely different about the technology. The Lytro, scheduled to reach buyers early next year, creates a wholly new kind of visual object, one that both exemplifies and exploits the way images are consumed in the digital era.
The underlying technique is called “light-field photography.” A traditional camera, of course, captures light reflected off its subject through a lens and onto a flat surface. Proper focus is important to ensure that the image you get is the precise slice of visual reality you want. But “computational photography,” pioneered by Marc Levoy of Stanford University and others, takes a different approach, essentially using hundreds of cameras to capture all the visual information in a scene and processing the results into a many-layered digital object. One of Levoy’s former students, Ren Ng, added the twist that resulted in the Lytro: instead of using multiple cameras, he integrated hundreds of microlenses into a single device.
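For the curious, the after-the-fact refocusing the article describes can be sketched in a few lines of code. The idea, in its simplest textbook form, is “shift and add”: each microlens (or camera) records the scene from a slightly different position, and refocusing amounts to shifting each of those sub-views in proportion to its offset and averaging them. The sketch below is a toy illustration of that principle only; the array layout, the `refocus` function, and the parameter `alpha` are all hypothetical stand-ins, not the Lytro’s actual pipeline.

```python
import numpy as np

def refocus(lf, alpha):
    """Toy shift-and-add refocusing over a 4D light field.

    lf[u, v, y, x] holds one small sub-aperture image per lens
    position (u, v). Each view is shifted by alpha times its offset
    from the lens-array center, then all views are averaged.
    (Hypothetical sketch; integer shifts via np.roll for simplicity.)
    """
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2  # center of the lens array
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# A synthetic 5x5 grid of 8x8 sub-views. With alpha = 0 no view is
# shifted, so averaging a constant scene returns a constant image;
# varying alpha moves the synthetic focal plane.
lf = np.ones((5, 5, 8, 8))
img = refocus(lf, alpha=0.0)
```

Choosing a different `alpha` after the shot has been taken is, in miniature, the “click to sharpen” trick the teasers showed off: the focus decision is deferred from the lens to the software.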