All of these photographic methods still make the tacit assumption that the image originally produced is complete and atomic, light having been bent through a lens to expose a chemical film or photosensitive digital sensor all at once, forever.
It's this aesthetic assumption that the light field camera, embodied by the Lytro, challenges. The Lytro, which debuted roughly a year ago, is a camera that hopes to rethink the legacy of Cartier-Bresson's decisive moment. It takes photographs in a different way, a manner almost incomprehensible to anyone who experienced photography in the 20th century -- which is to say, nearly everyone alive.
Lytro cameras create what the company calls "living pictures." It's a bit of a misnomer, especially since the term "living picture" (or tableau vivant) already refers to a theatrical scene in which costumed actors pose without movement or speech. A more accurate layman's term would be "living negative," even if that phrase is guilty of anachronism. An image taken with a Lytro camera is not really an image, but a machine capable of producing many possible renditions of a similar image, any one configuration of which can be chosen in the unique digital darkroom that is Lytro's desktop software.
Ng suggests thinking about it this way: a Lytro photograph is not a traditional still image, but an array of software simulations of many possible virtual cameras, coupled with an abundance of light data describing a particular scene. Imagine if you could freeze time at the moment you snap the shutter, creating not just one exposure, but many variations of that exposure, each with different points of focus and depths of field. The Lytro does something analogous, but with novel optics and computation instead of magic.
When a Lytro image is downloaded from the device for development, any one of these virtual cameras can be selected, each of which sees the captured scene in a different way. The process by which a virtual camera is configured and selected is based on ray-tracing techniques -- the same methods used to render a three-dimensional scene in computer graphics. But instead of rendering pixels of a simulated scene from the interactions of virtual objects, Lytro's darkroom reconstructs a real scene by tracing particular light rays to a particular focal plane.
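The darkroom's refocusing step can be sketched in a few lines of code. This is not Lytro's actual pipeline -- just a minimal "shift-and-add" refocus over a hypothetical 4D light field, in which each (u, v) pair indexes a different position on the main lens (one of the "virtual cameras"), and each (s, t) pair indexes a pixel:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add synthetic refocus (a simplified sketch).

    light_field: 4D array of shape (U, V, S, T), where (u, v) indexes
        a sub-aperture view (a position on the main lens) and (s, t)
        indexes pixels within that view.
    alpha: relative depth of the chosen focal plane; 1.0 keeps the
        plane the physical lens was focused on.
    """
    U, V, S, T = light_field.shape
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is shifted in proportion to its
            # offset from the lens center; summing the shifted views
            # brings one depth into focus and blurs the others.
            du = (u - U // 2) * (1 - 1 / alpha)
            dv = (v - V // 2) * (1 - 1 / alpha)
            shifted = np.roll(light_field[u, v],
                              (round(du), round(dv)), axis=(0, 1))
            image += shifted
    return image / (U * V)
```

Choosing a different `alpha` after the fact is what makes each "possible photograph" selectable in software: the rays have already been recorded, so only the summation changes.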
These days, a new gadget is hardly a novelty. Now that the Lytro camera has been out in the wild for nearly a year, we can stop thinking about it as a gadget and start considering the aesthetics that the gadget affords. We can finally stop answering, "What is the Lytro?" and move on to "What does the Lytro mean for photography?" Just as understanding Cartier-Bresson's Leica helps us appreciate the concept of street photography, so we need to understand more about how the Lytro works to grasp its unique visual aesthetics.
* * *
In a traditional camera, the film or sensor is exposed to light focused by a lens on a single focus plane. In other words, the light that enters the camera through the lens strikes the exposed surface having been bent in just one way -- the manner that corresponds with the lens's current focus and aperture.
Imagine you are operating a single-lens reflex (SLR) camera. As photographer, you can zoom, focus, and stop down the lens at whim before depressing the shutter. Each of these combinations of optical circumstances represents a "possible photograph" of a particular subject at a particular time, with its own unique properties: a point of focus, a depth of field, an exposure, and so forth. But once the shutter is pressed, all those possible photographs collapse into one single photograph, just as a life decision collapses all the possible alternate timelines that radiated from the moment just beforehand.
The Lytro camera implements a different idea: What if a single shutter exposure could capture more than just one of the possible photographs of a scene at a single moment in time? Mechanical examples of light field photography (or plenoptic photography) date back to the early 20th century, and researchers have been implementing workable (if commercially inviable) versions of it since the 1980s. A "light field" describes the amount of light traveling in every direction through every point in a scene. A light field camera is therefore "plenoptic" (roughly, "full of sight"): it can record all the possible paths light takes from the lens to the sensor, rather than just the single path recorded by an ordinary camera.
To accomplish this feat, light field cameras have two lenses. One is the traditional lens assembly that focuses light entering the camera's dark chamber -- the object you would normally call a "camera lens." The second is really an array of small microlenses, which are positioned at the rear of the dark chamber in front of the sensor. This array is something like an insect's eye, but flat instead of rounded, and it's the key to light field photography. Each of the microlenses focuses on a different part of the dark chamber itself -- taking a tiny picture of one part of the light field. The resulting digital negative is not a single photograph, but an array of tiny photographs, each representing a unique view of the scene, from a slightly different perspective and with a slightly different point of focus.
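That "array of tiny photographs" maps onto a four-dimensional structure in a straightforward way. As a sketch -- assuming an idealized sensor whose square microlenses align exactly with the pixel grid, which no real Lytro sensor does -- pulling the same pixel from beneath every microlens reassembles one complete view of the scene, as seen from one position on the main lens:

```python
import numpy as np

def subaperture_views(raw, n):
    """Slice a raw plenoptic sensor image into sub-aperture views.

    raw: 2D sensor array whose pixels form n-by-n tiles, one tile per
        microlens (an idealized geometry assumed for illustration).
    Returns a 4D array of shape (n, n, H // n, W // n): views[u, v]
    collects pixel (u, v) from under every microlens -- i.e. the whole
    scene as seen through one spot on the main lens.
    """
    H, W = raw.shape
    views = np.empty((n, n, H // n, W // n))
    for u in range(n):
        for v in range(n):
            # Stride across the sensor, sampling the same position
            # within each microlens tile.
            views[u, v] = raw[u::n, v::n]
    return views
```

Each of the resulting views trades spatial resolution for directional information -- which is why the same shift-and-add arithmetic can later pick any focal plane from this one exposure.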