Ng suggests thinking about it this way: a Lytro photograph is not a traditional still image, but an array of software simulations of many possible virtual cameras, coupled with an abundance of light data describing a particular scene. Imagine if you could freeze time at the moment you snap the shutter, creating not just one exposure, but many variations of that exposure, each with different points of focus and depths of field. The Lytro does something analogous, but with novel optics and computation instead of magic.
When a Lytro image is downloaded from the device for development, any one of these virtual cameras can be selected, each of which sees the captured scene in a different way. The process of configuring and selecting a virtual camera is based on ray-tracing techniques -- the same methods used to render three-dimensional scenes in computer graphics. But instead of rendering a simulated scene by tracing light through virtual objects to produce pixels, Lytro's digital darkroom traces the recorded light rays of a real scene onto a chosen focal plane.
These days, a new gadget is hardly a novelty. Now that the Lytro camera has been out in the wild for nearly a year, we can stop thinking about it as a gadget and start considering the aesthetics that the gadget affords. We can finally stop answering, "What is the Lytro?" and move on to "What does the Lytro mean for photography?" Just as understanding Cartier-Bresson's Leica helps us appreciate the concept of street photography, so we need to understand more about how the Lytro works to grasp its unique visual aesthetics.
* * *
In a traditional camera, the film or sensor is exposed to light focused by a lens on a single focus plane. In other words, the light that enters the camera through the lens strikes the exposed surface having been bent in just one way -- the manner that corresponds with the lens's current focus and aperture.
Imagine you are operating a single-lens reflex (SLR) camera. As photographer, you can zoom, focus, and stop down the lens at whim before depressing the shutter. Each of these combinations of optical circumstances represents a "possible photograph" of a particular subject at a particular time, with its own unique properties: a point of focus, a depth of field, an exposure, and so forth. But once the shutter is pressed, all those possible photographs collapse into one single photograph, just as a life decision collapses all the possible alternate timelines that radiated from the moment just beforehand.
The Lytro camera implements a different idea: What if a single shutter exposure could capture more than just one of the possible photographs of a scene at a single moment in time? A "light field" describes the amount of light traveling in every direction through every point in a scene. A camera that records one is therefore "plenoptic" (roughly, "full of sight"): it sees all the possible paths light can take from the lens to the sensor, rather than just the single path recorded by an ordinary camera. Mechanical examples of light field photography date back to the early 20th century, and researchers have been building workable (if commercially unviable) versions since the 1980s.
To accomplish this feat, light field cameras have two lenses. One is the traditional lens assembly that focuses light entering the camera's dark chamber -- the object you would normally call a "camera lens." The second is really an array of small microlenses, which are positioned at the rear of the dark chamber in front of the sensor. This array is something like an insect's eye, but flat instead of rounded, and it's the key to light field photography. Each of the microlenses focuses on a different part of the dark chamber itself -- taking a tiny picture of one part of the light field. The resulting digital negative is not a single photograph, but an array of tiny photographs, each representing a unique view of the scene, from a slightly different perspective and with a slightly different point of focus.
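The structure of that array of tiny photographs can be sketched in a few lines of code. In this toy model (the layout and all numbers here are hypothetical, not Lytro's actual raw format), each microlens covers a small square of sensor pixels, and picking the same pixel position from under every microlens assembles one "sub-aperture" view of the scene:

```python
import numpy as np

N = 5                      # hypothetical pixels under each microlens
LENSES = 40                # hypothetical microlenses per side
# Stand-in raw sensor data: a grid of LENSES x LENSES microlens patches.
raw = np.random.rand(LENSES * N, LENSES * N)

# Reshape the flat sensor into a 4D light field lf[s, u, t, v]:
# s, t index the microlens (spatial position on the array), while
# u, v index the pixel under that microlens (viewing direction).
lf = raw.reshape(LENSES, N, LENSES, N)

# One sub-aperture image: the same (u, v) sample under every microlens,
# i.e. the whole scene as seen from one direction through the main lens.
u, v = 2, 2
view = lf[:, u, :, v]
print(view.shape)          # (40, 40)
```

Varying `u` and `v` selects a different tiny perspective, which is exactly the sense in which the raw negative is an array of views rather than a single photograph.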
Insects with compound eyes have a very different perceptual apparatus than binocular creatures like humans, and they perceive the world very differently as a result. Looking at a raw light field camera "negative" is a bit like looking at a representation of a fly's vision: a mess of tiny, seemingly indistinct spherical renditions of a common scene. Assembling this raw data into a result deserving of the name "photograph" requires a unique process. Think of it as the plenoptic equivalent of developing film.
Lytro's software is based on Ng's doctoral research in viable methods for plenoptic image capture. The device combines each of the microlens perspectives on the scene into a single result. Because each microlens only records visual data on a small segment of the digital sensor, a Lytro image is much lower in resolution than a typical digital photograph. But in exchange, the Lytro produces not one image, but a set of virtual cameras capable of rendering a whole field of possible images, which can be configured and combined via software.
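The resolution tradeoff can be made concrete with back-of-the-envelope arithmetic (the figures here are hypothetical, not Lytro's actual specifications): the sensor's pixels are split between spatial and angular samples, so the output image shrinks by the number of pixels under each microlens.

```python
sensor_side = 3280         # hypothetical sensor width in pixels
pixels_per_lens = 10       # hypothetical microlens diameter in pixels

# Spatial resolution of the developed image: one output pixel per microlens.
spatial_side = sensor_side // pixels_per_lens
# Angular resolution: how many distinct viewing directions each scene
# point is recorded from -- the raw material for the virtual cameras.
angular_samples = pixels_per_lens ** 2

print(spatial_side)        # 328 spatial samples per side
print(angular_samples)     # 100 directional samples per point
```

The tradeoff is direct: every pixel spent on direction is a pixel not spent on detail, which is why a plenoptic image is so much smaller than its sensor would suggest.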
So what, though? What can you do with a light field photograph that you can't do with an ordinary one?
In the computer science research community, plenoptic photography is often treated as a darkroom technique meant to assist the photographer in producing a final, traditional, static image. Such a use is certainly possible with Lytro's software, even if the resulting static image is mostly good for screen display or small prints, given its low resolution (1080x1080 pixels) compared to today's consumer digital cameras. But Lytro's intended use is surprising: instead of using the light field data to "develop" a final image, the company offers a Flash-based web viewer that lets viewers interact with the image, manipulating it live.
But how does one interact with a light field photo? For one thing, such images can be refocused in post-processing. Go ahead and try it with the image above, changing the focus to the different tealight candles on the table. In his dissertation, Ng discusses a common case in which refocusing would be desirable. When taking a portrait, it's common to use a large aperture to produce a shallow depth of field that isolates the subject. But given such a narrow margin of error, it's easy to misfocus due to subject or photographer movement. In this case, the decisive moment might have been captured -- a particular facial expression -- but focused in the wrong place: on the ear instead of the eye, for example. A light field photographer could correct this fault and produce the desired focal plane through a virtual camera in the Lytro digital darkroom.
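One way to see how a virtual camera can refocus after the fact is the shift-and-sum method from the plenoptic literature: each sub-aperture view sees the scene from a slightly different angle, so shifting the views in proportion to their angular offsets before averaging brings a chosen depth into focus. The following is a minimal toy sketch of that idea, not Lytro's production algorithm; the light field layout `lf[s, u, t, v]` and all numbers are assumptions for illustration.

```python
import numpy as np

def refocus(lf, shift):
    """Refocus a toy 4D light field lf[s, u, t, v] by shift-and-sum.

    s, t index microlens position; u, v index the pixel under each
    microlens (the viewing direction). `shift` selects the synthetic
    focal plane: 0 reproduces the captured plane.
    """
    S, N, T, _ = lf.shape
    out = np.zeros((S, T))
    for u in range(N):
        for v in range(N):
            view = lf[:, u, :, v]                  # one sub-aperture view
            du = int(round(shift * (u - N // 2)))  # shift proportional to
            dv = int(round(shift * (v - N // 2)))  # the angular offset
            out += np.roll(view, (du, dv), axis=(0, 1))
    return out / (N * N)                           # average over the aperture

lf = np.random.rand(40, 5, 40, 5)    # stand-in light field data
near = refocus(lf, shift=0.0)        # the captured focal plane
far = refocus(lf, shift=1.0)         # a different synthetic focal plane
print(near.shape)                    # (40, 40)
```

With `shift=0` the views align and the result is simply their average; other values of `shift` align scene points at other depths instead, which is the refocusing the Lytro darkroom performs, in spirit if not in detail.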
This sort of example has given the Lytro the reputation of a "focus-free" camera, but that's not really an accurate depiction. After all, Cartier-Bresson's Leica images were largely focus-free, since he zone-focused at smaller apertures to ensure that most of the scene would be in focus anyway, allowing him to concentrate on anticipating and capturing the decisive moment. Likewise, point-and-shoot cameras and many cell phone cameras have a deep depth of field thanks to their short, wide lenses and small sensors, rendering almost the entire scene in focus anyway. Focus-free is different from refocusable.
The result offers a kind of visual pun or reveal that forms the current Lytro aesthetic, for better or for worse.
Even given that caveat, Lytro light field images aren't fully refocusable -- that is to say, the possible virtual camera configurations in a living negative don't correspond to all the possible focus configurations of all possible traditional cameras. This is because the light field the Lytro captures is not the one out in the world, but the one inside the camera's dark chamber.