Why we're in such a rush to create a hands-free tech experience
One of the bigger stories to come out of the Consumer Electronics Show so far this week has been the eye-controlled computing technology unveiled by the eye-tracking firm Tobii. As PC World describes the technology:
It's actually pretty simple (well, sort of). It works by shooting infrared lights into your eyes to cause red-eye (sounds dangerous, I know, but they assure me it's perfectly safe). By doing this, Tobii is able to create a 3D model of your eyeball and determine where your eye is relative to space. It then tracks the glint off of your eyeball to determine where your foveal vision, or sharp central vision, is, and, consequently, where you're looking.
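The principle described above can be sketched in code. To be clear, this is not Tobii's actual method (which, per the description, builds a full 3D model of the eyeball); it is a toy illustration of the simpler underlying idea: the infrared glint yields a pupil-to-glint offset vector, and a calibration step maps those vectors to screen coordinates. All names, calibration points, and the affine-mapping approach here are illustrative assumptions.

```python
# Toy sketch of glint-based gaze estimation -- NOT Tobii's algorithm.
# Idea: during calibration the user looks at known screen points; we fit
# a least-squares affine map from pupil-to-glint vectors to screen
# coordinates, then use that map to estimate where a new glint is looking.
import numpy as np

def fit_gaze_map(glint_vectors, screen_points):
    """Fit an affine map: screen = [gx, gy, 1] @ coeffs (coeffs is 3x2)."""
    g = np.asarray(glint_vectors, dtype=float)
    s = np.asarray(screen_points, dtype=float)
    design = np.hstack([g, np.ones((len(g), 1))])  # append bias column
    coeffs, *_ = np.linalg.lstsq(design, s, rcond=None)
    return coeffs

def estimate_gaze(coeffs, glint_vector):
    """Map one glint-offset vector to an (x, y) screen coordinate."""
    gx, gy = glint_vector
    return np.array([gx, gy, 1.0]) @ coeffs

# Hypothetical calibration: user fixates the four corners of a 1920x1080
# screen, producing normalized glint offsets at the corners of [-1, 1]^2.
calib_glints = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]
calib_screen = [(0, 0), (1920, 0), (0, 1080), (1920, 1080)]
coeffs = fit_gaze_map(calib_glints, calib_screen)

# A centered glint offset should map to the center of the screen.
print(estimate_gaze(coeffs, (0.0, 0.0)))  # -> [960. 540.]
```

Real trackers are far more involved (head movement, corneal curvature, per-eye differences), which is why the 3D model matters; the affine fit above only holds if the head stays put.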
Tobii's gaze-based system -- as space-age-y and cool-sounding as it is -- won't be heralding a golden age of eye-controlled computers anytime soon. While the technology makes a lot of sense for, say, game play (the demonstration given to reporters at CES Unveiled involved a game called EyeAsteroids, in which players blow up space matter with targeted glances), it's still hard to surpass the ease and accuracy of touch-based interfaces. Eyes -- blinking, winking, distractible -- are great at communicating emotion. They're less great at communicating intention.
Still, though, it's interesting to think about what could replace our hands -- or what couldn't -- as the primary paradigm of human/machine communication. We tend to assume that, if we're going to go hands-free, it'll be through voice control: Siri, Google's Voice Actions, etc. But the Tobii approach, as early-stage as it is as a technology, suggests the intriguing possibility that sight, rather than sound, will take on the responsibility of conveying our desires to our devices. (The eyes: the windows to the soul.)
Near-telepathic communication with our computers is fascinating for lots of reasons, not least of which is the fact that, until now, our relationship with our machines has been mediated, for the most part, mechanically. The push. The scroll. The swipe. The tap. The finger-as-medium -- digital technology, quite literally -- has defined our sense of our devices to the extent that, when we talk about the "tactile" quality of personal tech -- a keyboard yielding to the brush of a finger, a touchscreen responding to the warmth of a hand -- we're generally talking about, actually, the intimate quality of personal tech. We're getting at the idea that all those academic studies of people and their gadgets have made clear: that the technologies we use every day aren't just tools, but extensions of our worlds and our identities. Me, myself, and iPhone.
More to the point, we want it that way. However strange (and, okay, kind of creepy) it might seem to have the phrases "personal computer" and "knowing glance" in the same sentence, the alternative -- personal devices that aren't as fully personal as they could be -- is worse. Because, as resistant as we can be, culturally, to new technologies (the telephone will erode our privacy! the moving picture will destroy sociability!), we tend to come around to preferring intimacy over distance in the devices that help us navigate the world.
Eye control -- and, in fact, voice control (and, in fact, gestural control like the Kinect) -- appeal in theory because they're the logical extension of that intimacy. They suggest what might happen when we bypass the middlemachine -- the keyboard, the mouse, the touchscreen -- and communicate with our computers through the parts of ourselves that, poetry and experience tell us, are the most honest and obvious manifestations of Who We Are. That is to say, the new touchless interfaces let us simply use our eyes, our voices, and the way we move our bodies. They suggest our desire to take the barrier that divides man and machine...and erode it. In the blink of an eye.