After Sergey Brin, co-founder of Google, wore a prototype to a charity event in San Francisco on April 5, 2012, there was a lot of buzz about Google Glass. Early in 2013, the buzz intensified as an “Explorer Edition” was made available to a few thousand testers and developers and a consumer version was promised for later the same year. But the din of anticipatory chatter, which fed for a while on firsthand reports and video that captured user experience, has noticeably diminished. The release date for the commercial version has been pushed back to sometime in 2014, but even that remains vague. Meanwhile, Google will make yet another Explorer Edition available to yet another cohort of selected testers and developers.
To an outside observer, it seems as if the opposition to Google Glass is strong. From the very beginning, people asked why anyone would want to sit across the table from a woman wearing Glass (not “pair of”) equipped with the Winky app, the one that allows her to start an invisible video recorder by blinking in a particular way. What about driving? A teenage boy could watch Louis C.K.’s latest stand-up rant hovering in the same field of vision as the road he is driving on. But should he?
As the tech-visionaries always say, however, it has ever been thus. The more radical the innovation, the more inevitable the prophecies of doom and the more severe the envisioned losses. And the radical aura of this particular innovation is undeniable, immediately apparent, akin in that way to the original potency of “moving pictures” and “tele-vision.”
With Glass, a developmental logic built into the very nature of representational technologies may be reaching an intrinsic limit. The contact lens version of Glass is already on the drawing boards, and chips implanted in our brains that conjure up "screens" in our heads may someday be possible. A lot of people find these prospects deeply disconcerting. We are entitled to wonder if—this time—opposition to technological innovation may prove to be more stubbornly grounded than it has been in the past. Is the resistance to Glass qualitatively different and more profound than practical concerns about safety and privacy?
* * *
This might be a good time to step back a bit and ask what kind of medium a screen is: what are its particular effects? When Marshall McLuhan popularized media theory in the 1960s, under the oft-cited (much misunderstood) slogan, “the medium is the message,” he was calling attention to the fact that kinds of media, quite apart from content, have significant psycho-social consequences. He argued that reading and writing, for example, condition the mind to move sequentially, to follow a train of thought to a conclusion, to notice causal order in the workings of the world, as scientists do, and to impose order as well, as technologists and administrators do. In effect, he claimed that print literacy was ultimately responsible for the rise of modernity because the written/printed word is a stable thing, abiding in space. The spoken words through which traditional societies communicate pass away as they are being born. Modern progress was made possible by stable representations of the way things are and were. In non-literate societies there was no record, so there was no plan. In such a society, people could argue over who should succeed a deceased king. But no one would ever ask: what do we need these kings for in the first place?
So what corresponds, under the regime of multi-media, to oral ephemerality and literate stability? A tricky question, because multi-media, in their plenitude, defy categorical conception. People didn’t stop talking when they began writing, and they didn’t stop writing when they began telephoning, and we are still talking and writing even as we Skype and tweet. Under those circumstances, what medium could possibly qualify as constitutive across the spectrum? The obvious answer is screens: screens as screens, regardless of what’s on them. This is the age of multiple screens of every conceivable size and shape, lodged in every nook and cranny, upon every feasible surface—and now Google Glass proposes to fuse the very world with a personal screen.
The first and simplest thing to notice, perhaps ultimately the most important, may be this: people tend to watch. We have all felt it. No matter what’s on them, we are drawn to screens. There is something about the framing, entities contained, movement within, stillness without. Screens compel attention the way certain dollhouses do, or Joseph Cornell’s boxes. Upon screens we view worlds from beyond, as gods do.
* * *
The screen is a meta-medium, which may be why it has so far eluded systematic treatment. It channels and filters other media. Speech and music, charts and maps, writing, pictures, film, video—the screen conveys them all. But the screen does have its own characteristics, its own meta-qualities. Above all, it displays. But, in displaying this, it hides that. Unknown treasures lurk perpetually just out of sight, clamoring sotto voce for your attention. What are you missing, at each moment, as the price of this display? Displaying, hiding—simultaneous effects constituting a quality for which there is as yet no word, but it is a quality that conditions the way we perceive and conceive of everything, the way we live now.
There are more implications. One kind of screen, the old kind, the movie screen, displays what others produce. But newer screens, personal screens, also display what we produce. All the world’s creations congregate there on equal terms, including our own creations. And they often fuse (mix, mash-up). One can collaborate with Picasso in MacPaint and Photoshop or jam with Rihanna on Virtual DJ or create parody versions of shows we love to loathe. Personal screens are especially fascinating, as fascinating as I am.
No wonder Hollywood feels threatened. It has reached the point where the original monarchs of the only screen that used to matter are running ads for movies in movie theaters. Not for particular movies, not previews; these ads come before the previews. They’re ads for movies in general, for the “big screen experience.” Watch for the one that ends with a TV blowing up and the co-opted tagline “Go Big or Go Home.”
The essence of the threat is easy to discern. It’s in the special intensity, the devotional glow you see on the face of a stranger in some random public place, leaning over her handheld device, utterly absorbed, scrolling through her options or matching twitter-wits on a trending topic, feeling the swell of attention rising around her as she rides an energy wave of commentary, across the country, around the world—it’s like the touch of a cosmic force, thanks to the smallest and most potent of all personal screens, the one on her smart phone. Sum it up this way: that screen is the one she can take pictures through as well as watch pictures on; hence, that special intensity. It testifies to the power of that dual aspect of display, a reciprocal intimacy no engagement with any other medium, let alone reality, can match.
Video games worry Hollywood too, especially since 2008, when earnings for games first exceeded earnings for films. In this case, the response was shameless imitation and coerced synergy. Most of the big action/fantasy movie franchises today are also video games, and video games become movies, or inspire them. The screen experience offered by Grand Theft Auto on that monster TV in the basement might seem at first glance to be very different from the experience the smart phone provides. But at the level of the medium, apart from the content, a surprising affinity is apparent. The intent expression on the face of a 14-year-old boy making his bloody way through battalions of lurching zombies in House of the Dead shows the same quality of engagement that we find in the young woman uploading photos to her Facebook page from her Samsung Galaxy as she waits for the train. That is because the dual aspect of display is at work in both cases.
Here is the essence of it in the case of the video game. A seasoned gamer has mastered the console. He isn’t conscious of his physical situation. He presses the buttons to turn and shoot and jump without thinking about them. He becomes the agent on the screen. There is no gap between his dirty little 14-year-old thumb and his avatar’s massive biceps as it wields that enormous Gatling gun against the zombie horde. He is the “first person shooter.”
As a first person shooter, you get to perform and you get to watch at the same time. The powers and pleasures of two kinds of centrality—spectator and star—have merged. An untapped possibility for synaptic closure has been realized, and a historically unprecedented form of human gratification attained. No wonder those games are addictive.
This special form of reciprocity, more intimate than any other, also shapes life on the smart phone. There you also engage with yourself, with your world, on this new plane of being where agent and observer are fused. But the smart phone ups the ante. It introduces just enough distance, just enough lag time, between you and your doings on the screen to allow for an endless cascade of tiny moments of arrival, of recognition. Each prompt, each response, intercedes between you and the representations of yourself and your world that you are both producing and contemplating. With touch screens, the intercession effect is exquisitely enhanced. You are so close, physically, to those charming icons (they quiver with delight at the prospect of your arrangement of them). You are so very close to all those morphing denizens of pad and tablet land—and yet you can’t actually touch them. You touch instead a screen that both welcomes and deflects your fingertips. The mimesis of the mirror—so literal, so static, so immediate, so predictable—is transcended by that degree of separation. Now you get to dance with yourself, with extensions of yourself, and be yourself too. Watch closely the next time you see someone doting over that precious device. It is as if a defunct genetic program for primate grooming behavior has been hijacked and all that fingertip care is being lavished now on the body of a mini-me—my most faithful companion, my abiding reflection, my self, my other.
* * *
If Google Glass should fail to catch on, if it ends up on the “meh” list in the Sunday Times Magazine, if most people decide they just don’t want this climactic iteration of the screen after all, there will be many reasons given. Those privacy and safety concerns will likely be paramount because they are publicly definable “issues,” so evident, so debatable. But if people also say “I just don’t like it, I don’t like the experience,” it will be because, in fashioning the ultimate personal screen, Google violated the very conditions that made screens so compelling in the first place: the containment of the frame, the placement of the screen on a device—an entity among others—a placement that allows us to look upon the screen from beyond. The mind’s coherence is grounded in the way our bodies are oriented—left and right, up and down, near and far, in and out—and especially in the way we can face or turn away from other things in a surrounding world that contains us all equally. The hovering fusional image Glass provides will disturb those primal orientations. If people choose to stay true to their old-fashioned tablets and smart phones, it will be because the body of the device, especially the portable device that proffers the screen as its face, turned out to be as essential to the magic as the screen itself.