The tech world is brimming with optimism for our augmented-reality future. But what will happen when flawed, prejudiced people get their hands on these tools?
Racism is ugly to confront, and, like most people, I've got plenty of personal stories. My grandmother, bless her heart, was a wonderful woman, but like many Jewish people of her generation, she was incredibly racist, afraid of black people she didn't know. This fear made her anxious whenever she got the urge to go to a favorite restaurant. She loved the food, but, as she would derisively say, so did the schvartzes (a Yiddish slur for black people).
What if she hadn't had to see the black people at all? This possibility is what worries me about our augmented-reality future, which is (mostly) anticipated with optimism. If grandma had lived to see ubiquitous augmented reality, I suspect she'd have put it to dehumanizing use, leaving for the restaurant with her goggles on (a less obtrusive artifact than the Coke-bottle glasses she actually wore), programming them to make all dark-skinned people look like variations of Larry David and Rhea Perlman. As Brian Wassom -- who regularly writes on augmented reality -- notes, if apps can "recognize a particular shade of melanin, and replace it with another," racists could one day "live in their own version of...utopia."
Grandma might even have been able to get the desired results without making any effort. Perhaps the algorithms running her software would automatically personalize the viewing experience -- say, by keeping an ongoing record of whom she looked away from, along with other biological signals that register discomfort, such as an accelerated heart rate. Biofeedback could safely cocoon us in an amped-up version of the filter bubble.
Disturbing as this scenario is, it barely scratches the surface of what could come to pass. Augmented-reality users could do much more than ignore minorities -- they could track them. If minorities are dangerous, they'd reason, you want to know where they are at all times. Otherwise, you're vulnerable. Science-fiction author Tim Maughan has envisioned horrendous possibilities, expressed to me in private correspondence: augmented-reality warnings, like "big floating arrows" that identify people to be avoided from miles away, or a navigation app that steers users away from racially undesirable neighborhoods and establishments.
Of course, racist appropriations of technology long precede digital culture. In The Whale and the Reactor, Langdon Winner contends that in the mid-20th century, Robert Moses embedded his racist intentions into the very materiality of Long Island bridges, designing the overpasses to be high enough for cars to pass under, but too low for buses to clear. This "strategic architecture of control" enabled "automobile-owning whites of 'upper' and 'comfortable middle' classes" to use the parkway system to get to Jones Beach, while keeping away "poor people and blacks, who normally used public transit."
What's the best way forward? Banning objectionable reality filters is a futile endeavor, and strengthening "our society's ability to tolerate diverse viewpoints" is easier said than done. Instead, conscientious engineers should take up the cause, fight fire with fire, and set their sights on designing anti-racist apps. Gary Marcus, author of a recent New Yorker essay on instilling ethics into driverless cars, offers a clever suggestion (also via private correspondence): "What about augmented reality apps that superimpose information about strangers' hobbies and family background, in order to increase empathy? Decades of research show that people are kinder to those that they view as human beings, rather than anonymous strangers. With the right apps, augmented reality could help." Whether or not this particular program proves effective, one thing is certain: A society committed to social justice needs to advocate for creative ethical solutions, not tolerate technological idealism.