How to Hear Sign Language

Microsoft uses Kinect technology to turn gestures into text, and text into speech. 

The most significant barrier between the deaf and the hearing is, generally, language. Signed and spoken languages convey meaning through entirely different channels; that divide alone makes the everyday translation of sign-to-speech, and speech-to-sign, particularly challenging. 

Scientists at Microsoft and the Chinese Academy of Sciences, however, think they have found a way to bridge the gulf. And it involves the same technology used in video games. The Kinect Sign Language Translator project, released this week in prototype form, aims to enable the hearing to understand sign language, and vice versa. 

It works, essentially, like this: The deaf person signs, and the system, using Microsoft's Kinect to track the gestures, renders both a written and a spoken translation of them. It also works in the other direction, processing a speaking person's words and converting them into readable text. 
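For readers curious how those two directions fit together, here is a minimal sketch of the pipeline in Python. Every function name and data structure below is a hypothetical placeholder for illustration, assuming a gesture-tracking sensor, a recognition model, and standard speech components; none of it is Microsoft's actual Kinect or translator API.

```python
# Illustrative sketch of the two-way translation pipeline described above.
# All names here are hypothetical placeholders, not Microsoft's actual API.

from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class SkeletonFrame:
    """One frame of tracked joint positions from a depth sensor."""
    joints: Dict[str, Tuple[float, float, float]]  # e.g. {"right_hand": (x, y, z)}


def recognize_sign(frames: List[SkeletonFrame]) -> str:
    """Map a sequence of tracked gestures to a word (stubbed for illustration)."""
    # A real system would feed the frames to a trained gesture-recognition model.
    return "hello"


def synthesize_speech(text: str) -> None:
    """Speak the translated text aloud (stubbed for illustration)."""
    print(f"[speaker] {text}")


def speech_to_text(audio_chunk: bytes) -> str:
    """Transcribe a speaking person's words (stubbed for illustration)."""
    return "nice to meet you"


def sign_to_speech(frames: List[SkeletonFrame]) -> str:
    """Deaf-to-hearing direction: gestures -> written word -> spoken word."""
    word = recognize_sign(frames)
    synthesize_speech(word)
    return word  # also displayed on screen as text


def speech_to_screen(audio_chunk: bytes) -> str:
    """Hearing-to-deaf direction: speech -> readable text on screen."""
    return speech_to_text(audio_chunk)
```

The point of the sketch is simply that translation runs both ways through the same device: one path turns tracked gestures into text and synthesized speech, the other turns speech into text the deaf user can read.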

Which means that the interactions between the deaf and the hearing could soon become much less friction-filled than they've been in the past. A deaf doctor could communicate more fluidly, and more meaningfully, with a hearing patient. A hearing store manager could communicate with a deaf patron. The pair, in their interaction, wouldn't need extra knowledge of each other's languages; they would just need a tool to do the translating for them. 

In the video above, Dandan Yin, a 22-year-old computer science student who was born deaf, demonstrates the translation system. You can see her gesturing to a Kinect device that is, in turn, connected to the translator prototype. Words appear on the screen as the system translates her signs.

There is more work to be done with all this: The system isn't instant, meaning that one source of social friction—awkwardly waiting for a translation—is still part of its process. And there are more words for it to learn. But the idea is there, and the framework is there. Any system based on machine learning will get, theoretically and almost inevitably, better with age. You can see how, with incremental improvements, this is a technology that can bridge the gulf between the deaf and the hearing. As Stewart Tansley, director of Natural User Interface for Microsoft Research Connections, puts it in the video: "What, overwhelmingly, you feel when you see it working is a certain magic."

Megan Garber is a staff writer at The Atlantic. She was formerly an assistant editor at the Nieman Journalism Lab, where she wrote about innovations in the media.
