How DARPA's Augmented Reality Software Works

Why is the military succeeding where Google Glass failed?

Six years ago, the Defense Advanced Research Projects Agency (DARPA) decided it had a new dream: a system that would overlay digital tactical information directly on top of the physical world.

So, they created a program called Urban Leader Tactical Response, Awareness and Visualization (ULTRA-Vis) to develop a novel and sophisticated augmented reality system for use by soldiers.

Through half a decade and with the help of several military contractors, they succeeded. "To enable this capability, the program developed and integrated a light-weight, low-power holographic see-through display with a vision-enabled position and orientation tracking system," DARPA says.

"Using the ULTRA-Vis system, a Soldier can visualize the location of other forces, vehicles, hazards and aircraft in the local environment even when these are not visible to the Soldier," the agency's description continues. "In addition, the system can be used to communicate to the Soldier a variety of tactically significant (local) information including imagery, navigation routes, and alerts."

Last week, I spoke with the core of the team for the lead contractor on the program, Applied Research Associates. They don't build the display—that was BAE Systems—but they do build the brains inside, the engine for doing the geolocation and orientation. 

They think that their software, which they call ARC4, could end up in consumer products, and quickly. As they imagine it—and the DARPA prototype confirms—ARC4-powered systems would go beyond what Google Glass and other AR systems are currently able to accomplish. 

In the following Q&A, we take a deep dive into how their technology works, what problems they solved, and how they see augmented reality continuing to develop.

There were four Applied Research Associates staffers on the line: Alberico Menozzi, a senior engineer; Matt Bennett, another senior engineer; Jennifer Carter, a senior scientist; and Dave Roberts, a senior scientist and the group leader in military operations and sensing systems.

Last year, I was obsessed with augmented reality. I was just very excited about it. But then when I looked at what, say, Google Glass could do, I realized that it couldn't do much that was interesting in that realm.

CARTER: And they still can't.

I think I thought things were further along, because there was this time when people began to imagine what Google Glass might be able to do, and those visions were more advanced than what it actually does.

CARTER: You're right about the commercial space. People have shown things using really great graphic artists that they can't actually do. What we're talking about with ARC4 is true augmented reality. You see icons overlaid on your real world view that are georegistered. And they are in your look direction in your primary field of view. So, basically, if you turn your head, you won't see the icon anymore. You can share information. You can tag stuff that's georegistered.

Google Glass was trying to do augmented reality. They were trying to do what we're doing. But I think they've failed in that they have this thing outside your primary field of view, and it's really information display, not augmented reality.
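To make Carter's look-direction point concrete, here's a toy version of that test in Python. It's illustrative only, not ARC4 code, and the 40-degree field of view is an assumed number rather than a spec:

```python
def icon_visible(target_bearing_deg, head_azimuth_deg, fov_deg=40.0):
    """A georegistered icon is drawn only while the target's bearing
    falls inside the display's horizontal field of view, so turning
    your head away makes the icon drop out of view."""
    # Wrap the angular offset into [-180, 180) before comparing
    offset = (target_bearing_deg - head_azimuth_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov_deg / 2.0
```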

So, how'd you develop this system—what was your relationship with DARPA?

ROBERTS: We got started on all this six years ago. It started with the DARPA ULTRA-Vis program. We carried that through three different phases. It's now complete. We've been the prime contractor developing that technology. There were companies that came on and it became competitive. Over those different phases, we ended up doing well enough to carry on and the other companies dropped off. We think we're getting to that point where what we think of as augmented reality is going to become something that people see in the real world.

What were the big technology challenges that you had to overcome? What were the things that you couldn't do six years ago?

ROBERTS: The two big fundamental technology challenges from the beginning have been number one, a display that can show information that's—I'm gonna get technical—focused at infinity with a large enough field of view and a high enough brightness that it's usable in outdoor conditions, so military folks in a squad can use it and see information overlaid on the real world.

The other big one was, I've got this display and I can put stuff on it that's bright enough to see, but how in the world can I make sure that this information is embedded in my real world? That's the pose estimation, or head tracking. It's the ability to know where I, as the user, am located and where I'm looking when I'm outdoors, so I can put information on top of the real world view so it's georegistered.

Basically, the system receives latitude, longitude, and elevation, three pieces of information associated with some object. And we get that over a network we're integrated with. But at the end of the day, the system has to take that information and render an icon out there in the real world that sticks exactly on what it is supposed to be sticking on. And that's the fundamental challenge we've been working on for these six years and can now do out in the field with real stuff.
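As a rough sketch of what that rendering step involves, the system converts the target's latitude, longitude, and elevation into the user's local east-north-up frame and projects it through the head pose onto the display. This is a toy version with a spherical-earth approximation, an assumed pinhole camera, and no roll, not the ARC4 engine:

```python
import numpy as np

EARTH_RADIUS = 6_371_000.0  # meters; spherical-earth approximation

def geodetic_to_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Approximate east-north-up offset (meters) of a target from the
    user; fine at the short ranges a dismounted squad cares about."""
    east = np.radians(lon - ref_lon) * EARTH_RADIUS * np.cos(np.radians(ref_lat))
    north = np.radians(lat - ref_lat) * EARTH_RADIUS
    return np.array([east, north, alt - ref_alt])

def project_icon(enu, yaw, pitch, f=800.0, cx=640.0, cy=360.0):
    """Project an ENU offset onto the display given head yaw (azimuth
    east of north, radians) and pitch (up positive). Roll is ignored
    to keep the sketch short. Returns pixel coordinates, or None if
    the target is behind the user."""
    forward = np.array([np.sin(yaw) * np.cos(pitch),
                        np.cos(yaw) * np.cos(pitch),
                        np.sin(pitch)])
    right = np.array([np.cos(yaw), -np.sin(yaw), 0.0])
    up = np.cross(right, forward)
    z = enu @ forward                    # range along the look direction
    if z <= 0.0:
        return None                      # behind the user: draw nothing
    return (cx + f * (enu @ right) / z,  # pixels right of center
            cy - f * (enu @ up) / z)     # pixels above center (y grows down)

# e.g. a target roughly 200 m north of a user facing due north
enu = geodetic_to_enu(35.0018, -106.9997, 1615.0, 35.0, -107.0, 1610.0)
print(project_icon(enu, yaw=0.0, pitch=0.0))
```

The hard part, as Roberts says, isn't this projection math; it's getting the pose (yaw, pitch, position) accurate enough that the icon "sticks" to the real object.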

When you looked at that as a technical challenge, break that down for me a little. What were the components of the challenge?

ROBERTS: There are sensors required to track a person's head, in terms of position and orientation. Right now, inertial sensors (gyroscopes, which are angular-rate sensors, and accelerometers) are used to understand the motion of the head. In addition, GPS is an available input to help understand position.

And then a magnetometer is typically used to understand azimuth, or heading: where someone is looking. Those four pieces come together, and people can fuse that data to try to figure out the position and orientation of the head. That's typically what people for the most part have been doing up until now.
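In a toy one-dimensional form, that fusion looks something like the complementary filter below. Fielded systems estimate full 3-D orientation with Kalman-style filters, so treat this purely as a sketch of the idea:

```python
def fuse_azimuth(prev_azimuth, gyro_rate, mag_azimuth, dt, alpha=0.98):
    """Toy complementary filter for heading (degrees): integrate the
    gyro's angular rate (smooth but drifts over time) and nudge the
    estimate toward the magnetometer azimuth (noisy but drift-free).
    alpha sets how much the gyro is trusted over the magnetometer."""
    predicted = prev_azimuth + gyro_rate * dt
    # Wrap the correction so a 359-to-1 degree crossing doesn't jump
    innovation = (mag_azimuth - predicted + 180.0) % 360.0 - 180.0
    return (predicted + (1.0 - alpha) * innovation) % 360.0
```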

There are problems with just using those sensors. One of the big problems is the use of a magnetometer. It senses the earth's magnetic field; it's a compass, basically. And it's not necessarily terribly accurate.

MENOZZI: The sensor may be accurate, but the field it's trying to measure may not be just that of the earth. The sensor is useful insofar as measuring the earth's magnetic field gives you a measurement of azimuth, but if other things superimpose their magnetic fields on top of the earth's, then you end up measuring those, and the reading becomes useless for figuring out your azimuth. It's not so much noise or inaccuracy in the sensor itself, but disturbances of the magnetic field itself.

What are a few of the big disturbances you see?

MENOZZI: It doesn't take much because the Earth's magnetic field is relatively weak, so it's disturbed by anything that's ferromagnetic. Steel, iron, or anything that generates a magnetic field, like electric motors. If you were to walk by a car, that's enough to give you a disturbance that would throw your azimuth estimate off a little bit if you just based it on the magnetometer.
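A common first defense against such disturbances is simply to gate the magnetometer: compare the measured field magnitude against the expected local earth field (from a model like the World Magnetic Model) and ignore samples that deviate too far. The numbers below are illustrative, not values from ARA's system:

```python
import numpy as np

EXPECTED_FIELD_UT = 50.0  # nominal local earth field, microtesla (illustrative)
TOLERANCE_UT = 5.0        # allowed deviation before a sample is rejected

def magnetometer_trustworthy(mag_sample_ut):
    """Reject magnetometer samples whose magnitude doesn't match the
    earth-field model; walking past a car distorts both the direction
    and, usually, the strength of the measured field."""
    return abs(np.linalg.norm(mag_sample_ut) - EXPECTED_FIELD_UT) < TOLERANCE_UT
```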

So you need to introduce some other signal that can serve as a correction for the raw data coming out of the magnetometer.
