Much like fables, family movies usually come with a lesson: appreciate your family (Home Alone), follow your dreams (Ratatouille), or empathize with your daughter/mother (Freaky Friday). But for the reigning box office champ, Real Steel, the take-home message seems to have been tailor-made for our modern times: Robots work best with humans, not against them.
It turns out, the U.S. Navy agrees. Navy Center for Applied Research in Artificial Intelligence researchers Laura Hiatt, Anthony Harrison, Ed Lawson, Eric Martinson and Greg Trafton are developing robotic systems that can work naturally with humans. Though not nearly as advanced as their Hollywood counterparts, the scientists' robots are capable of object detection, speaker recognition, and "theory of mind," or the ability to reason about what others believe. "Systems like this have the potential to be very powerful," says Hiatt, "as humans and robots have different strengths, and teaming them together combines those strengths."
Below, see the Navy's robots at work in an eerie but entertaining clip that involves the classic robots-are-taking-over-the-world plot. Then, in the interview with Hiatt that follows, learn more about the tech embedded in George and Octavia, the real state of artificial intelligence, and the researchers' plans for an in-the-field firefighting robot (movie option to be determined).
The Atlantic: What is the state of robotics today?
Laura Hiatt: The robots portrayed in movies are gross exaggerations of the realities of robotics. They make a great story; however, the underlying science just does not yet exist. That being said, artificial intelligence in general is more pervasive in today's society than many people realize. Some rice cookers use neuro-fuzzy logic as part of their software; many cars have various aspects of intelligence, such as speech recognition and navigation capabilities. Even some websites have "intelligence" in the sense that they can use your previous history to predict what products may interest you in the future. The technologies are simpler than a full robotic system, but they are there.
On a related note, people often misuse phrases like "fully autonomous" when describing robotic systems. We have a joke around here: If we were to build a fully autonomous robot, it'd be off on the beach somewhere sipping chilled motor oil. None of us are fully autonomous, and in large part we do what we are told by someone else to do or work under someone else's supervision. The robots that we are working on are, in a sense, the same. We are developing them to have the core capabilities of interacting with the world and with humans, but not to operate completely on their own with no human supervision.
What was the objective of the "Robotic Secrets Revealed" series?
The "Robotic Secrets Revealed" series was started to communicate our advancements in the fields of robotics and human-robot interaction, and demonstrate our robots' state-of-the-art capabilities. It highlights many of our different research streams and brings them together in entertaining stories. For example, the first episode highlights our work in gesture recognition, eye-gaze following and embodied cognition. The second episode demonstrates our work in theory of mind, speaker recognition, and object recognition.
One of the Navy's objectives for this area is to develop robots that are able to work side-by-side with humans. The benefits are many. Robotic teammates can scout ahead in situations where humans cannot easily go, such as an environment with poor air quality, and can relay information about the environment back to their human teammates. Achieving this means not only giving a robot the capability to fulfill the team's duties, but also ensuring that it can interact and collaborate with humans in a natural way. Otherwise, human teammates will have difficulty communicating with the robot or, even worse, will not trust it. Natural human interaction, however, is an extremely complex process that includes gestures, facial expressions, eye-gaze directions, tone of voice, word choice: the list goes on. Our work provides robots with these key components of interaction, with the underlying belief that if robots "think" and act like humans, humans will be able to interact with them more naturally.
Who is your intended audience for this?
The artificial intelligence community. A prominent group called the Association for the Advancement of Artificial Intelligence has a video competition every year where researchers from around the world submit videos that document the state-of-the-art in artificial intelligence. We submitted our video to this competition, and won the "Best Educational Video" award. Aside from that, however, we also wanted to reach a more general audience to show how much fun robotics can be. To some extent, that's why there are two parts to the video: the first part can be enjoyed by a casual viewer. The pop-ups and reveal later on cater to the more technical artificial intelligence community.
Could you talk about the characters in the film?
In the video there are two robots, Octavia and George. George is an older robot. He has the capability of natural language understanding as well as fiducial recognition. He can move around, use sonar to keep from bumping into things, and can demonstrate facial expressions via a face displayed on his monitor. He has a microphone to listen to speech, and vision is done using an old Sony pan-tilt camera.
Octavia, on the other hand, is a much more advanced MDS or mobile, dexterous, social robot. In addition to natural language understanding and fiducial recognition, she can recognize objects; recognize people from their voice, face, and clothes; and localize speakers. She has advanced higher-level reasoning skills, including some theory of mind capabilities. She has an expressive face, and arms and hands that can make gestures and manipulate objects. She also can move around using her Segway base and a laser range finder to keep from bumping into things.
The contrast between George and Octavia demonstrates, in part, the benefits of our work. For example, when Tony comes in the second time and says that they'll wait for me to come in so they can start the motor testing, George simply says, "OK," which is not exactly the response of a helpful teammate! Octavia, on the other hand, notices the error and uses theory of mind to correct Tony regarding my whereabouts.
Some aspects of the robots' behaviors are hard-coded to make a more entertaining video. All of the facial expressions in the video were canned. They were not autonomously deciding to make those faces or display emotion. In addition, much of the more theatrical dialogue, such as Octavia begging to not be turned off, was also put in solely for dramatic effect. We do have interest in developing emotive capabilities for the robots, but we are not there yet.
What's next for your team's research?
We have two projects just starting that we are particularly excited about. One of them is developing a firefighting robot, meant to collaborate with human firefighters on Navy ships. Fires on Navy ships are extremely dangerous, and a firefighting robot will help to limit the damage caused by the fire, potentially saving lives. In a second project, we are developing robots that operate continuously for long periods of time. Typically, robots are designed to perform specific functions and so are only operated for short time frames. In contrast, we want to build robots that run over extended periods, which means robots that can learn how to perform their jobs better as well as learn about new situations and tasks.
What's next for this video series?
You'll have to stay tuned to find out!
For more videos from the Navy Center for Applied Research in Artificial Intelligence, visit their website.