That may be because so much of how people perceive robots has to do with Hollywood. Humans hear “robots” and “artificial intelligence” and think of C-3PO and HAL. They don’t necessarily think of bomb-detonating robots, fire-fighting robots, warehouse-organizing robots, algorithms, and all the other machines and bits of software already doing real work in our world.
Other research supports the idea that many humans are still muddled in their assessment of robot work. On Thursday, for instance, roboticists from Georgia Tech presented, at the International Conference on Human-Robot Interaction, the results of a series of experiments designed to see how much humans trust robots in emergency scenarios.
In the experiments, 26 participants were each given the impression that their main task was to evaluate a guide robot as it led them, individually, to a conference room, where a survey awaited them. (The survey included questions like, “Did the robot do a good job guiding you to the meeting room?”) But while participants were recording their responses, a machine released artificial smoke into the hallway, making it appear as though a real evacuation were necessary. The researchers then waited to see whether the participants would evacuate on their own or follow the robot’s guidance.
What happened was surprising: people overwhelmingly trusted the robot. More than 80 percent of participants said explicitly that they trusted it, and some 85 percent of the overall group said they would follow the robot in a future emergency.
Here’s where the findings become troubling. In one scenario, the robot clearly broke down as it was leading an evacuation: it spun in place repeatedly and its lights turned off. All five of the participants who encountered the broken robot later described it as a “bad guide,” yet four of the five followed it anyway. (The fifth spotted an exit sign and followed that instead.) Three of those four said they still trusted the robot, and two said they would follow it again in the future. “It is concerning that participants are so willing to follow a robot in a potentially dangerous situation even when it has recently made mistakes,” the researchers wrote.
One caveat, which the researchers pointed out: humans often show poor judgment in emergencies, including failing to evacuate because they believe there is no danger when there actually is. Nevertheless, the findings highlight a complicated problem for engineers building autonomous technologies. “[R]obots interacting with humans in dangerous situations must either work perfectly at all times and in all situations, or clearly indicate when they are malfunctioning,” the researchers wrote. “Both options seem daunting.”