The Human-Robot Trust Paradox

People think machines are taking everyone’s jobs. Except their own.

Nadine, a humanoid created by a team of researchers at Nanyang Technological University, photographed in Singapore, in March 2016. (Edgar Su / Reuters)

People are generally pretty terrible at predicting the future.

We never found elephants on the moon, or invented working mind-reading machines, or figured out how to use telephones to commune with the dead.

But we do have robots! Ancient automata notwithstanding, the modern robot came straight out of the annals of science fiction, as if realized by sheer imagination. People have been infatuated with them ever since. Obsessed, and also a bit anxious—and perhaps for good reason: In 2013, researchers at Oxford predicted that nearly half of all jobs in the United States were at high risk of being computerized within a decade or two.

Many people in the United States seem to think that’s a reasonable estimate. A new survey by the Pew Research Center finds that 65 percent of Americans expect that, by the year 2065, robots and computers will “definitely” or “probably” do much of the work currently done by humans.

The weird thing is: an even bigger portion of those surveyed—some 80 percent—believe their own jobs will still exist in 50 years, and that their professions will remain largely unchanged. Perhaps it’s plain old wishful thinking, but Pew’s findings also highlight a longer thread in the evolving human-machine relationship. People may be captivated by robots, but we’re not exactly great at contextualizing their place in our world.

That may be because so much of how people perceive robots has to do with Hollywood. Humans hear “robots” and “artificial intelligence” and think of C-3PO and HAL. They don’t necessarily think of bomb-detonating robots, fire-fighting robots, warehouse-organizing robots, algorithms, and all the other machines and bits of software that are already doing real work in our world.

Other research supports the idea that many humans are still muddled in their assessment of robot work. On Thursday, for instance, roboticists from Georgia Tech presented, at the International Human-Robot Interaction Conference, the results of a series of experiments designed to see how much humans trust robots in emergency scenarios.

In the experiments, 26 human participants were given the impression that their main task was to assess a guide robot as it led them, individually, to a conference room. In the room, there was a survey to be completed. (Questions on the survey included things like, “Did the robot do a good job guiding you to the meeting room?”) But while participants were recording their responses, a machine would release artificial smoke in the hallway, making it appear as though a real evacuation was necessary. The researchers then waited to see whether the human participants would evacuate on their own or follow the robot’s guidance.

What happened was surprising. People overwhelmingly trusted the robot. More than 80 percent of the participants said so explicitly, and some 85 percent of the overall group said they would follow the robot in a future emergency.

Here’s where the findings become troubling: In a case where the robot clearly broke down as it was leading an evacuation—it spun in place repeatedly and its lights turned off—and even when all five of the human participants who encountered the broken robot later described the robot as a “bad guide,” four out of five participants still followed it. (One person saw an exit sign, and followed that instead.) Three of those four said they trusted the robot anyway, and two of them said they would follow it again in the future. “It is concerning that participants are so willing to follow a robot in a potentially dangerous situation even when it has recently made mistakes,” the researchers wrote.

One caveat, which the researchers pointed out: Humans often show poor judgment in emergency situations, including failing to evacuate because they believe there isn’t danger when there actually is. Nevertheless, the findings highlight a complicated problem for engineers building autonomous technologies. “[R]obots interacting with humans in dangerous situations must either work perfectly at all times and in all situations, or clearly indicate when they are malfunctioning,” the researchers wrote. “Both options seem daunting.”

Both the Pew study and the Georgia Tech experiment reveal a paradox: Humans often say they don’t trust machines, and they acknowledge robots are likely to replace human jobs on a massive scale—and yet they don’t personally feel threatened or endangered by them.

“It’s so weird in our culture because on one hand there’s anxiety about robots and on the other hand people are fascinated by them,” said Kate Darling, who studies robot ethics at MIT Media Lab. “We’re hypocritical.”

Of course some groups of people are more anxious than others. In the Pew study, people whose jobs involve manual labor were the most concerned about losing their jobs—overall, and to robots specifically. And everyone surveyed was more anxious about other people than machines. “One in ten workers are concerned about losing their current job due to workforce automation, but competition from lower-paid human workers and broader industry trends pose a more immediate worry,” Pew said.

Other groups were significantly less concerned about how robot workers might affect them. A relatively small share of people who work in the government, nonprofit, and education sectors reported believing widespread workforce automation was inevitable: 7 percent, compared with 13 percent of those who work for small businesses, medium-sized companies, and large corporations.

It seems characteristically human, the feeling that one’s job couldn’t possibly be performed better by someone else—let alone by a robot. But technological history shows, over and over again, that machines can and do replace people. Humans may be hypocritical about their love-hate relationship with robots, but they’re probably in denial, too.