In a recent report, the Pew Research Center found that Americans are more worried than they are enthusiastic about automation technologies when it comes to tasks that rely on qualities thought to be unique to humans, such as empathy. They’re concerned that, in lacking certain sensibilities, robots are fundamentally limited in their ability to replace humans at those jobs; they don’t, according to the report, trust “technological decision-making.”

This skepticism tends to steer people away from using those technologies. Just under 60 percent of respondents said they wouldn’t ride in a driverless car (in part because they’re worried about ceding control to machines) or use a robot caregiver (in part because there is no human touch or interaction). Seventy-six percent said they wouldn’t apply for a job that uses a computer program to select applicants, either.

But if being “human” means making thoughtful decisions and having strong interpersonal skills, as survey respondents indicated, how “human” are humans? It turns out that the inclination to exalt human qualities might be misguided—and that robots might actually be preferable in certain jobs that count on those qualities. If that’s indeed the case, education and training programs will have to take an honest look at how great humans actually are—lest fears of robots taking over become a self-fulfilling prophecy.

Human drivers don’t seem all that “human” when it comes to thoughtful decision-making. Federal fatal-crash data show that despite reductions in the number of deaths due to distracted or drowsy driving, deaths related to other reckless behaviors—including speeding, alcohol impairment, and not wearing seatbelts—have continued to increase. Roughly 37,000 traffic deaths last year were attributed to poor decision-making.

Humans aren’t necessarily better than robots at caregiving, either. The American Psychological Association in 2012 estimated that 4 million older Americans—or about 10 percent of the country’s elderly population—are victims of physical, psychological, or other forms of abuse and neglect by their caregivers, and that figure excludes undetected cases.

Nor do they inherently excel at interpersonal skills. Humans incessantly use “strategic emotions”—emotions that don’t necessarily reflect how they actually feel—to achieve social goals, protect themselves from perceived threats, take advantage of people, and adhere to work-environment rules. Strategic emotions can help relationships but, if they’re detectable, they can harm them, too.

As an example, Jonathan Gratch, the director of emotion and virtual human research at the University of Southern California’s Institute for Creative Technologies, pointed to customer-service representatives, who tend to follow a script when speaking with people. Because they rarely express genuine emotions, they aren’t, according to Gratch, “really being human.” In fact, these rules surrounding professional conduct make it easier to program machines to do that sort of work, especially when Siri and Alexa are already collecting data on how people talk, such as their intonations and speech patterns. “There’s this digital trace you can treat as data,” he said, referring to the scripts on which customer-service reps rely, “and machines learn to mimic what people do in those tasks.”

These statistics complicate the notion that robots are inherently inferior when it comes to such tasks. But training and education programs seem to be focusing more on technical skills, such as computer literacy, and that could make robots an even graver threat to employment. Eighty-five percent of respondents in the Pew report were in favor of limiting machines to performing primarily those jobs that are dangerous or unhealthy for humans—namely those in construction, as well as in realms such as agriculture and forestry. But people across sectors—not just those that rely on rote, physical labor—will see their jobs affected by technology, given its capacity to mimic human skills.

Education and training can ease Americans’ worries. In an era of big data, platforms such as LinkedIn hold detailed information on which sectors of the economy are growing and shrinking, and which skills employers are looking for; training programs can draw on that information to help some people “upskill,” or learn additional skills critical to an evolving sector. Of course, training won’t be enough; whether policymakers take the threat of automation seriously, and whether employers can be compelled to provide decent-paying jobs, will determine whether large-scale preparation actually happens.

Yet despite widespread interest in professional development and lifelong learning, Aaron Smith, an associate director at the Pew Research Center and co-author of the report, has found in his previous research that policymakers and workers express different motivations when they talk of the need for training. While there are some politicians who think about automation as a labor challenge, workers across the board are thinking more about globalization, Smith told me. People, he said, are not “necessarily thinking about that training and development as something that will help them fight off the machines.”