Human drivers don’t seem all that “human” when it comes to thoughtful decision-making. Federal fatal-crash data show that while deaths due to distracted or drowsy driving have declined, deaths tied to other reckless behaviors, including speeding, alcohol impairment, and failure to wear seatbelts, have continued to rise. Roughly 37,000 of last year’s fatal crashes were attributed to poor decision-making.
Humans aren’t necessarily better than robots at caregiving, either. The American Psychological Association in 2012 estimated that 4 million older Americans—or about 10 percent of the country’s elderly population—are victims of physical, psychological, or other forms of abuse and neglect by their caregivers, and that figure excludes undetected cases.
Nor do they inherently excel at interpersonal skills. Humans incessantly use “strategic emotions”—emotions that don’t necessarily reflect how they actually feel—to achieve social goals, protect themselves from perceived threats, take advantage of people, and adhere to work-environment rules. Strategic emotions can help relationships but, if they’re detectable, they can harm them, too.
As an example, Jonathan Gratch, the director of emotion and virtual human research at the University of Southern California’s Institute for Creative Technologies, pointed to customer-service representatives, who tend to follow a script when speaking with people. Because they rarely express genuine emotions, they aren’t, according to Gratch, “really being human.” In fact, these rules surrounding professional conduct make it easier to program machines to do that sort of work, especially when Siri and Alexa are already collecting data on how people talk, such as their intonations and speech patterns. “There’s this digital trace you can treat as data,” he said, referring to the scripts on which customer-service reps rely, “and machines learn to mimic what people do in those tasks.”
These statistics complicate the notion that robots are inherently inferior at such tasks. Yet training and education programs seem to focus more on technical skills, such as computer literacy, which could make robots an even graver threat to employment. Eighty-five percent of respondents in the Pew report favored limiting machines primarily to jobs that are dangerous or unhealthy for humans, namely those in construction as well as in fields such as agriculture and forestry. But people across sectors, not just those that rely on rote physical labor, will see their jobs affected by technology, given its ability to mimic human abilities.
Education and training can ease Americans’ worries. In an era of big data and LinkedIn, which has detailed information on which sectors of the economy are growing and shrinking and which skills employers are seeking, training programs can help some people “upskill,” or learn the additional skills critical to an evolving sector. Of course, training alone won’t be enough; how seriously policymakers take the threat of automation, and the extent to which employers are pushed to provide decent-paying jobs, will determine whether large-scale preparation actually happens.