
In a move that seems straight out of The Office (or Office Space, if you prefer), an employee at the New York City Health Department has been suspended for 20 days for answering the phone in a robot voice.

James Fanelli, at DNAinfo, reports that Ronald Dillon’s customers and bosses were none too pleased. One woman hung up on him, thinking he was actually a robot. Dillon claims he’s being bullied in the workplace; his bosses claim he’s being a poor sport. A judge ruled that his suspension is legal.

But what’s so wrong with talking to a robot on the phone?

It turns out that researchers in all kinds of fields are watching these interactions closely. From tech companies trying to design a better voice for their navigation systems, to science fiction filmmakers trying to do a better job of humanizing their robot characters, there is a wide body of research on how humans react to robotic voices.

One study, done by Min Kyung Lee at Carnegie Mellon University, used the interaction log for the school’s hallway receptionist robot (yes, they actually have one of those) to examine the different ways people talked to a robotic presence. About half the people who used the Roboceptionist treated it like a human, and the other half treated it like an informational machine—much like a kiosk at the airport.

“One of the interesting things was that whether the users were greeting the robot at the beginning or not, was kind of the indicator of the kind of interaction they would carry out,” Lee told me. In other words, the best predictor for which kind of interaction people would have was whether they said hello to the robot or not. Those who did were polite, asked personal questions, and made small talk. Those who didn’t simply started out with direct commands like “time” or the name of the person they were looking for.

Humans move throughout the world following certain scripts, Lee says in her paper. There’s a specific script for dealing with humans, and a different one for dealing with machines. “The script for interacting with a human receptionist is cordial whereas the script for interacting with an information kiosk is utilitarian.”

One might expect the same to be true for Dillon’s calls. If the caller thinks they’re talking to a robot, they might get to the point faster and cut the small talk. This might indeed have been part of Dillon’s plan—according to Fanelli, Dillon told the judge in his case that he wasn’t a “people person.” What better way (aside from quitting) to avoid banter than by pretending to be a robot?

Other researchers have shown that the pitch of the robot’s voice matters too. In one study, scientists compared two robotic receptionists, one with a high-pitched, exuberant voice and the other with a lower-pitched, calm voice. Subjects vastly preferred the receptionist with the higher-pitched voice. Another study found that users were more likely to prefer a robot that they thought sounded like their own gender.

Lee says that the woman who hung up on Dillon, thinking he was a robot, wasn’t necessarily wrong to do so. Today’s robots simply aren’t as good as humans are at listening and responding. “If I call about my credit card, I don’t want to talk to a robotic system,” Lee says. “I don’t know if it’s really a matter of people getting used to talking to robots, or whether it’s because the robot that’s available is not as good as a person in the real world.”

But there are robots out there that are shockingly convincing. Last year, a team at Time spent hours trying to get the telemarketing robot going by the name of Samantha West to admit that she was, in fact, a robot. And researchers are hard at work designing robots that can reproduce vocal cues to convince humans to do things.

None of this matters to Dillon and his boss, of course, but as more and more systems become automated—from your phone’s handy helper, to your car’s GPS, to the cable company’s complaint system—customers find themselves interacting more and more with automated systems, narrated by “robots.” Often these systems aren’t truly robots; they’re simply pre-recorded phrases or sounds assembled into sentences in response to user inputs. But people still, in general, prefer to talk to people. And pretending to be a robot at work is still, in general, a bad idea.
