
One of the biggest tech stories of last week was Google’s demonstration of its newest AI service, Google Duplex, which held a conversation with a human while sounding just like a human itself—in a demo phone call, the software even interjected a casual “mhmm” into the conversation. The call revealed a highly sophisticated level of language processing. Google was showing off how far its AI technology had come, but it received backlash from critics who worried about the implications of machines that could pretend to be humans. In this issue, executive editor Matt Thompson reflects on what the veneer of humanity in machines says about our own humanity. Then I ask a couple of AI ethicists to review the ethics of developing human-like machines in the first place.

—Karen Yuan


Living in the Uncanny Valley

By Matt Thompson

If you’re one of those humans who still sometimes use the phone to talk, chances are you’ve had this experience. You receive—or make—a call, hear a “Hello,” and extend your own greeting in return. But the voice on the other end of the line keeps going, and you realize, probably with some dismay, that you’ve just spoken to a machine.

Soon, it could take much longer for that realization to sink in. Google is pioneering an alarmingly lifelike voice interface, “Google Duplex,” that can run minor telephone errands such as making restaurant reservations and hair appointments. Google CEO Sundar Pichai’s demo of this software spawned thousands of horrified tweets, many imagining the havoc this technology could wreak in the hands of an unethical telemarketer. By the end of the week, Google was promising that Duplex would reveal its robotic nature in practice.

We are all now surrounded by pretend humans; this is not a new revelation. Users of social media platforms have already spent a couple of years getting used to the idea that some of their followers are probably robots. I tend to cope with this likelihood mostly by ignoring it. TwitterAudit.com tells me 97 percent of my followers are human. What does it matter if a wee percentage of the likes on my posts are mechanical?

But the vague layer of mistrust that now applies to interactions on Twitter is creeping into other places. I recently went on a vacation with some old friends I hadn’t seen in a while. We had started a group email thread to coordinate our arrivals, and I wanted to quickly reply to a message as I juggled my phone and luggage on the way out of the subway. The Gmail app on my Android phone auto-suggested a response that was close enough to what I wanted to say, so I took advantage of the handy shortcut. My friend found me out, emailing back, “‘Sounds good to me’ is clearly one of Google's ‘suggested replies.’” I had failed the Turing Test.

The idea of the “uncanny valley” began as a label for a visual phenomenon. An example: People enjoy robots that look somewhat human, but if the robots start to look too lifelike, they begin to seem grotesque. The subtle gaps between a robot’s appearance and the appearance of an actual living being give the robot a monstrous quality. With the rapid spread of bots that mimic human behavior, that uncanny-valley effect increasingly applies to everyday interactions. Is this email coming from a real person? Was that text message authentic or auto-suggested? Is that a human voice on the other end of the phone?

The writer Philip K. Dick anticipated this state of affairs, creating fiction about a world where the line between humans and androids had grown blurry. The plot of Blade Runner—the movie famously adapted from Dick’s novel Do Androids Dream of Electric Sheep?—is that the most human-seeming androids are so hated by society that one can earn a respectable living by sniffing them out and destroying them.

Among Dick’s most potent insights was that as robots grew more lifelike, the humans who made and used them would become more robotic. One of the most tumultuous periods of Dick’s life culminated in a speech the author delivered in Vancouver in February 1972, titled “The Android and the Human.” “I tried to define the real person, because there are people among us who are biologically human, but who are androids in the metaphoric sense,” Dick said in an interview, describing the aims of the speech. “I wanted to draw the line so I could define the positive primary goal of stipulating what was human. Computers are becoming more and more like sensitive cogitative creatures, but at the same time, human beings are becoming dehumanized. As I wrote the speech, I sensed in it the need for people who were human to reinforce other people’s humanness. And because of this, it would be necessary to rebel against an inhuman or android society.”

The backlash against Google Duplex centered on the fact that Google had created a machine that disguised itself as human, so criticism of the software has naturally focused on the ethics of its design. “Silicon Valley is ethically lost, rudderless, and has not learned a thing,” the author and scholar Zeynep Tufekci wrote in a widely circulated tweet. At TechCrunch, Natasha Lomas wrote about the persistent ethical gaps in the artificial intelligence research community, gaps made blatant by Pichai’s demo. My colleague Alexis Madrigal highlighted the likelihood that Google Duplex could further dehumanize service workers, whose jobs have grown increasingly rote, mechanical, and at risk of automation. (“Finally, technological capitalism has generated the correct match for the robotic service worker,” Alexis wrote. “A robot service worker.”)

But the danger that worried Dick most was not merely that the machine would deceive us, but that it would seduce us. Whether or not Google adjusts its software to clearly identify itself as a bot when it interacts with humans, tools like Google Duplex will increasingly be available to us, and we will each face the question of how much of ourselves we’re willing to mechanize, how much social friction we’re willing to smooth away. How human, after all, are we willing to be?


The Ethics of Artificial Intelligence

By Karen Yuan

I spoke to Kay Firth-Butterfield, head of AI and Machine Learning at the World Economic Forum, and Virginia Dignum, professor of technology at Delft University of Technology. My questions are in bold.

The biggest ethical criticism of Google Duplex is its deception—whomever it calls has no idea that it’s a machine. Why might this be unethical?

Firth-Butterfield: Let’s think about how this tech could be deployed in political campaigns. This machine phones you and tells you a candidate is absolutely fantastic. We know that a person’s recommendation about a politician is so much more persuasive than a robot’s. This could be seen as a tipping point where more is at play than just this device: Do we want a society in which machines pretend to be human, or are there any lines to be drawn?

It’s important for us to understand that this is a machine that may or may not be recording us and the data we give it during a conversation. There are already eavesdropping laws in some states, but there’s a wider issue than the legal piece. It’s about whether our privacy is being invaded. At the moment, Duplex is making innocuous calls, but this is the start of something that makes us sit up and say, we need to think about how to regulate this technology.

So do you recommend ethical regulations on tech like Duplex?

Firth-Butterfield: I don’t think regulation is the way we should be going, because regulation is far too slow. We want to come up with guidelines and principles, and more agile forms of governance. At WEF, we’re about to start running a project with a government in Europe that will create best-practice guidelines for the procurement of any AI by that government.

The benefit of government oversight is that, if a government expects a standard, tech companies will be more likely to meet that standard. What we’ve seen with Duplex is an interaction between a company and an individual. But actually there are three people in the discussion—the third person is the government, because it has a contract with the individual to keep them safe.

Dignum: I think the biggest issue is education for technologists. As I read about Duplex, it seemed like none of the engineers involved in developing the tech were aware of the ethical implications. There are online courses that they can follow. Or these companies can conduct in-house ethics training required for all engineers. The ethical issue wasn’t something that was considered in the design of the technology. No one said, let’s stop for five minutes and think.

When they do stop for five minutes, what questions should tech companies ask?

Dignum: Accountability, responsibility, and transparency. Google would have to set Duplex against these three principles in its design. During the hairdresser appointment, what if the machine decided to make an appointment without the user’s request? Then it would need to be accountable—if a version of it just started buying flight tickets, for example. What if it caused damage to something? It would need to [take] responsibility. We should also know that it’s a machine when it’s speaking to us. That’s transparency at the very minimum.

Note: Google’s Legal and Product Policy teams review the design and development of new technologies. A Google spokesperson told me, “Transparency in the technology is important. We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.” In the demo call, Duplex didn’t disclose its nature.


Today’s Wrap Up

  • Today’s question: During your interactions with AI, what ethical questions have come up? (One parent, writing for The Atlantic, worried that Alexa was teaching his son to be rude.)

  • What’s coming: On Wednesday, Atlantic designers will share how they created the cover of the June issue.

  • Your feedback: Click the button below to tell us what you think.
