One of the biggest tech stories of last week was Google’s demonstration of its newest AI service, Google Duplex, which held a conversation with a human while sounding just like a human itself—in a demo phone call, the software even interjected a casual “mhmm” into the conversation. The call revealed a highly sophisticated level of language processing. Google was showing off how far its AI technology had come, but it received backlash from critics who worried about the implications of machines that could pretend to be humans. In this issue, executive editor Matt Thompson reflects on what the veneer of humanity in machines says about our own humanity. Then I ask a couple of AI ethicists to review the ethics of developing human-like machines in the first place.
