Almost from the moment it arrived as a concept, artificial intelligence has occupied a hefty portion of humans' technological anxieties. We worry about machines taking over our jobs, our emotions, even our lives. Even as we appreciate the ease that AI has brought to our lives -- the commercial recommendations that anticipate our desires, the language processing that understands our curiosities, the information indexing that satisfies them -- we have been conditioned to be suspicious of intelligence that doesn't come in the form most familiar to us: the folds of an organic brain.
But what happens 10 or 20 or 50 years down the road, when artificial intelligence has expanded its capabilities -- and, presumably, its role in our lives? What will that mean for humans, as a culture and as a species?
In the video above, PBS's Off Book series explores those questions. Humans have long turned to tools to expand their capabilities -- but what will happen when those tools are themselves intelligent, when those tools, perhaps, have consciousness and consciences of their own? "Once somebody develops a good AI program," NYU's Gary Marcus says, "it doesn't just replace one worker. It might replace millions of workers." And that, he continues, may bring another concern when it comes to our relationship with our notional robot overlords: "What happens if they decide that we're not useful anymore? I think we do need to think about how to build machines that are ethical. The smarter the machines get, the more important that is."