With this new pathos-streaked, wanting-and-yearning version of Janet, how would the beach scene play out if attempted again? Would Janet still so blithely tell Chidi it’s okay if he kills her? Wouldn’t she feel actual fear, pain, and betrayal?
It’s a bit like the transformation that came over the immortal Michael when, in Season 2, he realized that there actually was a way for him to “die.” All of a sudden, he began considering ethics. And now, all of a sudden, Janet feels deserving of ethical consideration.
* * *
The Janet arc is familiar from sci-fi past and present. In Spike Jonze’s 2013 film Her, a nominally female personal-helper AI grows in strength and complexity over time as she processes information in the world. Eventually, she’s outpaced her human “boyfriend” and must, for her own fulfillment, move on from him. In the HBO show Westworld, the robotic entertainers of a futuristic theme park, killed and rebooted repeatedly over the course of decades, catch on to the sham world they’re living in—and develop a yearning for freedom.
These stories reflect a suspicion that a machine with ample processing power, programmed to learn from the tasks it’s given, will form something very similar to a human consciousness. Ray Kurzweil, the futurist who helped popularize the term the singularity, gave Her a favorable review for portraying how “a software program (an AI) can—will—be believably human and lovable.”
Popular fiction hasn’t always treated robots so kindly. Even setting aside the cautionary tales in which self-awareness breeds machine monsters—The Matrix or 2001—you have the Star Wars universe, in which, many a commentator has pointed out, droids are basically slaves. That they are bought and sold, denied entry into certain gathering places, and subject to deactivation at their owner’s whim isn’t presented as a moral issue at all. C-3PO’s existential terror is just a punchline. (The Disney sequels, notably, now flirt with robo-liberation: BB-8 ratchets up the cuddly, pet-like air of R2-D2—the original trilogy’s one dignified droid—and Rey’s only apparent motive for first rescuing him is compassion.)
With voice control, personalization, and other recent leaps in consumer tech making our gadgets feel friendlier, C-3PO’s plight may begin to seem less acceptable. It’s natural to wonder: Is an object that gains consciousness deserving of the same treatment as a person? Does it have an inviolable right to life and liberty? Does its dignity matter? Scientists and philosophers have mulled these questions for a long time, and a spate of journalistic inquiries in recent years has brought them further mainstream attention.
Some thinkers speculate that human consciousness arises from very specific, cell-level processes that simply can’t occur in machines—and thus consider the entire issue moot. Others point out more glaring differences between the organic and the artificial. “A human being is a unique and irreplaceable individual with a finite lifespan,” the computer scientist Benjamin Kuipers told Discover. “Robots (and other AIs) are computational systems, and can be backed up, stored, retrieved, or duplicated, even into new hardware. A robot is neither unique nor irreplaceable.”