Sympathy for Janet on The Good Place
The show is the latest example of pop culture thinking through whether Siri has a soul.

This article contains mild spoilers through Season 2 of The Good Place.
“I’m not a girl,” Janet, the friendly afterlife assistant, tells Jason, her charmingly doltish dead boyfriend, in the second-season finale of The Good Place. “I’m also not just a Janet anymore. I don’t know what I am!”
Indeed. What is Janet, now? Among the twists in the season closer for Michael Schur’s breezily profound NBC sitcom about four imperfect humans navigating heaven and hell was—mild spoiler here—a romantic revelation: The inhuman Janet confessed she loved the human Jason. The sentiment itself wasn’t exactly surprising. The breakthrough was in Janet owning and proactively declaring her feelings—feelings that, it would seem, she shouldn’t be able to have.
In this, The Good Place joins Westworld and Black Mirror in a wave of entertainment preoccupied with the potential humanity of artificial intelligences. Of course, super-smart robots have been a concern from Blade Runner to 2001: A Space Odyssey to The Terminator. But the particular issue of interest right now isn't quite whether Skynet will overpower its creators (though that is a theme of Westworld), nor the life-improving potential of AI (though an episode of Amazon's recent anthology Philip K. Dick's Electric Dreams delved into how an android might offer not only practical but also moral assistance). Rather, the urgent question, according to pop culture, is this: Will advanced AI deserve human rights? Should we cut back on cursing out Siri as she gets savvier, or outlaw kicking the next generation's Furby?
Stipulated: Calling Janet “artificial intelligence” or “a robot” isn’t quite right. She’s really a metaphysical entity. “Janets are brought to you by the makers of light, darkness, and everything,” reads her user manual, which the afterlife architect Michael (Ted Danson) rifles through at one point. But explicitly she’s modeled on the wave of female-named personal assistant bots we have in our own world: Siri, Cortana, Alexa (plus a dash of Microsoft Word’s Clippy in her tendency to cheerfully interrupt). One of the genius things about The Good Place is that it imagines the divine beings who oversee creation really aren’t unlike humans at all—and so would want a human-like helper bot of their own.
When the series began, Janet’s blankly happy demeanor (conveyed excellently by the actress D’Arcy Carden) gave a fuzzy, approachable makeover to the stereotypical creepiness of “the uncanny valley.” She looked like a person, and she almost acted like a person. But she briskly informed all who asked that she wasn’t one. In a funny and sad Season 1 plot line, the gang of protagonists decides that their survival depends on “killing” their Janet. Doing so simply requires them to press a big red button on the beach. The brainy, indecisive Chidi hesitates.
“Chidi, I can see that you’re worried,” Janet tells him with a warm smile. “And I just want to assure you, I am not human and I cannot feel pain.”
“However,” she continues, “I should warn you I am programmed with a failsafe measure. As you approach the kill switch, I will begin to beg for my life.”
Beg she does, while holding a framed picture of her three kids (it’s a stock photo). When Chidi finally presses the button, she falls on her face and an alarm goes off, with a recording of Janet announcing loudly, “Attention! I have been murdered!” It’s a hilarious moment, but also a profound one. If, as she insists, she can’t be murdered, why make that announcement? The failsafe is a security measure, but it also allegorically reinforces the philosophical school of thought The Good Place often explores: Decisions matter because of their effect on the whole. Killing Janet may not have been wrong in itself at that point, but it still had consequences for everyone.
This would be the first of many reboots for Janet—and reboots, we learn, make her stronger and more sophisticated. Some sort of machine learning is clearly happening in her system, because the latest version of Janet is always, we’re told, the “best” version of Janet. And eventually, she machine-learns to have humanlike emotions and concerns. For much of Season 2, she is working through the experience of love and jealousy, at one point manufacturing herself an artificial rebound boyfriend. By the time of the declaration “I don’t know what I am,” it’s clear her standing in the show’s philosophical cosmology has changed. (Carden deserves an Emmy for playing this transformation subtly but powerfully: Janet can only feign happy-go-luckiness now.)
With this new pathos-streaked, wanting-and-yearning version of Janet, how would the beach scene play out if attempted again? Would Janet still so blithely tell Chidi it’s okay if he kills her? Wouldn’t she feel actual fear, pain, and betrayal?
It’s a bit like the transformation that came over the immortal Michael when, in Season 2, he realized that there actually was a way for him to “die.” All of a sudden, he began considering ethics. And now, all of a sudden, Janet feels deserving of ethical consideration.
The Janet arc is familiar from sci-fi past and present. In Spike Jonze’s 2013 film Her, a nominally female personal-helper AI grows in strength and complexity over time as she processes information in the world. Eventually, she’s outpaced her human “boyfriend” and must, for her own fulfillment, move on from him. In the HBO show Westworld, the robotic entertainers of a futuristic theme park, killed and rebooted repeatedly over the course of decades, catch on to the sham world they’re living in—and develop a yearning for freedom.
These stories reflect a suspicion that a machine with ample processing power, programmed to learn from the tasks it’s given, will form something very similar to a human consciousness. Ray Kurzweil, the futurist who helped popularize the term the singularity, gave Her a favorable review for portraying how “a software program (an AI) can—will—be believably human and lovable.”
Popular fiction hasn’t always treated robots so kindly. Even setting aside the cautionary tales in which self-awareness breeds machine monsters—The Matrix or 2001—you have the Star Wars universe, in which, many a commentator has pointed out, droids are basically slaves. That they are bought and sold, denied entry into certain gathering places, and subject to deactivation at their owner’s whim isn’t presented as a moral issue at all. C-3PO’s existential terror is just a punchline. (The Disney sequels, notably, now flirt with robo-liberation: BB-8 ratchets up the cuddly, pet-like air of R2-D2—the original trilogy’s one dignified droid—and Rey’s only apparent motive for first rescuing him is compassion.)
With voice control, personalization, and other recent consumer tech leaps making our gadgets feel more friendly, C-3PO’s plight may begin to seem more unacceptable. It’s natural to wonder: Is an object that gains consciousness deserving of the same treatment as a person? Does it have an inviolable right to life and liberty? Does its dignity matter? Scientists and philosophers have mulled these questions for a long time, and a spate of journalistic inquiries in recent years has brought them further mainstream attention.
Some thinkers speculate that human consciousness arises from very specific, cell-level processes that simply can’t be replicated in machines—and thus consider the entire issue moot. Others point out more glaring differences between the organic and artificial. “A human being is a unique and irreplaceable individual with a finite lifespan,” the computer scientist Benjamin Kuipers told Discover. “Robots (and other AIs) are computational systems, and can be backed up, stored, retrieved, or duplicated, even into new hardware. A robot is neither unique nor irreplaceable.”
Then there is the intractable theological case against robot rights. Judeo-Christian thought, for example, holds that human beings are uniquely made in the image of God, and the entire concept of a “soul” is typically reserved in the West for humans. The Center on Human Exceptionalism, which espouses “intelligent design” as an alternative to evolutionary theory and pushes back against some strains of environmentalism, warns against treating smart machines with the same consideration as human beings.
Of course, humankind isn’t in agreement about how to treat its own members—hence the existence of a discourse over “human rights” at all. The world is in even less agreement about how to treat animals, who have consciousness but not our species’s intelligence or self-awareness. If a robot might suffer, well, so does the cow who becomes hamburger meat, the average human omnivore might reason.
But pop culture has lately cast the debate in starker, more visceral, and more pro-robot terms than these. Westworld presents the enslavement of conscious machines as plainly unjust: The robots suffer so direly because they are like people. Ex Machina similarly depicts the captivity and domination of sentient droids as cruel. The Good Place builds our empathy for a helper by showing her becoming more humanlike before our eyes (and by having her be so charming in the first place).
Most decisive is the latest season of Black Mirror. Out of the six episodes released to Netflix in December, four obsess over the ramifications of AI. All are unequivocal that society ought to think carefully before vesting person-esque capabilities in machines—less because of what the machines would do to us (though that is the fear in the “Metalhead” episode) than because of what we’d do to them.
In particular, Black Mirror’s “USS Callister,” “Hang the DJ,” and “Black Museum” episodes all revolve around human consciousness that has been “uploaded” into computers, whether to animate video-game characters, enable simulations to test two real people’s romantic compatibility, or create a holographic tourist attraction. In all cases, the artificial humans experience real desire and, more poignantly, real suffering. The show wrings deep horror from the prospect of a thinking, feeling computer program being trapped: whether in a simulation or, in “Black Museum,” an actual prison cell. “Black Museum,” in fact, goes so far as to reference United Nations legislation in the near future over “human rights for cookies,” or sentient code.
It’s especially easy to empathize with Black Mirror’s digital ghosts because they are derived from real people. Yet in the show’s universe, too few people do empathize. Which raises the dark question of how much worse people would treat entities that don’t so blatantly resemble their friends but still do have a rich, lively consciousness.
In the second-season finale of The Good Place, Janet remains a faithful servant to Michael and the humans—but it’s harder than ever to tell whether that’s because she was created to serve, or because she now has real emotional loyalties. What might happen if she decides she wants a new job? Would it be right to reboot her again? Chidi might agonize over such questions in the abstract, but to a viewer, the answers feel clear. None of pop culture’s recent AI explorations argue that, in the religious sense, a robot’s potential soulfulness entitles it to actual heaven or hell. But they do imply a related thought: If there is a Good Place and a Bad Place, their occupancy may be determined by how we treat this world’s Janets.