“You just can’t differentiate between a robot and the very best of humans,” argues Dr. Lanning, director of the U.S. Robots lab in Isaac Asimov’s I, Robot.
The day my husband and I brought our robot home, we spent hours watching it. As it whirled and chugged around our living room furniture, we were glued to our seats—staring at the little guy as he mopped his way up one side of the room and down the other. We even rearranged the furniture as a test, curious to see how the four pounds of algorithmic cleaning genius would navigate a maze of kitchen chairs.
What we didn’t expect, however, was the depth to which we would try to make our new iRobot Braava human, or at least human-ish, by giving it identifiable characteristics. It wasn’t so much that we had simply bought a piece of technology; it was more like we had adopted a “something,” and that “something” had a relationship with us.
We christened him Isaac (yes, seriously), and his personality quickly took shape as we imbued his actions with meaning: Isaac was a chipper and earnest worker. We even talk about him like a living thing—a pet? a small child?—and create explanations to account for his behavior. (“He gets pissy trying to clean under the bookshelf.” “He has a hard time getting to the far side of the room when he starts over here.” “Wow, he really likes the wide, open areas.”)
We never anticipated how reciprocal our behavior would become either—how much we would shape each other. Isaac chirps when he’s finished a task and whines in frustration when he’s stuck. He plays a jaunty little tune when he’s finished mopping a room. We’re attuned to these sounds now, like we are to each other’s voices.
Although the directions for the Braava recommend moving obvious obstacles out of the cleaning path, we go one step further. Every time we have Isaac clean a room, we rearrange the room’s furniture completely, shoving chairs and the sofa to one side and rolling up rugs, to give him long, easy passes through the room. We block any floor-level bookshelf opening where he might get pinned. This game of furniture Tetris makes his cleaning job easier. And these actions are cyclic—the more we train him to clean the room efficiently, the more he trains us in the ways that let him do so. This pattern of iterative behavior translates into something much larger; it points to the underlying anthropic condition of human-robot interactions.
There are certain types of robots that we can and do anthropomorphize more readily than others; robots without outward behavior are much harder to personify than those that exhibit it. If a robot doesn’t “do” anything visible, it appears not to make choices. We can easily humanize something that we believe has agency, but technological objects that show no outward signs of choice can’t occupy the same relationship niche with us as those that do.
About the same time we got Isaac, we also purchased a Nest and a Nest Protect. The Nest quietly and stoically takes data and optimizes a temperature range for our domicile; it sits on the wall, barnacle-like, and occasionally texts us about our energy use. (“This month, the average Nest Thermostat owner in your area earned 15 Leafs.”) The Nest Protect has been more than adept at alerting me to cooking mishaps. (“There is smoke in the hallway. There is smoke in the hallway.”) But once both were installed, we basically forgot about them. Neither of these devices has the behavioral cachet for “successful” personification. Both are too passive for us to ascribe human-like behavior to them. They simply don’t require the same attention as other, more mobile, types of robots like Isaac.
The most telling aspect of anthropomorphizing robots seems to come from their motion. A robot’s movement is an easy proxy for action, and action, in turn, is a great proxy for agency. Although we push Isaac’s power button to start him up, we don’t drive him around like a remote-controlled car. He “chooses” where to go and then moves accordingly. Humans desperately want to assign agency to something that moves seemingly free of our direction; as such, we shape our reactions to what Isaac “is” or what he “does” around our expectations of his behavior. It’s almost as if we’re carving out a link and a space for Isaac in the Great Chain of Being.
On a fundamental, anthropological level, there has always been a deep-seated need to categorize and explain behavior. Even if we “know” that the behavior of something like Isaac is, well, robotic—based on a simple algorithm designed by the good folks at iRobot—we still assign human-like characteristics to his actions. Anthropomorphizing robot behavior provides a comfortable enough distance to make and re-make the robot as we best see fit. By making the robot more like us, we can interpret its behavior in a way that’s most convenient to our self-centric psyches. (Of course he likes to clean the floor! Of course he appreciates us making his job easier!)
What would Isaac think of this? How would he see these interactions? Where would he put himself in the Chain? Where would we put him? Or would he whirr in agreement with the robot Cutie in I, Robot, who argued that robots have completely replaced humans in existential purpose? That people are antiquated life-forms without the reason and prowess of robots?
Like Cutie, our Braava is an object that makes us think about what makes humans human—is it shape? Personality? Life-history? The device becomes a mirror that we hold up to consider how we think about agency, object-ness, and consciousness. How we interact with Isaac tells us more about ourselves and our strange relationship with technology than Isaac—who is, just to remind you, a motorized mop—tells us about technology itself.
When Isaac joined our family, we found that there was a deep, powerful underlying psychology behind human-robot interactions. It turns out, Asimov was right all along. You just can’t differentiate between a robot and the very best of humans. And Isaac makes us consider that every time he cleans.