Courts have pondered the extent to which a robot can resemble a living person, as in a 1993 lawsuit in which Wheel of Fortune letter-turner Vanna White claimed that a robot lookalike used in a Samsung ad campaign had violated her right of publicity (the trial court said no; an appeals court ruled yes). They have also weighed whether robots count as performers for the purposes of levying an entertainment tax (no, at least not in the case of the animatronic animals that alternately entertain and frighten children at Chuck E. Cheese restaurants).
So far, courts have mostly treated robots as mindless machines and held humans responsible for their actions. What’s changing now, Calo says, is that robots are becoming more capable of acting and thinking for themselves. “What’s exciting about robotics today, in part, is that they’re able to solve problems in ways people wouldn’t, and that’s not something courts have encountered or even imagined,” he says.
In the Columbus-America case, for example, it's not clear that an autonomous robot, such as one that executes a search pattern of its own design, would meet the criteria for telepossession set out by the 1989 ruling. There, the court emphasized the role of a human operator in directly controlling the robot's movements. But these days autonomous submersibles patrol the oceans on behalf of research institutions, navies, and private companies. One company, Liquid Robotics, boasts that its bots have logged more than a million miles collecting data for defense, oil and gas, and other clients.
Then there’s outer space. In November, President Obama signed a bill intended to promote space exploration by private companies, including ones interested in mining asteroids for minerals. That mining would almost certainly be done by robots, Calo says, and it’s not hard to imagine competing claims. In the future, space robot lawyer might be an actual job title.
In the meantime, Calo and others predict that the most interesting cases to confront the courts will involve robots with "emergent" behavior, that is, robots capable of solving problems and behaving in surprising ways. Such bots could complicate the determination of criminal intent, a crucial element in criminal cases.
An incident last year hints at the kinds of cases that could come up. Police in Amsterdam investigated a web developer named Jeffry van der Goot, who had created a Twitter bot that tweeted an apparent death threat directed at a local fashion show. The bot was an algorithm that remixed random phrases from van der Goot's personal Twitter account, along the lines of the sketch below. Van der Goot insisted he hadn't meant to threaten anyone and hadn't anticipated that the bot would do so. No charges were filed, but he disabled the bot at the police's request.
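The mechanics of such a bot are simple to sketch. The following Python snippet is a hypothetical illustration, not van der Goot's actual code: it chops a few source tweets into phrases and splices random fragments into a new message, which shows how innocuous source text can recombine into something that reads like a threat.

```python
import random

def remix(tweets, max_len=140):
    """Stitch random phrases from past tweets into a new 'tweet'.

    Hypothetical sketch of a remix bot; assumes phrases split cleanly
    on common punctuation.
    """
    # Break each source tweet into short phrases.
    phrases = []
    for tweet in tweets:
        for part in tweet.replace(",", ".").split("."):
            part = part.strip()
            if part:
                phrases.append(part)
    # Glue randomly chosen phrases together until we near the limit.
    out = random.choice(phrases)
    while True:
        nxt = random.choice(phrases)
        if len(out) + len(nxt) + 2 > max_len:
            break
        out += ". " + nxt
    return out

if __name__ == "__main__":
    # Invented sample tweets: harmless on their own, unsettling remixed.
    sample = [
        "Heading to the fashion show tonight, should be fun.",
        "I could kill for a decent cup of coffee right now.",
    ]
    print(remix(sample))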
The issue of anticipating what AI can do could also make it tricky to determine liability in civil cases, Calo says. He cites a classic law-school case involving a mink farmer who sued a nearby mill company, claiming that the company's use of explosives to clear a roadway had stressed his mink so badly that they'd eaten their young. The court ruled that panicked mink devouring their young was beyond the foreseeable consequences of blasting and denied the claim for damages.