How do courts know whether a new technology is just an improved version of something they've already seen, or something else entirely?
Some technological changes are small: There's not a world of difference between a rotary-dial and a keypad-dial phone. Others, such as cell phones, are huge, reshaping how we communicate, plan, and organize our lives.
The smaller changes, such as the shift from rotary dials to keypads, are no more than improvements to our existing capabilities. Some of these improvements are important -- say, a medicine with fewer side effects -- but they do not fundamentally reshape our culture. They are changes in degree.
But the bigger technological leaps are changes in kind. They are not just an enhancement of an old technology but something new unto themselves.
The difference is not always clear. Are smartphones just better cell phones, or are their capabilities so different as to make them a new thing?
For the most part, it's fine if we don't have a clear answer to that question. Smartphones may be just better cell phones in some ways, and may be something new in other ways.
But there's one area where teasing out whether something is a change in degree or a change in kind can have real-life ramifications: law. Judges must constantly grapple with the question of how we ought to apply old law to new technology: Is the new technology enough like an old technology that the old precedents apply? Or, is a technology so different that it requires new legal interpretations?
For example, does the right to bear arms extend to a machine gun? What about a rocket launcher? If law enforcement can seize my handwritten mail from a third party, can it therefore take my email from Google? Is email just a better version of mail (a change in degree), or is it a new capability (a change in kind)?
Of all the areas of Constitutional law, it is the Fourth Amendment that has borne the brunt of technological change (at least for now -- more on that later). Historically, the amendment limited the government's ability to enter a home (search) and take property (seize). Its framers did not imagine an era of cars, mobile devices, and communication over wires.
A case now before the Supreme Court asks the justices once again to tease out the logical corners of fourth-amendment law in relation to a new technology -- in this instance, to decide whether it is constitutional to use GPS to track a suspect's car without a warrant. In the case, the government maintains that since streets are public, it has a right to monitor someone's whereabouts when that person is in public. (Under fourth-amendment law, people in public places have much less protection than people in their private homes.)
The police have a long-established right to observe citizens, with plain eyesight or technology such as binoculars, when those citizens are out and about in public. Based on that principle, in 1983 the Court said that law enforcement could also use beepers to track a suspect's car, because beepers, like binoculars, would give police a picture of the suspect's public whereabouts, something the police were entitled to see: "A police car following [the suspect] at a distance throughout his journey could have observed him." Since the beeper did not allow anything that was not already possible, the beeper got the green light.
And with that, the Court opened the door to the current challenge.
Is a GPS like a beeper? Yes and no, but it's a lot easier to see how a GPS is like a beeper than it is a pair of binoculars. In this way, beepers act as a bridge technology, one that allows for what I'm going to call "argument creep" -- the process by which an argument is applied to situations it never accounted for, through gradual changes in technologies it never foresaw. Because the justices couldn't see the future, they compared beeper technology to what they knew, not what it could one day lead to. (In contrast, in another fourth-amendment case -- this one about using heat-seeking equipment to determine whether a suspect was growing marijuana inside his home -- Justice Antonin Scalia cautioned that the Court should take the "long view," accounting not only for the technology at hand, but ones that may be like it down the road, such as something that allowed police to see directly through walls.)
Because of argument creep, it's important to be careful about relying on analogies when examining a technological change. Instead of simply applying old standards relevant to old technologies to new situations, the courts must reevaluate each technology, not only in terms of what it does at a basic level, but also -- and more important -- in terms of how society feels about it.
Now, that can seem like a pretty squishy standard for judicial decision-making, but the question of how society feels is explicitly embedded in the Fourth Amendment in the word "unreasonable." This word, at least in part, asks for society's judgment about a government's action; it asks how society feels.
The D.C. Circuit Court of Appeals understood this and intuited, in the Jones case, that society does not feel the same way about GPS as it did about beepers (to say nothing of binoculars). When that lower court reviewed Jones's case, it ruled:
Society recognizes Jones's expectation of privacy in his movements over the course of a month as reasonable, and the use of the GPS device to monitor those movements defeated that reasonable expectation. As we have discussed, prolonged GPS monitoring reveals an intimate picture of the subject's life that he expects no one to have -- short perhaps of his spouse.
In other words, it took into account the subjective societal standard of "reasonable" and determined that GPS monitoring did not pass Constitutional muster.
The challenge is that this sort of judgment -- the ability to see, somehow, the societal attitude about a particular surveillance technique -- does not come easily to the courts, in part because it does not come easily to society as a whole. Technological changes can quicken society's pulse, inspiring both fearful denunciations (it will ruin the children!) and utopian visions. The implications of new technologies are usually not clear from the outset, and it can take a while for the dust to settle before a court can reasonably step in to make an assessment.
Perhaps the most vivid example of this confusion is the question of what (or who) will count as a legal person -- and therefore be entitled to due process and equal protection rights under the 14th Amendment -- in the future, an issue explored in a provocative essay by James Boyle. Boyle writes:
My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. ... They may be physically dissimilar to all biological life forms -- computer-based intelligences, for example -- yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination.
The answer may seem obvious -- how could you confuse a robot for a human? But then pause to consider that when the Constitution was written, slaves did not have the full rights of citizens. Neither did women. We continue to hotly debate whether a fetus is a person. The definition of a person is not exactly stable. In the coming years, we may find ourselves re-evaluating this fundamental idea, one that underlies our entire legal system, in light of technological advances from the fields of robotics and synthetic biology.
This points to a further difficulty in applying Constitutional law to new technologies: deciding that a technology is a change in kind and requires new legal judgment is only the first step. Perhaps judges will have no trouble saying that Turing-certified AI is not a human, and a GPS is not a beeper, and emails are not mail. What then? Will the robot have some rights, if not the status of a full person? What principles will judges be able to draw on for determining the contours of future fourth-amendment challenges? The results may not be clean and clear, but change -- both technological and legal -- rarely is.