A relatively unsung virtue of the U.S. Patent and Trademark Office is that its databases can be viewed collectively as a sort of cultural seismograph, registering interesting spikes of entrepreneurial enthusiasm. A patent application filed on January 10, 1995, is part of one such spike. Issued as U.S. Patent 5,629,678 ("Personal tracking and recovery system"), the patent is summed up in an abstract that begins,
Apparatus for tracking and recovering humans utilizes an implantable transceiver incorporating a power supply and actuation system allowing the unit to remain implanted and functional for years without maintenance. The implanted transmitter may be remotely actuated, or actuated by the implantee. Power for the remote-activated receiver is generated electromechanically through the movement of body muscle. The device is small enough to be implanted in a child.
Until recently such an idea might have seemed better suited to science fiction or political allegory than to real life. But in December of 1999 the patent was acquired by a Florida-based company named Applied Digital Solutions, and it is now the basis of an identity-verification and remote-monitoring system that ADS calls Digital Angel. "We believe the potential global market for this device," ADS announces on its Web site, "could exceed $100 billion."
New surveillance and information-gathering technologies are everywhere these days, and they're setting off all sorts of alarm bells for those who worry about the erosion of privacy. The result has been a clangor of dire predictions. Books have recently appeared with such titles as Database Nation: The Death of Privacy in the 21st Century (by Simson Garfinkel), The Unwanted Gaze: The Destruction of Privacy in America (by Jeffrey Rosen), and The End of Privacy: How Total Surveillance Is Becoming a Reality (by Reg Whitaker). Polls suggest that the public is gravely concerned: a 1999 Wall Street Journal-NBC survey, for instance, indicated that privacy is the issue that concerns Americans most about the twenty-first century, ahead of overpopulation, racial tensions, and global warming. Politicians can't talk enough about privacy, and are rushing to pass laws to protect it. Increasingly, business and technology are seen as the culprits. "Over the next 50 years," the journalist Simson Garfinkel writes in Database Nation, "we will see new kinds of threats to privacy that don't find their roots in totalitarianism, but in capitalism, the free market, advanced technology, and the unbridled exchange of electronic information."
There's a general sense, too, that businesses in the modern free market are indifferent to the threats their new technologies pose to privacy. That sense seemed powerfully confirmed in early 1999, when Scott McNealy, the chief executive officer of Sun Microsystems, was asked whether privacy safeguards had been built into a new computer-networking system that Sun had just released. McNealy responded that consumer-privacy issues were nothing but a "red herring," and went on to make a remark that still resonates. "You have zero privacy anyway," he snapped. "Get over it."
But something very interesting is happening: the market for goods and services that protect privacy is surging. Entrepreneurs are realizing that privacy and technology are not fundamentally at odds—and that, in fact, expectations of privacy have in large measure always been created or broadened by the arrival of new technologies. People are coming to accept the notion that the protection of privacy is a pervasive and lasting concern in the computer age—and that, indeed, it may turn out to be the true enabler of the information economy.
Companies old and new are getting into the business. The number of newly registered privacy-related trademarks and patents has risen dramatically in the past few years; they include everything from banking services and computer technologies to window treatments and even an independent software agent ("for protecting consumers' privacy") called Privacy Just Got Cool. Anonymous Web-browsing and e-mailing services are available from companies called Anonymizer, Hushmail, IDcide, PrivacyX, and ZipLip. An outfit called Disappearing has developed an e-mail system that allows users to send messages that permanently unwrite themselves after a previously specified amount of time. Sales of personal paper shredders are up. Personal bodyguards are increasingly in demand. American Express has just unveiled a system called Private Payments, which generates a random, unique card number for each online purchase. A California law firm now offers to prepare something it calls The Privacy Trust, which, it claims, "successfully conceals ownership of bank and brokerage accounts, the family home, rental properties, and interests in other entities." Money may soon begin to be "minted" solely in electronic form, creating "digital cash" that could make credit cards (and the data gathering they make possible) obsolete. There is serious talk of building privacy protection into the infrastructure of the Internet, and of using such protection, paradoxically, to make the flow of information freer than ever before.
Billions of dollars are at stake. A new sector of the economy seems to be coming into being. Among entrepreneurs and venture capitalists it already has a name. It's known as the privacy space.
The Decade of Tracking and Monitoring
The privacy debate is, essentially, a debate about the control of personal information. What's unsettling about Digital Angel, for example, is not that the remote electromechanical monitoring of a human being is possible. In fact, it's easy to see the potential benefits of such a technology: doctors and hospitals could use it to keep an unobtrusive twenty-four-hour watch on patients at home; military commanders could use it to monitor the exact locations of soldiers in battle. What is unsettling to a lot of people is the idea that personal data—in this case, one's very life signs—might be converted into information that could be exchanged, bought, or sold for secondary use without one's knowledge or consent. Conceivably, for instance, insurers or drug companies might pay a lot of money for access to the very specific information in hospitals' Digital Angel databases.
These examples are hypothetical, but the issue most certainly is not: there are plenty of ways in which personal data is already gathered and exchanged for secondary use. People give away vast amounts of valuable information about themselves, wittingly or unwittingly, by using credit cards, signing up for supermarket discount programs, joining frequent-flyer clubs, sending e-mail, browsing on the Internet, using electronic tollbooth passes, mailing in rebate forms, entering sweepstakes, and calling toll-free numbers. Such behaviors are essentially voluntary (although a somewhat abstract case can be made that they are the product of what has been called "the tyranny of convenience"), but many other ways of participating in everyday life basically require the divulging of information about oneself. A person can't function in American society without regularly using a Social Security number, which has become a de facto national ID number—and which, as such, is the key to all sorts of private information. If one needs a mortgage, as almost everybody buying a home does, one has to turn over pages of detailed background data, some of which banks can then sell to whomever they like. People who buy prescription drugs now leave a trail of highly sensitive (and therefore valuable) personal information that is often gathered up and sold. The proliferation of surveillance cameras in public places means that one's comings and goings are increasingly a matter of public record.
The now very familiar reaction to all of this was recently reprised for me by the privacy activist Richard M. Smith, who has made a name for himself by exposing false or misleading claims made by companies about their privacy practices. "This coming decade is going to be known as the decade of tracking and monitoring," I was told by Smith, who recently became the chief technology officer of a watchdog organization called the Privacy Foundation. "Technologies are going to come online to monitor us in ways we would never have imagined ten years ago. It's going to be with us throughout our lives. The past five years on the Internet have been the prototype of what's going to happen in the offline world. Cell phones. Smart cards. Digital TV. Biometrics. It's happening. There are going to be millions of things tracking us that we've never even dreamed of."
It's a complicated equation, of course. "The same technologies that have raised concerns about a 'surveillance society' have historically made possible many benefits that most citizens would prefer not to surrender," Phil Agre, an associate professor of information studies at the University of California at Los Angeles, has written, in Technology and Privacy: The New Landscape (1997), a thought-provoking collection of essays edited by Agre and the privacy advocate Marc Rotenberg. Even Alan Greenspan, the chairman of the Federal Reserve Board, has weighed in on the topic. In a 1998 letter to Congressman Edward J. Markey, Greenspan wrote,
The appropriate balancing of the increasing need for information in guiding our economy to ever higher standards of living, and essential need of protection of individual privacy in such an environment, will confront public policy with one of its most sensitive tradeoffs in the years immediately ahead.
The gloomy assessment of that tradeoff today is that privacy concerns are losing out, and that something needs to be done about the problem right now, before patterns are established and built into the infrastructure of the economy. (In some respects this argument is made for the benefit of future generations, because voluminous information about people alive today has already seeped out into the public domain.) The national mood has led to a flurry of privacy-related activity in Congress. Pending Senate bills include the Consumer Privacy Protection Act, the Privacy and Identity Protection Act of 2000, the Notice of Electronic Monitoring Act, the Consumer Internet Privacy Enhancement Act, the Secure Online Communication Enforcement Act of 2000, and the Freedom From Behavioral Profiling Act of 2000.
Not everybody, however, has faith in the government's ability to legislate control of—or even to understand—an issue as complicated and as rapidly changing as privacy in the information age. American industry has therefore come out in favor of self-regulation—assuming that businesses, in response to a form of peer pressure, will individually and collectively develop reasonable methods for protecting privacy. (To date the most visible results of this approach are the fairly easy-to-find privacy policies posted on company Web sites.)
The relative merits of legislation and self-regulation are fiercely debated, and will no doubt continue to be so for some time. But this story is not about that debate. Rather, it is about the fact that many businesses view the coming several years—the period during which the debate will probably play itself out—as an opportunity to seize lucrative leadership in the privacy space.
"An Emerging Business Imperative"
"What so many businesses don't get," Ann Cavoukian, the information and privacy commissioner of Ontario, Canada, told me not long ago, "is that you shouldn't be having an adversarial relationship with privacy. Privacy is good for business. I've argued this from day one. If you're in the information business today, you've got to lead with privacy."
We were sitting in Cavoukian's office, on the seventeenth floor of a high-rise in midtown Toronto, chatting and nibbling chocolate-covered biscuits. The room was huge, immaculate, and tastefully appointed in the somewhat generic way that the offices of important government officials often are. We sat next to a coffee table, on tightly upholstered furniture; CNN flickered silently on a television in the background. A wall of windows provided a commanding view of the city.
I had sought out Cavoukian because I had just read the book she wrote with Don Tapscott, Who Knows: Safeguarding Your Privacy in a Networked World (1997), and had been impressed by its pragmatic approach. One sentence in particular had struck me: "Protection of privacy is not just a moral or social issue; it is also an emerging business imperative." This ran counter to most of what I had read, and I wanted to hear more.
Cavoukian—an energetic woman of Armenian descent, who happens to be the sister of the children's songwriter Raffi—radiates enthusiasm, especially when the topic is privacy. This is as it should be: her job, as commissioner, is to educate the public about privacy matters and to ensure that all government agencies in Ontario abide by the province's freedom-of-information and protection-of-privacy laws. Her office's mandate doesn't yet include oversight of the private sector, but pending legislation may soon change that. In any case, she's clearly committed to engaging local companies in a meaningful dialogue about privacy.
Cavoukian's reach extends far beyond Ontario. She and her staff have developed enough of a reputation for leadership and innovative thinking that companies from the United States—where her job has no equivalent—regularly seek her advice. The day before my visit a delegation from American Express had come to discuss the company's brand-new suite of privacy initiatives.
"What I caution people against," Cavoukian said, "is throwing in the towel. It's still early days, and we can't give up just because people say 'You have no privacy, get over it.' So much has been written about the erosion of privacy that it makes you want to say 'Enough!' Let's take all that as a given, and focus on the exciting new things that are happening. In this decade we're going to see the emergence of a new breed of privacy-protective company. It's leading-edge."
Cavoukian shifted forward in her seat excitedly. "There's a book that predicted much of this back in 1997, when there was a lot of privacy erosion happening without much protection. It was one of those turn-of-the-millennium books (what's going to happen, lots of predictions, that sort of stuff), by two business types, Jim Taylor and Watts Wacker, called The 500-Year Delta: What Happens After What Comes Next. I loved their take on things. They said, and I can quote this because I use it so much, 'Here's a prediction you can take to the bank: Within a decade, privacy management will be one of America's great growth service industries.' Their argument was that privacy is becoming increasingly scarce, and as it becomes more scarce, it's going to become more valuable—and that means you'll soon find new businesses that are developing to try to protect it. I thought that was great. And you know what? It's starting to happen. For example, have you heard of Zero-Knowledge Systems, in Montreal?"
Cavoukian went on to describe the company as "in a class by itself" and "the Mercedes-Benz of anonymizer-technology companies." It sounded intriguing.
Hitting a Fly With a Sledgehammer?
"It's a neat space to be in," Dov Smith told me, as we walked through the offices of Zero-Knowledge Systems. "The privacy space." Young and soft-spoken, Smith is the company's director of public relations, and he was giving me a tour of its brand-new headquarters, which occupy three floors of an upscale office building in Montreal's Latin Quarter. The design was spare, in a Bauhaus sort of way that implied a recent and significant influx of venture capital. Doors were made of glass, and clicked open only when employees flashed special cards at nearby sensors. Imposing stacks of sleek black computer equipment stood behind big hallway windows, quietly flashing little red and green lights. Tiny black halogen lamps hung over clusters of colorful retro chairs and tables in the central hallways, which formed a square around a large glassed-in atrium.
"We like to think of ourselves as a Silicon Valley company in Montreal," Smith said proudly. He showed me vending machines stocked with free juice and soda; a cappuccino bar with a pool table, a Ping-Pong table, and a dart board; an in-house cafeteria run by a local restaurateur; and bunk beds for anybody who might need to crash. Massage was of course also available—for a high-tech start-up these days, Smith said, it is "almost de rigueur."
Zero-Knowledge is a privately held company that was co-founded in 1997 by two brothers, Austin and Hamnett Hill, and their father, Hammie. It claims, quite simply, to be "leading the privacy revolution." Currently the only product the company offers is something it calls Freedom 2.0, which combines a free computer program with an international network of participating Internet service providers. Some basic privacy and security services are free, such as a personal firewall and an ad manager, but for $49.95 one gets access to a premium service that essentially amounts to an impenetrable online cloaking device. By wrapping information in multiple layers of the strongest encryption available and passing it through the Freedom network, Zero-Knowledge allows customers to establish as many as five untraceable pseudonymous digital identities, or "nyms," with which to browse Web sites and send e-mail.
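The layering that makes a "nym" untraceable is the interesting part: the sender wraps a message in one layer of encryption per relay, so that each relay can strip only its own layer and sees only its immediate neighbors, never the whole path. The toy sketch below illustrates that idea only; it substitutes a hash-derived XOR stream for real encryption, and the relay keys are invented, so it is an illustration of the layering principle, not Zero-Knowledge's actual protocol.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from a key (a toy stand-in for real encryption).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with a key-derived stream; applying the same key twice removes the layer.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, relay_keys: list) -> bytes:
    # The sender adds one layer per relay, innermost layer first.
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def unwrap_at_relays(packet: bytes, relay_keys: list) -> bytes:
    # Each relay in turn strips exactly one layer; no single relay
    # sees both the sender and the final plaintext destination.
    for key in relay_keys:
        packet = xor_layer(packet, key)
    return packet

relay_keys = [b"relay-1", b"relay-2", b"relay-3"]  # hypothetical relays
onion = wrap(b"hello from a nym", relay_keys)
assert onion != b"hello from a nym"                     # wrapped packet is opaque
assert unwrap_at_relays(onion, relay_keys) == b"hello from a nym"
```

A real system would use authenticated public-key encryption at each layer and pad messages to a fixed size; the point here is only that privacy comes from the structure of the routing, not from trusting any one intermediary.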
Plenty of other companies have in the past couple of years jumped into the online anonymizing business. Many provide their services free, in fact. But none offers the pseudonymous segmentation of identity that Zero-Knowledge makes possible, and none makes the claim, as Zero-Knowledge does, that information about its users simply cannot be retrieved. Many anonymizer companies concede that if presented with a subpoena, they can, and indeed must, supply information about a given user's browsing habits and identity. This prompts skeptics to point out that if a company can access data about its users, then others (unprincipled government agents, hackers, snooping employers, litigious ex-spouses, criminals, and so on) can too, with or without a subpoena—and that means privacy isn't protected.
To avoid that bind Zero-Knowledge has invested a lot of time and money in developing cryptographic privacy solutions that, it claims, guarantee that it has no data on and—as its name implies—knows absolutely nothing about its users. "Some people might think we're hitting a fly with a sledgehammer," Dov Smith told me. "I mean, all of this crypto for e-mail and Web browsing. But we wanted to establish ourselves. We think we can become the dominant player in a multinational business that cuts horizontally through every market."
That stopped me short. It seemed quite a claim for a company operating in what had to be the rather limited niche of anonymous Web browsing and e-mailing. It called to mind a conversation I had had not long before with Ruvan Cohen, the president and chief operating officer of iPrivacy, a new and ambitious New York-based company that aims to enable private online buying and shipping—a tricky feat that almost nobody else is now attempting. "A lot of these companies float privacy up the flagpole," Cohen said about the anonymizers, "and then nobody comes. So to have five thousand customers, or even twenty thousand customers, the best of whom are Chinese dissidents and Kosovar rebels who don't want to be tracked when they're surfing, and the worst of whom are pedophiles and drug dealers—that's not a business that I would particularly want to be in. The truth is, how do you make money in an e-mail business? How do you make money in surfing? The only way you can do it is advertising. And the only way you can get advertising is if you're going to have customer information—and if you're going to use it. To me, the logic of that business model tends to fall apart." I agreed, and planned to press Austin and Hamnett Hill about such questions.
As we wrapped up our tour, Smith deposited me in a conference room and handed me a collection of articles that I "really should read" about the importance of cryptography, specifically for Zero-Knowledge but also for the privacy world in general. Then he went to find the Hill brothers.
In 1787, while serving as the U.S. ambassador in Paris, Thomas Jefferson sent a report to James Madison on the volatile situation in pre-Revolutionary France. "These views are said to gain upon the nation," he wrote. "The 1647 678.914 for 411.454 is 979.996.607.935 of all 789. The 404 is 474.872. And an 223 435.918 of some sort is not impossible."
The message was diplomatically sensitive, and to keep its contents private Jefferson had resorted to using a secret cipher that he knew only Madison could unlock. (Decrypted, the message read, "These views are said to gain upon the nation. The king's passion for drink is divesting him of all respect. The queen is detested. And an explosion of some sort is not impossible.")
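Jefferson's cipher was a nomenclator: sensitive words were replaced by numbers from a codebook shared only by sender and recipient, while innocuous words traveled in the clear. A simplified sketch is easy to write. The codebook below uses the correspondences recoverable by lining up Jefferson's cipher text with Madison's decryption (1647 for "king's," 404 for "queen," and so on); the code itself is an illustration, not a reconstruction of the historical cipher's full mechanics.

```python
# Toy nomenclator in the spirit of Jefferson's cipher: a shared codebook maps
# sensitive words to code numbers; anything not in the book is sent as-is.
codebook = {
    "king's": "1647",
    "passion": "678.914",
    "drink": "411.454",
    "respect": "789",
    "queen": "404",
    "detested": "474.872",
}
decode_book = {num: word for word, num in codebook.items()}

def encipher(text: str) -> str:
    # Replace each codebook word with its number.
    return " ".join(codebook.get(word, word) for word in text.split())

def decipher(text: str) -> str:
    # Reverse the substitution using the same shared codebook.
    return " ".join(decode_book.get(token, token) for token in text.split())

secret = encipher("the queen is detested")
print(secret)            # the 404 is 474.872
print(decipher(secret))  # the queen is detested
```

The security of such a scheme rests entirely on keeping the codebook secret, which is why modern cryptography instead keeps only a key secret and publishes the algorithm.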
According to Bruce Schneier, the author of Applied Cryptography (1995), the development and use of such codes was until recently "the province of learned people everywhere." After World War II, however, cryptography essentially became the secret and exclusive province of government. In fact, the cryptographic systems produced by computers in this country were considered so powerful and so important to the national interest that they were classified as munitions, and their export was eventually banned by the Department of State. But the advent of personal computers changed everything. Suddenly the idea emerged that cryptography could and should protect not only national secrets but also private personal data stored on and transmitted between computers. In 1991 a software engineer named Philip Zimmermann created and made freely available a powerful encryption program called Pretty Good Privacy. PGP soon made its way overseas, and the U.S. government—which strongly resisted the idea of putting top-grade cryptography into public hands, for fear of its abuse by unsavory elements—opened a criminal investigation into Zimmermann for, among other things, having exported a munition. Defenders of PGP and other forms of encryption rallied behind Zimmermann and made his case a cause célèbre, arguing that the expression of ideas in cryptography, like any other form of expression, is protected by the First Amendment. (Applied Cryptography, a sort of how-to manual, was written and published very much in that spirit.) The government investigated Zimmermann for three years before yielding to the inevitability of publicly available cryptography and dropping the case.
Steven Levy, the author of Crypto: When the Code Rebels Beat the Government—Saving Privacy in the Digital Age, exchanges e-mail with Toby Lester about the impact of cryptography on our daily lives.
Zimmermann became the model for a new breed of privacy activist—namely, one who uses computer technology to protect privacy. In 1992, inspired by his example, a band of mathematicians, computer scientists, and software engineers based primarily in the San Francisco area began to discuss ways to defend personal privacy in the computer age. They were brought together by an intense ideological commitment to privacy and free speech, and by an anarchistic mistrust of government and big business. They dedicated themselves to creating and widely disseminating the best cryptography possible, for all to use. They called themselves the Cypherpunks.
"Privacy is necessary for an open society in the electronic age," Eric Hughes, one of the original Cypherpunks, wrote in the opening of "A Cypherpunk's Manifesto," which he put online in 1993. The document continued,
People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers. The technologies of the past did not allow for strong privacy, but electronic technologies do.
We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money...
Cryptography will ineluctably spread over the whole globe, and with it the anonymous transactions systems that it makes possible.
The Cypherpunks' philosophy is extreme—they believe that cryptography and anonymous transactions should and will inevitably make the idea of the nation-state wither away—and their numbers are relatively few, but their influence has nevertheless been impressive. Their successful efforts to spread cryptography around the globe were a major factor in the U.S. government's decision in 1999 to relax its restrictions on the export of cryptography. And they have worked on and enabled a host of technologies that businesses—Zero-Knowledge Systems among them—are beginning to use to protect privacy online.
The Cadillac of Anonymizers
Eventually Austin and Hamnett Hill shambled in to meet me. In their late twenties, the brothers are already millionaires, from having created and then sold Canada's third largest Internet service provider. They both had goatees and wore black shirts and seriously baggy jeans. They had the habit of finishing each other's sentences. My first impression was that I was meeting two members of a white-guy rap group, but it faded fast. They had some very interesting things to say.
I started by asking how they had decided to create a privacy-protection business.
"The genesis of the idea," Austin said, "came from things like the PGP debates, which had a real civil-rights feel. We looked at that when we were selling our last business. People were so passionate about it, and we realized we had a chance to change what the Internet will look like ten years from now. We loved the idea that we could do something that would actually make a difference in the world—and that we might make a lot of money doing it. So we said, 'Okay, this is only going to get worse. The more computers are intertwined with our regular lives, the bigger these issues are going to be—'"
Hamnett jumped in. "Going in and redoing back-end systems and architecture and all that didn't seem to be realistic, so we started to try to think about what the best way was to chip away at this big block."
"Right," Austin resumed. "We said, 'What if we could take something like encryption but make it so simple that it would be to the privacy world what Netscape was to the Web—in other words, a platform that kicks off widespread change? What if we could build the ultimate consumer privacy tool?' That's how we came up with the idea for Freedom."
He paused and looked me in the eye. "It's the Dolby analogy," he said. "Who's Dolby's competitor?"
I couldn't think of one.
"Nobody can answer that question! You just kind of expect that audio equipment will have Dolby. People will soon have the same expectations with regard to their digital devices. They'll ask, 'Are they privacy-enabled?' You may not understand it, and it may just be a menu item, but you want to know that it's there, and that it's built in by default.
"Anyway, we thought that if we could come out with this tool, so that people could express their concerns or their passions about privacy by actively using something, we could get huge brand loyalty, and we'd be at the heart of the debate. We wanted to make ourselves the experts, the leaders in the privacy space. Then it wouldn't matter where things went, because people would come to us to solve the privacy problem, and we'd figure out a way to make a really good business."
Austin's cell phone rang, and he left the room to take the call. Hamnett picked up where his brother had left off.
"We were just, like, 'Hey, get the best brains in privacy—the best technologists, the best policy people—and focus on them.' The first group we had to get on board was the Cypherpunks, because they really do the best thinking about crypto systems, particularly about how those systems apply to social issues like privacy, and because they can just rip you apart if you don't do things right."
These efforts have paid off. Zero-Knowledge is awash in investment (as of this writing it has received more than $30 million), and its reputation as the Fort Knox or the Cadillac of anonymizers seems firmly established. I still wondered, though, as I listened to Hamnett, whether all of this made good business sense. When Austin returned, I asked.
"Not a lot of people understand the privacy space," Austin said. "Freedom's going to be only a small part of what we end up doing. It's just our first entry into the space. You know, a lot of people think privacy's like the weather: everybody talks about it, there's not a lot you can do about it, so the best you can do is build a niche market selling umbrellas. That's what some people think we're doing. It's a view that's rather limiting—but we're actually working hard not to change it for a while."
The brothers chuckled conspiratorially.
"We think privacy is going to be one of the biggest industries out there," Austin continued, "because it's a foundation-level industry that touches every single aspect of personal information. Think about how much business is predicated on the flow of personal information! If you need to add privacy as a foundation under all of that, what is that industry worth? It's huge. Billions and billions and billions. We're very glad to see other privacy players stepping into the mix, by the way, because it means that privacy really is becoming an industry. And an industry that has a marketplace of solutions has more chance for success."
In the weeks after my visit to Zero-Knowledge, I began to think about the historical relationship between technology and privacy. What interested me most was that people have always seemed to associate the arrival of new technologies with the invasion of privacy. It's a phenomenon that the privacy activist Robert Ellis Smith describes at some length in his new book, Ben Franklin's Web Site: Privacy and Curiosity From Plymouth Rock to the Internet (2000), a fascinating study of attitudes toward privacy in American history. "Each time when there was renewed interest in protecting privacy," Smith writes about the modern era,
it was in reaction to new technology. First, in the years before 1890, came cameras, telephones, and high-speed publishing; second, around 1970, came the development of mainframe computers; and third, in the late 1990s, the coming of personal computers and the World Wide Web brought renewed interest in this subject.
To find out more about the relationship between technology and privacy, and particularly between computers and privacy, I sought out Phil Agre, the UCLA professor. I asked him if part of the reason computers seem to be such a threat is that they were inadvertently designed without privacy concerns in mind—somewhat in the way they were inadvertently designed without the year 2000 in mind.
"Privacy wasn't left out unintentionally," Agre responded. "The main tradition of computer systems design originated in military and industrial contexts, where surveillance and control were taken for granted as good things. It was also informed by the ideology of technological rationalization, according to which there is a 'one best way' to organize the world, which it is the engineer's job to discover and impose. The command-and-control assumptions of that world view are deeply ingrained in the practices of systems analysis and design that are taught in school to thousands of engineering students every year, and that are reified in thousands of legacy computers that new computers need to be compatible with.
"That said, one area where the 'unintentional omission' story makes sense is the Web. Personal computers were shaped by a false idea about the person—roughly, the idea that each person is an island. They were also shaped by the need for great simplicity, given how small and weak the first personal-computer hardware was. So those early PCs didn't have real operating systems. The operating systems of mainframe computers had, and have, serious ideas about security—for example, means of preventing users from reading one another's files or trashing the system. But all those techniques went out the window on the personal computer. Not only was there no room for them but they were thought unnecessary, because there was only one user. But then personal computers were attached to computer networks. The computer network was treated as a peripheral device, like a printer, and the whole idea that one was opening the computer out onto a potentially untrustworthy domain was hard to comprehend, much less deal with technically."
In other words, we have inherited computer systems and communications technologies that—partly by design and partly by chance—are not inherently privacy-friendly. Lots of transparency has been engineered in, lots of security has been left out, and we're stuck with a system that from a privacy standpoint is distressingly leaky.
It is interesting to note here that no right to privacy is specified in the Constitution. This comes as a surprise to many people, who tend to assume that privacy is one of the bedrock rights upon which American society is built. But as Smith points out in Ben Franklin's Web Site, until the late nineteenth century Americans for the most part thought of privacy as a physical concept: if one needed to protect it, or just wanted more of it, one simply moved west, where there were fewer people likely to know or care what one was doing. By the closing years of the nineteenth century, however, things had changed: the frontier's limits had been reached, the population was growing rapidly, and a blitz of novel technologies had arrived.
Two of these were cameras and high-speed printing presses. For the first time, spontaneous, unposed pictures could be taken, quickly printed in newspapers and books, and distributed widely, all without the subjects' consent. This possibility was highly unsettling to many people (as it still is in remote cultures less familiar with photography), and it led to an article by Samuel D. Warren and the future Supreme Court justice Louis D. Brandeis—"The Right to Privacy," published in the Harvard Law Review, on December 15, 1890—that began to define privacy for the modern age.
"Recent inventions and business methods," Warren and Brandeis wrote,
call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls [in a famous treatise on torts that was published in 1879] the right "to be let alone." Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that "what is whispered in the closet shall be proclaimed from the house-tops."
... The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.
Warren and Brandeis's masterstroke was to document in the common law the presence of a "principle which protects personal writings and any other productions of the intellect or of the emotions," and to argue that "the law has no new principle to formulate when it extends this protection to the personal appearance, sayings, acts, and to personal relation, domestic or otherwise." In other words, the two men broadened the legal conception of privacy to include not only the tangible but also the intangible realm.
The Privacy Pragmatists
The argument set forth in "The Right to Privacy" has been enormously influential. One cannot help hearing echoes of it in, to take just one important example, the landmark privacy decision set forth by the Supreme Court in the 1965 case Griswold v. Connecticut. Striking down state laws that made the use of contraceptives by married couples illegal, Justice William O. Douglas wrote, in the majority opinion, "Specific guarantees in the Bill of Rights have penumbras, formed by emanations from those guarantees that help give them life and substance ... Various guarantees create zones of privacy."
Not long after that ruling the legal scholar Alan Westin published the groundbreaking study Privacy and Freedom (1967)—a book that, years ahead of its time, jolted the nation and the government into an awareness of privacy concerns in the information age. Arguing in the tradition of Warren, Brandeis, and Douglas, Westin made a compelling case that the Bill of Rights guaranteed a zone of privacy not only for one's person, one's sayings and acts, and one's relations but also for information about oneself. "Privacy is the claim of individuals, groups, or institutions," he wrote, "to determine when, how, and to what extent information about them is communicated to others."
Ever since the publication of Privacy and Freedom, Westin has been sought out as an expert on information privacy and business. Over the years he has served as a privacy consultant for more than a hundred companies—including American Express, Bank of America, Equifax, and IBM—and as a member of state and federal privacy commissions. Since 1993 he has been the editor and publisher of the influential bimonthly newsletter Privacy and American Business. During the past two decades he has worked on forty-five national public-opinion and leadership surveys on privacy. One of the results of his work on those surveys is that he has developed a widely cited taxonomy of American consumers, based on their attitudes toward privacy.
Westin divides the population into three categories. On one end of the spectrum are what he calls "privacy fundamentalists" (approximately 25 percent). They are deeply concerned about privacy rights and potential invasions of privacy, and they therefore reject any consumer benefits that require oversight of their activity or the release of data about themselves. The appeal of Zero-Knowledge, it would be fair to say, is to date largely limited to privacy fundamentalists.
At the other end of the spectrum are "the privacy unconcerned" (12 percent)—people who don't care to think about privacy, don't see any problem with giving their information away, and don't worry at all about how that information might be used. ("If McDonald's offered a free Big Mac for a DNA sample," Bruce Schneier told me, describing this attitude, "there would be lines around the block.")
Most people (63 percent) fall into an intermediate category that Westin calls "privacy pragmatists." Such people are always balancing the potential benefits and threats involved in sharing information, and are particularly concerned about what Ann Cavoukian described to me as "function creep"—that is, the secondary use (deliberate or inadvertent) of information that was originally divulged for one purpose only. Depending on what privacy pragmatists get in return for their information, they are willing to forsake different degrees of privacy protection.
From a business standpoint, this category is absolutely crucial. "The struggle over privacy today," Westin told me, after gruffly dismissing the idea that anonymizers will ever have broad appeal, "is the struggle over the minds and hearts of the privacy pragmatists. And infomediary work is where that struggle is going."
The word "infomediary" first attracted widespread attention when it appeared in the January-February, 1997, issue of the Harvard Business Review, in an article titled "The Coming Battle for Customer Information," by John Hagel III and Jeffrey F. Rayport. The authors wrote,
We believe that consumers are going to take ownership of information about themselves and demand value in exchange for it ... Consumers probably will not bargain with vendors on their own, however. We anticipate that companies we call infomediaries will seize the opportunity to act as custodians, agents and brokers of customer information, marketing it to businesses on consumers' behalf while protecting their privacy at the same time.
That article and a book that grew out of it—Net Worth: Shaping Markets When Customers Make the Rules (1999), by Hagel and Marc Singer—introduced a new business model for the information age. Already, lavishly funded companies with such names as Persona, Privada, and Lumeria have begun to put it to the test.
"We're Your Agent"
I called up Fred Davis, the founder and CEO of Lumeria, which has its headquarters in Berkeley, California, to find out more about the infomediary business. Davis is a longtime computer visionary and entrepreneur who has been involved in, among other ventures, the start-ups of the Ask Jeeves search engine, the technology company C|net, and Wired magazine. He has also been the editor of MacUser and PC Week magazines, and is the author of thirteen computer books, including The Windows 98 Bible. He's a manic character who seems to operate in a permanent fast-forward mode, speaking in unstoppable gushes of enthusiasm and self-promotion. All I had to do was mention that I was interested in privacy, and he was off and running.
"Sure! I'm always happy to talk about my favorite subject!" he told me. "Privacy is perhaps the biggest social issue of the Internet age, and today's practices don't just suck, they're downright unconstitutional! The Internet was never designed to be private, and in the early days there were attempts to take advantage of the fact that the technological infrastructure was designed to have everything open, and that there was no social infrastructure. People want to close their eyes to this, but you know what? Gays and people of color are targeted for hate crimes, and abortion doctors are targeted, all based on information that gets out over the Internet. Hello? What? It's a serious problem.
"Basically, when we started up, three years ago, I said, 'I don't want my privacy invaded anymore, and I don't want anybody stealing or selling my information without (a) my permission and (b) their cutting me in on the sale.' What we figured out pretty quickly was that we needed to help people protect their identity. The knee-jerk reaction to privacy problems has been to take your identity away completely. That's what these anonymizers do. But that's a horrible thing too. Think about it: if we make everybody anonymous, they're going to lose the value of their identity, and they're not going to be able to benefit from who they are. That's what this whole Net Worth book was about: there's five billion dollars sitting on the table for the company that figures out how to give people control back over their information! It's huge! If we pull this off, we're a Fortune 500 company!"
I asked how Lumeria had been designed.
"Our model was that the individual should be able to control what information is shared with what entities—people, Web sites, commerce partners, whatever. What we needed was a system that could present information about you without revealing who you are, not even to us at Lumeria. So what I did was, I went out and hired a bunch of hackers and security nuts, and said, 'Let's re-engineer. Let's create a system that is comprehensive enough so that even when all of your information—browsing habits, medical data, bank accounts, school transcripts—becomes digitized and moves into the Internet age, you can have a unified way to control and reveal and protect it.' Basically, we created a new piece of Internet infrastructure for the secure communication and authentication of transmissions across the Internet. It took us a few years and millions of dollars to develop it, but now it's here, and it's pretty cool. The consumer has complete control for the first time. And the legislative future plays into our hands. Soon we'll be a compliance mechanism for new privacy regulations. It's a great sell. It's a no-brainer. Businesses won't have to understand all of the great things we do for our customers. It's just, the feds are going to bust you if you don't use it!"
Davis went on, and on, sometimes in great technical detail and always at lightning speed, but as I understood it, the gist of his plan for Lumeria was this: A customer will store personal data in what is called a SuperProfile. The more specific the information stored (about such things as age, sex, family status, sexual orientation, income level, assets, consumer preferences, and current shopping interests), the more valuable that profile will become to advertisers, who will pay handsomely to participate in Lumeria's network. They will do this because Lumeria will give them the chance to do highly targeted, permission-based marketing—to offer special deals on, say, new cars or house-painting services or plane tickets—exclusively to people of a predetermined demographic profile, and often only to people who have already expressed an interest in the very things being advertised. Most of the money from advertisers will go directly to Lumeria's users; Lumeria will take a small cut.
According to Davis, this is a win-win proposition. Advertisers will save billions of dollars formerly spent on wasteful direct-mail, radio, TV, and print advertising campaigns, and will be better able to cultivate long-term relationships with preferred customers, by giving them exactly what they want. Consumers will be able quickly to get information about and find the best deals on whatever products they're interested in—and will get paid for doing it, simply by being a part of Lumeria's network.
I asked Davis whether gathering all of that data from consumers wouldn't create what is often referred to in the security business as a "honeypot"—an alluring mass of valuable information about consumers that is a natural target for privacy invasion.
"The thing is, remember, we don't have a database," he said. "We're not like some of these other guys, who put their information in a database and then just say they won't reveal it. What if you buy their company? Then what? Then you have their data! Our system is different: it distributes data across the Web. Nothing resides in a central database, and we have no way of gaining or granting access to your data. Only you do. We're just a platform for infomediation, and this means that every one of our value propositions is consumer-facing. We work for you, the consumer, as an agent to extract the maximum value for your identity. We help you copyright your profile. Not only that, we take your click trail and consider that a unique work of authorship. We'll even allow members to police the system, create mini-class actions, and—finally!—make prosecuting privacy violations cost-effective. That's aggressive. We're fighting fire with a neutron bomb. We're your agent."
Anonymizers and infomediaries aren't the only players in the privacy space. One of the hottest new jobs in certain sectors of the economy is that of chief privacy officer, or CPO. This is by no means just a Silicon Valley fad; rather, it represents a certain maturing of businesses' approach to privacy. "It's really healthy," Austin Hill told me. (Hill helped to start the trend, in fact, by hiring Stephanie Perrin, his policy expert, and by designating her Zero-Knowledge's CPO.) "What we're discovering," he added, "is that lots of places are hiring CPOs, often as a result of public-relations concerns, but then these guys are turning around and telling them, 'Hey, you've got some real problems. We've got to pay attention to this stuff.'"
All companies whose business in one form or another involves the management of personal information will probably end up having chief privacy officers in the near future. American Express, AT&T, and Microsoft already have them. So do companies as varied as Delta Airlines, Mutual of Omaha, the Royal Bank of Canada, and Equifax. Ahead of the game as always, Alan Westin has very recently created the Association of Corporate Privacy Officers, currently the only organization of its kind, and has begun to run a highly acclaimed training course for new CPOs. I asked him about the emergence of this new line of work.
"In the United States," he said, "the private sector has announced that it isn't in favor of federal privacy regulation covering the whole privacy sector. So what we've had is some sector-specific legislation, in financial services and health, but self-regulation elsewhere, especially on the Internet. Internet companies are now supposed to develop their own privacy policies. But as soon as you say you're going to self-regulate, you've got a problem. Who's going to develop the policy? Who's responsible for tracking legislation? For tracking competitors? Who's going to train employees? Who can do risk assessments? Who can do damage control in the press if there's a privacy firestorm? People pretty soon saw that they needed CPOs."
Even if not every company will need to hire a CPO, the signs are that the services of privacy consultants are going to be in regular demand from now on—to help companies develop overall privacy policies, to examine new and existing information-management systems for "seepage," to assess compliance with new legislation, to provide outside verification of a company's privacy-protection claims. New companies are emerging to provide a whole range of such services—an impressive example is Fiderus, based in North Carolina, which was founded by a former worldwide director of security and privacy for IBM, and which claims to be "the first company in the world to focus entirely on services and solutions for security and privacy." Zero-Knowledge has recently announced a new service, to be called Managed Privacy Services, that will combine consulting and technical solutions in order to "enable businesses to comply with privacy legislation, maximize customer relationships, and build consumer trust."
But existing companies, too, are launching themselves wholeheartedly into the business. Big Five accounting firms have begun doing privacy audits, in an attempt to cash in on their long-standing reputation as trusted, impartial third parties. More than 200 companies have already submitted to audits by PricewaterhouseCoopers, including Microsoft's travel Web site, Expedia.com. Even though the main value of such audits at this point is that they build public confidence, business seems brisk all around. "We've seen a dramatic increase in demand for our privacy services in the past couple of months," I was told by Gary Lord, a partner and the chief technologist of the information-risk-management practice at KPMG, one of the Big Five firms. Rather than looking at the costs associated with addressing privacy issues, "companies are asking, 'What is the cost of us not doing this?'"
I called a prominent privacy consultant, the legal scholar Fred Cate, to ask about this surge of mainstream corporate interest in providing privacy services. Cate teaches information law at Indiana University and is the author of Privacy in the Information Age (1997) and The Internet and the First Amendment (1998). "These days you're nobody if you're not a privacy consultant," he told me. "There's no law firm, no accounting firm, nobody at all, who's willing to admit to you that they don't know anything about privacy. It's big, big business. That's something to remember about all of us, you know: there are selfish reasons to beat the privacy drum in addition to whatever 'legitimate' reasons there are. I make money as a privacy consultant. These companies all make money because people are worried about privacy. The advocates raise funds because privacy's a key issue. And politicians get enormous positive ratings simply by talking about privacy."
A staggering number of bills to help rein in the corporate use of personal information are under consideration around the country. "People I've talked to on the Hill believe that privacy will be one of the hottest issues, if not the hottest issue, for the next three to five years," I was told by Stephen Lucas, a leading privacy-policy adviser who is also the CPO for Persona. "It's hard to find a person on the Hill who isn't involved in this issue in some way, shape, or form. During the hundred-and-fifth session of Congress more than a hundred privacy-related bills were debated at the federal level—and more than a thousand bills at the state level."
The flurry of interest certainly is remarkable. But Fred Cate, for one, worries about the wisdom of proceeding with such haste to address an issue that, because it is evolving so quickly, is vexingly hard to get a handle on. "People are being fed expectations—by the media, by politicians, by privacy advocates, by companies trying to sell privacy services—that they've never had before," Cate told me. "And as soon as somebody tells you to worry about something, it's hard not to. But it's not a bad idea to understand a subject before you regulate it. We're not talking about biological or chemical warfare here. Waiting another year would not be the end of the world. We've already got a gazillion laws that protect privacy, and to a certain extent they work or don't work based on your faith in the political system."
In Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (1998), the Yale political scientist James C. Scott has written, "The modern state, through its officials, attempts with varying success to create a terrain and a population with precisely those standardized characteristics that will be easiest to monitor, count, assess, and manage." This aspect of government is a natural one—personal information is as necessary and as valuable to efficient government as it is to efficient business. But there are plenty of people who take a decidedly dark view of it, and who therefore have very little faith in the ability of the political system to protect privacy. Recent revelations, in the midst of the rush to pass privacy-protection legislation, that the FBI has been developing and putting to use a sophisticated and questionably legal computer-eavesdropping system—regrettably called Carnivore—haven't helped matters any. In the resulting clamor references to Big Brother abound.
One person who has very little faith in the government's commitment to privacy, particularly when it comes to financial matters, is Robert Hettinga, an entrepreneur in Boston. I was referred to Hettinga by Phil Agre, who intriguingly described him as "the head cheerleader for a loose circle of financial cryptographers who want to build a parallel global financial system that the government cannot tax or audit." According to Agre, "This is not an implausible goal."
I met Hettinga one day last summer, at a luncheon meeting of the Digital Commerce Society of Boston, which he founded in 1995, and we struck up a conversation about privacy. Soon after, I called him on the phone to hear more of what he had to say. After making it clear to me that financial privacy wasn't really his goal but simply an inevitable result of his business plan, Hettinga explained the essence of his company, the Internet Bearer Underwriting Corporation (IBUC), which he established in 1999.
"The idea that financial privacy can be cheaper than transparency started out as a straw man," he told me, "but every time it got knocked down, it got back up a little bigger. Now it's the size of, you know, that marshmallow man in Ghostbusters. But anyway, here's my rant. I like to claim that the reason that we have to put up with taxes, and with regulations that invade our privacy, and with people calling us to sell stuff, and with spam and all that, is that we have to keep records of who we do business with, in case they lie to us. It's embedded in how we do business these days. It used to be that you would hand over a token in exchange for goods of some sort. That was called a bearer transaction. You didn't need to know anything about the person you were doing business with, because you knew what money looked like, and the transaction executed, cleared, and settled all at once. That's how cash works, by the way. It's anonymous and very efficient. Until World War Two it was the primary means of payment for all but the largest transactions. But when telephony and mainframe computers allowed transactions to execute and settle at a distance, we ended up making trades by recording them in a ledger or a database and then locking away whatever physical securities there were in a vault somewhere. That's called book-entry settlement. It's hugely invasive of privacy, but it's about three orders of magnitude cheaper than 'my Brinks truck to your cage,' so it's what we do."
Hettinga bubbled along exuberantly. "The rise of book-entry settlement has helped create a more invasive government, because the state, as lie enforcer, if you will, becomes an integral part of the entire economy. You have to know who it is you're doing business with so that you can send them to jail if they lie to you. But within the past fifteen years or so Internet financial cryptographers have created fairly good anonymous digital-cash protocols. This means, once again, that there's no need to know or care who you're doing business with, because transactions can execute, settle, and trade all at once. So now we're back to bearer transactions—Internet bearer transactions—and the costs of doing business can drop dramatically. You reduce transaction costs, and that reduces firm size—down to the device level, eventually. With Internet bearer settlement you don't need lawyers, or accountants, or even billing. All that goes away. The need for regulation goes down. In the end you get privacy because it's just cheaper, not because it's private."
Hettinga's sights are for the time being set on enabling what he described to me as a fraud-resistant form of "micropayments"—or, to use his exact words, "functionally anonymous bearer-cash systems that do very small streaming transactions." ("We can do down to a thousandth of a dollar fairly easily," he told me. "Probably a millionth of a dollar, sooner or later.") "Minting" money in these denominations was simply not cost-effective until recently, but there are already lots of potential applications. To take just one appealing example, Hettinga suggests that IBUC micropayments could resolve the current debate among consumers, musicians, and the music industry about the exchange of music over the Internet. Consumers would pay only a tiny amount each time they downloaded a piece of music, but collectively musicians and the music industry would still make plenty of money, perhaps even more than before.
Hettinga's ideas may seem radical, but they're nothing compared with what is perhaps the most extreme scheme now being put to the test for keeping private data out of the hands of government. A new Cypherpunk-motivated company, called HavenCo, has plans, already being implemented, to set up an offshore data haven on a rusting and abandoned World War II anti-aircraft platform some six miles east of the coast of England. In 1967, when the platform was considered to be outside British territorial waters, it was "occupied" by a former British army major named Roy Bates, who named it The Principality of Sealand and declared it an independent state. Bates soon designated himself, his wife, and their son as Sealand's "royal family," and began issuing passports, coins, and stamps. Things were generally lackluster for the new nation, however, until 1999, when representatives of HavenCo negotiated with Bates and his son for the right to place a data haven on Sealand—and got "specific hands-off guarantees," HavenCo's chief executive officer, Sean Hastings, told me in an e-mail, "from the local government authorities" (the royal family, that is). The result, Hastings claims, is that HavenCo is "making real privacy available."
A Matter of Trust
Making real privacy available? That's something that everybody now entering the privacy space claims to be doing, in one way or another. But most of what's happening at this point is still theoretical at best, and all sorts of difficult questions remain unanswered. How much legitimate cause for concern is there about privacy? Is it really something we should worry about—more than, say, racial tensions or global warming? Do we have less privacy than ever before—or do we have more? Are new technologies the problem—or the answer? Can privacy be good for business, and business good for privacy? Is it a commodity that can be bought and sold, or is it an inalienable human right?
The debate over these questions illustrates one irreducible truth: privacy is not so much a legal or technical concept as a social one. "The dominant feature of the current privacy debate," Fred Cate told me when I asked him to try to sum things up, "is its irrationality. The drivers are emotional." I think he's right. The crucial question about privacy today is the same as it has always been—namely, whom should you trust?
A lot of people instinctively don't trust technology, especially in the hands of businesses, to protect privacy. But, as Robert Ellis Smith and others have pointed out, contemporary notions of privacy have in many cases evolved not despite new technology but because of it. "Privacy," the influential journalist and editor E. L. Godkin famously wrote, in Scribner's magazine in 1890, "is a distinctly modern product, one of the luxuries of civilization." Phil Agre made a related point to me, a bit more bluntly. "The idea that technology and privacy are intrinsically opposed," he said, "is false."
There seems to be plenty of evidence to justify that claim. One of the earliest technologies, writing, enabled a new and enduring form of private communication. The printing press popularized reading, an intensely private affair. The wristwatch privatized time. Cheap and widely available mirrors allowed, literally, a new level of private self-reflection. The gummed envelope boosted expectations of privacy in the mail. The technological advances of the Industrial Revolution led to the creation of a prosperous middle class that could afford to build houses with separate rooms for family members. The single-party telephone line allowed for direct, immediate, and private communication at a distance. Modern roads and mass-produced automobiles made private travel possible. Television and radio brought news and entertainment into private homes.
Then, of course, came personal computers, the Internet, wireless devices, biological engineering, and more—the full effects of which on the evolution of our notions of privacy are yet to be determined. This evolution will be one of the more interesting developments to watch in the twenty-first century. Nothing is clear yet, of course; but if history is any guide, a good place to get a sense of what's to come will be the databases of the U.S. Patent and Trademark Office.