If you're not paying for a product, the saying goes, then you're the product being sold.
Another way of saying this is that you and Google -- and you and Twitter, and you and Facebook -- do not enjoy an egalitarian relationship. The digital world, given its crazy capacity to scale, takes the core, crusty logic of the consumer/producer relationship -- the customer is always right -- and turns it on its head. It puts the power almost singularly on the side of the product-providers.
For the most part, we consumers have very few modes of recourse when it comes to dealing with the companies that feed our digital addictions. It's either shut our accounts or shut our mouths. There aren't many options in between.
There may be a way to change that, though. In an essay in The Nation, the philosophy professor Evan Selinger and the legal scholars Ari Melber and Woodrow Hartzog make the case for re-imagining the digital contracts that govern our relationships with the companies that provide so many of our experiences of the Internet. A "People's Terms of Service," they argue, would attempt to reframe standardly opaque TOS agreements -- arrangements that often amount, the scholars say, to "contract abuse" -- in order to make them more equitable to consumers.
One thing a "People's Terms of Service" would challenge is the Objective Theory of Contract, the doctrine that attempts to ignore the context in which contracts are negotiated and agreed upon. "The law currently protects one-sided contract arrangements," the authors write, "by assuming they were fairly negotiated, and thus reflect a 'meeting of the minds' by equal parties." Yet that assumption, in a world of boilerplate jargon and pages-long disquisitions, is no longer a fair one to make. "After all," they argue, "these contracts are usually created through user confusion and one-sided demands. How can citizens even bargain with a standard, take-it-or-leave-it form?"
The People's Terms of Service Agreement, instead, would be informed by consumers' desires as well as corporations'. It would be user-friendly. It would "use plain English, not legal jargon." It would be short enough for people to realistically be able to read it. Most of all, it would embrace five values: security, confidentiality, transparency, permanency, and respect for intellectual property.
Permanency: The contract cannot be unilaterally altered, period. This seemingly obvious rule, which applies to most contracts, has been undermined by technology companies that attempt to reserve the right to alter their terms of service without meaningful consent from users. A People's Terms of Service would require meaningful opt-in from users for any material changes. Users would also retain the right to have all their materials permanently deleted if they choose to leave the site, and that provision could never be altered.
Transparency: Companies promise to be transparent and provide meaningful notice to individuals regarding the collection, use, dissemination, and maintenance of their personal information.
Intellectual Property: Companies respect the value of an individual's name or likeness and the copyrights of user information and work posted on the sites by taking minimal licenses for administrative use, or by providing some profit-sharing for advertisements or other commercial use of a user's name or likeness. (This is an important option for companies building a business model on monetizing user-generated content.)
Confidentiality: Companies promise not to disclose personal information to third parties, unless users meaningfully opt-in to such disclosure for each party. (As always, they would still respond to government and legal requests.) Facebook itself makes a similar promise. To ensure protection, companies also promise to contractually ensure that recipients of personal information are obligated to respect the anonymity of any transferred data sets, and to provide users with control over data portability so it's easier to leave a given service.
Security: Companies promise to use appropriate, industry-standard security safeguards to protect all media against risks such as loss, unauthorized access or use, destruction, modification, or unintended disclosure.
Those core ideas, the scholars say, are a starting point -- values that interested users and consumer advocates could debate and then, perhaps, draft into a model contract. The goal would be to leverage the force of collective action to counter the power enjoyed by digital behemoths. "The power of even a few million social network users, when coordinated," the authors write, "could push companies to offer more consumer rights as a good business decision." (The idea was inspired, in part, by recent class-action suits against Google and Instagram and Facebook -- the last of which accused the company of co-opting users' identities in online ads. In response to that action, Facebook revised its "Statement of Rights and Responsibilities," in addition to offering the plaintiffs a $20 million settlement.)
And while the authors acknowledge that consumer power won't be easy to leverage -- you'd need an enormous community even to come close to matching Google's clout -- they believe people have to try. "We're finally moving past the simplistic notion," they argue, "that one-sided corporate agreements are an unavoidable 'cost' of using social media -- as if every company's corporate policy must be accepted as the automatic baseline." That's not how we regulate powerful companies like BP, they point out. "Why should our attitudes be more lax towards Google?"