The successful breach of John Podesta’s email account is the latest high-profile hack to expose thousands upon thousands of private missives to the public. With the Sony Pictures debacle in the recent past and no prospect of perfect digital security, more breaches seem inevitable. So why do email providers, organizations, and individuals only seem interested in one approach to thwarting thefts?

A difficult-to-guess password, two-factor authentication, attentiveness to phishing schemes, and security features at companies like Google and Apple are all variations on a theme: All are attempts to secure an email account so that unauthorized folks can’t break in. Once a breach happens, the game is lost. But what if that were just the beginning?

* * *

Say that you’re 70 and you unexpectedly learn that you require surgery that will keep you in the hospital for a week. You adamantly don’t want any of your grandkids to find the will in your house that reveals who among them gets what. You suspect they’ll be snooping against your wishes. And you have 12 hours at home to prepare. You could pick a hiding spot that they probably wouldn’t guess but might find. You could put the will in a padlocked trunk and take the key with you.

But what if they still find some way into the trunk?

In fact, your grandson does find a way to remove the wooden bottom, look through its contents, replace them, and reseal the trunk without you even knowing.

Luckily, you anticipated this possibility. When he breached the trunk, he was able to lay hands on your will, in an envelope marked “Last Will and Testament.” But the trunk included 30 different envelopes that all said “Last Will and Testament.” Each held a list of very different bequests. Your grandson had no way of knowing which was the real will, though he held it in his hands. Only your lawyer knew which one was real.

Hiding something is one way to keep it secure. Overwhelming would-be snoops with plausible decoys is another way. Yet virtually no one’s email inbox is deliberately seeded with fake messages so that prying eyes cannot entirely know what is real.

Imagine a startup called Plausible Deniability LLC.

When I first conceived it years ago, I thought that the firm would be given access to the Google accounts of its customers, for the purpose of obscuring search histories. Over time, at random moments, it would perform searches generally regarded as embarrassing. “Donald Trump sex tape.” “Signs you have an Oedipus complex.” “Herbal cures for chlamydia, gluten free.” “What is Aleppo?” “Why is my credit score so low?” “Does huffing cat litter cause erectile dysfunction?”

The company would not store any data on the searches that it performed, only that a given person was a customer and therefore not necessarily responsible for a search.

I used to think Plausible Deniability could start off marketing itself to Stanford strivers who didn’t want to be blackmailed by a hacker during their 2032 Senate run. “My fellow Californians, I never searched for ‘molly delivery.’ Like some of you, I’m a customer of Plausible Deniability to protect my privacy from snoops.” But I didn’t want would-be politicians helped so much as the person who hesitates before conducting a search like “suicide helpline” or “free anonymous HIV testing.”

Success would’ve depended in part on whether Plausible Deniability became common knowledge. After all, some would be put off by the danger of signing up for a service that generates fake searches. What if everyone later believed that a fake was real? The concept would work only if the tactic became widely known and widely used, not just by outliers trying to hide a terrible secret.

In fact, that hurdle, along with the fact that profitability would spawn instant competitors, meant that the only likely path to a new norm of plausible deniability would’ve been a company like Google or Yahoo offering such a service free to its users. Had the idea expanded to email, John Podesta’s account would’ve included “sent emails” that he never actually sent, alongside the actual emails that make him and some of his correspondents look bad. (Would fake messages appear only in sent mail, or would contacts occasionally receive messages that they’re separately told to disregard, or messages that bypass their inbox but live in their archives?) If the fakes had ranged from the wildly implausible, like a note about how Hillary Clinton was the true killer of JFK, all the way to the fake but totally plausible, like a note to George Soros complaining about Stephen Bannon, Podesta would’ve gained plausible deniability.

To be clear, I would dislike that outcome in his case. I agree with Glenn Greenwald that knowing the contents of some Podesta emails is in the public interest. There would be lots of instances in which Plausible Deniability would be bad for journalists. It should certainly be illegal to use for official government business.  

But I wouldn’t mind if Plausible Deniability gave Podesta cover for the purely private emails in his archives—“Dear Larry, Just FYI, that was, indeed, one of the algorithmically generated decoy emails!”—and especially if it gave cover to private citizens when their communications are hacked and posted online. A price of the privacy that most people want is that a few bad actors aren’t fully exposed.

To me, the benefits that privacy confers on a free society are worth it.

Stepping back, the big idea combines pessimism about our ability to truly protect anything from being seen in today’s digital world with the insight that, in some cases, verifiably creating a fake personal-data trail, or contracting with a third party who provides that service, could offer plausible deniability that enhances privacy. I’ve also wondered whether, say, a feminist gaming journalist who keeps getting doxxed would find it easier to keep her current phone number or address from the Internet hordes if she flooded the web with fake phone numbers and addresses, in addition to trying to keep her actual contact information closely held. And if the Office of Personnel Management had kept 10 fake background files alongside every real one, maybe the OPM hack would’ve proved less costly.

It’s easy to imagine other iterations of this idea. No approach is without drawbacks and none offers perfect privacy, either. (In fact, it’s probably a good thing that there will always be ways to verify the realness of some information with enough effort.) Still, this largely unexploited approach seems to promise more personal privacy in a world where that good is rapidly disappearing in many unfortunate ways. We operate under the delusion that the traditional approach to information security is adequate. How many major hacks will it take to persuade us, at the very least, to send this article to a shortlist of contacts with a note that says, “For future reference, I’ve already started doing this for plausible deniability”?

That email might come in handy one day.