On Friday, millions of connected devices—webcams, routers, DVRs—banded together to attack a fundamental cornerstone of the internet’s infrastructure. It happened suddenly, without the knowledge of the gadgets’ owners, and it kept going for hours.
Using malware called Mirai, hackers commandeered insecure internet-connected devices worldwide and instructed them to throw relentless, repeated bursts of data at a target—this time, a Domain Name System (DNS) service that translates web addresses into the numeric addresses computers use to route traffic—with the goal of overwhelming it. (This is called a distributed denial-of-service, or DDoS, attack.) Brian Krebs, a prominent security expert and independent journalist, suffered a Mirai-powered attack on his website just last month. Shortly afterward, the malware’s source code was released publicly for anyone to access, leaving little doubt we’ll see more of Mirai in the near future.
Is there any way to prevent repeat performances? Ideally, law enforcement would go after the hacker who launched the attack, but it can be hard to attribute a distributed assault like this to a perpetrator, who might have orchestrated it from some overseas location that’s out of easy reach for the U.S. government.
One idea would place more legal responsibility on the manufacturers that produce connected devices—and hold them accountable if their products were implicated in cybercrime.
The best candidate to nudge those manufacturers toward better security is likely the Federal Trade Commission, which first convened a workshop about “internet of things” security three years ago, eventually publishing a detailed report of its findings in 2015.
But despite its sustained focus on connected devices, the closest the FTC has gotten to punishing a company for selling insecure products was earlier this year, when it settled charges with ASUS over flawed internet routers. Michael Zweiback, an attorney at Alston & Bird and a former federal prosecutor, thinks the agency’s caution is a missed opportunity. “Instead of talking about the future prospects of what the internet of things actually is going to mean from a security standpoint, I think they have to act,” Zweiback said.
Even if the FTC did immediately start throwing some enforcement muscle around, the sea of poorly secured connected devices already out in the world will continue to haunt us for some time. Gadgets with default passwords hard-wired into their firmware, which users find nearly impossible to change, can’t be remotely patched to keep them from being exploited again. That’s why a Chinese electronics manufacturer said this week that it would recall millions of its webcams, which were found to have participated in Friday’s attack.
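To see why a hard-wired default password is so dangerous, consider this minimal sketch (purely illustrative, not any real device’s firmware): the factory credential is baked into the program itself, so no settings change by the owner can remove it.

```python
# Illustrative sketch: a login check with a factory default compiled
# into the firmware image. The device owner can set their own password,
# but the baked-in default keeps working regardless.

HARDCODED_PASSWORD = "admin"  # fixed at manufacture; not stored in settings

def check_login(supplied, user_password=None):
    """Accept the owner's configured password, but ALSO the factory default."""
    if user_password is not None and supplied == user_password:
        return True
    # Accidental backdoor: the hard-wired default always works,
    # no matter what the owner configured.
    return supplied == HARDCODED_PASSWORD

# Even after the owner sets a strong password...
assert check_login("correct horse battery staple", "correct horse battery staple")
# ...a Mirai-style scanner trying the well-known factory default still gets in:
assert check_login("admin", "correct horse battery staple")
```

Because `HARDCODED_PASSWORD` lives in the code rather than in writable configuration, removing it requires shipping entirely new firmware—exactly why a recall, rather than a remote patch, was the manufacturer’s only real option.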
The history of the internet of things reveals some hints about why so many connected gizmos are virtual time bombs. “[The internet of things] very much parallels the way the internet grew up,” said Edward McAndrew, an attorney at Ballard Spahr and former federal cybercrime prosecutor. “We rushed to it so quickly that security was largely left behind—in part because it was so awesome. Who doesn’t need a refrigerator that can reorder milk for you on demand?”
Many companies failed to prioritize security in the design process as they rushed new connected devices to market. McAndrew and Zweiback suggest that the FTC could target those flaws with its power to investigate and punish “unfair or deceptive acts or practices” that cause harm to consumers.
When it comes to a botnet—a zombie horde of devices that have been hijacked to do a hacker’s bidding—things can get more complicated. Who can claim harm when millions of unsuspecting webcams and DVRs start attacking a single target? It might be the person who bought the device, says McAndrew, because it runs more slowly or doesn’t function as intended. Or it could be the target of the coordinated attack: Krebs, for example. Last week, the internet infrastructure that came under siege belonged to a company called Dyn—but the ensuing outage affected millions across the U.S. Who can claim injury there? The answer isn’t yet clear.
There might be an alternative to government action: Perhaps an individual or a company could sue manufacturers of faulty devices directly for their negligence. Steve Rubin, a cybersecurity lawyer at Moritt Hock & Hamroff, says the legal framework for such a suit may already exist in tort and contract law. A manufacturer would be in breach of contract, for example, if it sold a product it claimed was safe but wasn’t.
A civil suit against a manufacturer for leaving its products vulnerable to botnets would take a “smart and creative lawyer,” said McAndrew. “They would be in uncharted territory.”
Without some sort of legal risk for device manufacturers that put out faulty and dangerous machines, the lawyers agreed, it could be very hard to raise the standard of internet-of-things security. (Of course, for attorneys who specialize in cybersecurity, more internet-security regulation usually means more work.)
Regulation—especially in dizzyingly fast-developing technology—always comes with drawbacks. More rules and legal risks could mean slower development, higher prices, and more daunting barriers to entry for startups hoping to hop aboard the internet-of-things train. And government hasn’t shown that it can keep pace with nascent technologies, so creating a flexible enough framework that protects consumers while leaving room for growth would be a formidable challenge.
But if regulation could avert a catastrophic denial-of-service attack in the future—one that brings down an electric grid, for example, or darkens the internet at a sensitive time like Election Day—it might be well worth the cost.