A couple of weeks ago, ahead of her appearance at the Aspen Ideas Festival, Josephine Wolff, a professor at the Rochester Institute of Technology, wondered when it might be appropriate to punish careless computer users for their unwitting role in enabling cyberattacks. “Very rarely do we grapple with the question of whether, perhaps, the only way to get individuals to take this seriously and actually change their behavior––to be more attentive to issues of security––is if there are concrete penalties and consequences associated with participating in bots, falling for phishing attacks, failing to install security updates, and other basics of computer hygiene,” Wolff wrote.
Many readers begged to differ. Vincent Williams has moral and practical objections to the proposal:
If you punish people for getting hacked, sure, over time you may force botnets to shrink or see positive results by whatever metric you select, but almost assuredly you will first see a contraction in the number of Internet users in the world. People use the Internet because it is convenient, and they own Internet-connected devices because those are the most convenient way to harness it. When you fine people for being deemed negligent in their use of something they use precisely because it is convenient, it loses its value. And once it has lost its value, people will abandon those devices, and in many ways the Internet itself.
If the Internet is intended to connect people, how are we aiding in the fulfillment of that goal when we take actions that have a high likelihood of leading to people disconnecting? How is that good for businesses that generate large percentages of their revenue via the Internet?
Beyond the economic and philosophical reasons this is a terrible idea, it is untenable on ethical grounds: to punish people for getting hacked is just plain unethical.
If I do something I know to be wrong and I get caught, I expect there to be negative consequences. However, if I am minding my own business and using a device in a way I believe to be correct and within the bounds of the law, and subsequently am fined, that is unexpected. In my mind, the negative outcome for the user is unjust.
The tenor of the article leads me to believe it came from an academic who has been using and thinking about modern technology for so long that she can no longer see how complex it looks to everyone else. The dismissive tone Ms. Wolff takes in describing the common attack vectors hackers use when assembling botnets, and what she calls “basic computer hygiene,” screams that she has no grasp of how the typical user views their Internet-connected devices. When a typical layperson purchases a device such as a smartphone or laptop, they unbox it, charge it if need be, power it on, and expect it to work. They don’t know that a piece of software needs an update unless it creates a dialog box telling them so. Hell, their only conception of software versioning is words like Snow Leopard and Jelly Bean and strings of inscrutable dotted-decimal numbers.
Ms. Wolff obviously doesn’t understand that she and the technologically initiated like her are modern-day wizards, keepers of arcane knowledge and mysterious methods both revered and reviled. My advice to Ms. Wolff is that she take a field trip down to her school’s tech-support desk and listen in on some calls, or even take some herself. Even then, she will need to keep in mind that these are college students calling in, individuals who, for the most part, have grown up surrounded by technology and come from above-average socioeconomic backgrounds. Then perhaps she will understand the state of the public’s cybersecurity acumen and see why we so often focus on education and awareness when contemplating how to combat the rise of cybersecurity threats.
Charles McGuiness questions how much blame end-users deserve:
How can it be a thing that clicking on a link can cause such harm? Why are systems sold to home users with this weakness? Why must the users be constantly mindful of the weaknesses of their system and serve as the firewall of last resort? And why is the professor focused on home users, who may contribute one computer to a botnet, but not on corporate or government users, whose mistakes can put millions of people at risk for fraud and theft?
I have a degree in computer science and decades of experience as a practicing professional. I know, exactly and precisely, how these trojans spread, how the software is written, and how the hackers trick users into launching malware. I administer and defend servers on the Internet that register attacks with dizzying frequency.
But I do not share the urge to blame the victims, who can range anywhere in age from pre-teen to geriatric and cannot be expected to read Krebs on Security on a regular basis. I do not see the point of assigning them liability when software vendors (e.g. and almost always i.e. Microsoft) seemingly have none. And what liability do corporations or governments face for the lapses in security that lead to massive breaches? Clearly not enough to stop this from happening over and over again.
The battle between hackers and users is asymmetrical, like the battle against terrorists, and what the professor proposes is blaming the unlucky. Let’s assign liability to the vendors and corporations that allow these things to happen in the first place. That will eliminate the problem of “negligent” users.
Mark D. Silverschotz spoke up for senior citizens:
We don’t hold people liable for negligence unless there is a failure to meet a required standard of care and a consequent injury to one to whom that duty was owed. So, may we presume that Professor Wolff now seeks to define that care standard for all PC/Mac users? Good luck.
For we civilians aged 60 +/- 10, who have struggled with PCs and Macs for the better part of 30 years, who have never had meaningful training in any aspect of digital life, nor the benefit of growing up during the digital age (surrounded by peers in a junior high or high school or college), and who have, you know, been “working” at our “jobs,” the idea that we somehow are supposed to know what constitutes good “computer hygiene,” much less “poor,” is unrealistic.
And please don’t tell me just to Google it.
This is like politicians who tell laid off manufacturing workers in their 50s that they should “retrain” for technology jobs or tell laid-off middle managers that they should sign up for Code Academy. Professor Wolff says: “Very rarely do we grapple with the question of whether, perhaps, the only way to get individuals to take this seriously and actually change their behavior––to be more attentive to issues of security––is if there are concrete penalties and consequences associated with participating in bots, falling for phishing attacks, failing to install security updates, and other basics of computer hygiene.” (Emphasis added)
Now, I know what a “bot” is (assuming it’s a bit of software that generates automatic comments, emails, etc. that clog up cyberspace), but I have zero idea of how one “participates” or—more to the point—that “participating in bots” is even something that exists. I know what phishing is, but the phishers are getting increasingly sophisticated, no longer sending out random “Hi Phil” emails (hoping to reach an actual “Phil”), and are able to “spoof” humans and companies known to the target.
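(The “spoofing” the reader describes often exploits the gap between an email’s friendly display name, which most mail clients show prominently, and the underlying address, which they may hide. A minimal Python sketch using the standard library’s header parser; the bank name and addresses here are invented for illustration:)

```python
from email.utils import parseaddr

# A phisher can set any display name they like; only the address
# after the display name reflects where the mail actually came from.
display_name, address = parseaddr(
    '"Chase Customer Service" <alerts@chase-secure-login.example>'
)

print(display_name)  # Chase Customer Service
print(address)       # alerts@chase-secure-login.example

# A crude (and far from foolproof) red flag: the display name invokes
# a company whose real domain does not appear in the actual address.
suspicious = "chase.com" not in address.lower()
print(suspicious)    # True
```

A client that shows only “Chase Customer Service” leaves the target with nothing visible to check, which is part of why even careful users get fooled.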
What level of care does Professor Wolff propose? Do I need to call everyone who sends me an email to make sure it’s legit?
Regarding the installation of security updates, here is where my teeth get set on edge. I have no idea how to “install a security update.” I have no idea how to install anything. I have never updated a browser, “changed” my browser, or added a browser (meaning I’ve only used factory-installed versions of IE and Safari), and I am the kind of person who is the target of “When you visit your parents at Thanksgiving, update their browser” articles. And I seem to recall articles warning users NOT to “autoupdate” Adobe Acrobat because of flaws in certain updates that left users susceptible to hackers. Right? Damned if we do, damned if we don’t.
So, what Professor Wolff is saying is either: a) Hold liable all the old people who don’t understand what she’s talking about, or b) Create different standards of care for different people, based on their age and sophistication. I doubt she’s suggesting the latter.
In fairness, I think I do understand what she is suggesting. She wants to treat computers like cars. You own a car, you have a license, you cause an accident, you’re liable. You get too old to drive without hurting people? Tough luck. Don’t drive.
I would be much more sympathetic to her argument if there was a standard of instruction and a readily discernible formal, universal, comprehensible, and stable base of knowledge that was available. As in driver education.
But there isn’t, and won’t be.
Lastly, Gary Warner of the University of Alabama shares what his students thought about the proposal:
I am teaching a class (Practical Malware Analysis) to 32 computer science majors, and I also run a computer forensics research lab that investigates cybercrime and provides intelligence to law enforcement agencies. I encouraged some of the students to weigh in on this topic after class last week.
The majority of phishing sites are hosted on compromised servers, where the webmaster’s negligence often allows a trivial “mass-defacement” tool to add content to the site that imitates a banking website or an online file-sharing or email service. When a user gives up their userid and password to the criminal and then has their account abused in any of a variety of ways (loss of funds through direct withdrawal, “trusted” spam sent to everyone in their address book, or elaborate corporate-targeted fraud such as the Business Email Compromise campaigns that have now netted $3.1 billion in losses), should the original webmaster be held negligent?
This form of negligence also leads to the hosting of malware infrastructure. Tens of thousands of websites have been compromised, primarily through webmasters neglecting to patch vulnerable services running on their servers, choosing poor passwords, falling for a password-stealing phishing scam themselves, or having their personal workstations compromised by password-stealing malware.
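(The “neglecting to patch” failure mode above boils down to a simple comparison: is the installed release older than the earliest release that fixes a known vulnerability? A sketch in Python; the version numbers are invented, and real patch management involves far more than this:)

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '4.7.2' into (4, 7, 2)
    so that versions compare numerically, component by component."""
    return tuple(int(part) for part in v.split("."))

def is_unpatched(installed: str, minimum_patched: str) -> bool:
    """True if the installed release predates the first patched release."""
    return parse_version(installed) < parse_version(minimum_patched)

print(is_unpatched("4.7.1", "4.7.2"))   # True  -- vulnerable, needs updating
print(is_unpatched("4.7.10", "4.7.2"))  # False -- 10 > 2 numerically
```

The second case is why the tuple conversion matters: comparing the raw strings would wrongly flag "4.7.10" as older than "4.7.2", since "1" sorts before "2" character by character.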
Some of these servers then have addresses advertised in spam email messages as important links to click, which lead to a series of files being downloaded and executed resulting in full control of the victim computer being granted to a cybercriminal. When these computers are internal to large networks, they can then be used as a foothold to create a stronghold from which large data breaches or theft of intellectual property may occur.
Some of the students pointed out that a plethora of security controls would all have to fail for someone’s userid and password to be successfully used to transfer funds from a bank, and that it would be hard to place blame for that entire sequence of failed controls solely on the person whose website was breached as the starting point of the attack. They pointed out that the person who clicked on the link that led to their system being infected, or who provided their userid and password to a strange website, might be considered equally negligent.
Some of the students pointed out that many “webmasters” (and does that term really apply to someone hosting a $5.99 hobbyist or vanity website?) are relying on their hosting company to provide security, and asked whether the webmaster might be able to pass the liability on to their hosting provider.
Others pointed out that some hacking victims were clearly “negligent” when measured against “industry standard security controls,” doing stupid things like using passwords such as “123456” or “password,” but asked whether they would not be cleared in the case of a “Zero Day Attack.” As an example, we discussed the DDoS attacks against the major banks that were largely carried out by very high-bandwidth servers compromised through a Red Hat Enterprise “Zero Day”––meaning that every server of that type was vulnerable, but the vendor had not yet released a patch. Such Zero Days have occurred in every major component of web hosting, from the operating systems (Linux or Windows) to the web servers (Microsoft IIS and Apache) to the content management systems and plugins (WordPress and Joomla) that run on those servers. Given that, why would the webmaster be held responsible but not the creator of the vulnerable software itself?
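(The “stupid passwords” end of that spectrum is the one piece that is easy to check mechanically. A toy sketch of such a check in Python; the five-entry denylist stands in for the breached-password corpora real systems consult, which contain hundreds of millions of entries, and the threshold is illustrative:)

```python
# Tiny stand-in for a real breached-password list.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111"}

def is_trivially_weak(password: str) -> bool:
    """Flag passwords that appear on the denylist or are very short."""
    return password.lower() in COMMON_PASSWORDS or len(password) < 8

print(is_trivially_weak("123456"))                       # True
print(is_trivially_weak("correct horse battery staple")) # False
```

A check like this can establish negligence at one extreme, but, as the students note, it says nothing about the zero-day extreme, where no amount of user care would have helped.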
So the only conclusion we were able to reach was that, because:
- the average consumer has little understanding of security,
- the range of vulnerabilities stretches from “stupid passwords” to extremely sophisticated zero-day attacks, and
- most types of attack require a long chain of failed security controls, each involving some negligence and some shared responsibility,

it would be nearly impossible to craft a “one size fits all” rule holding consumers who host websites liable when their websites are hacked and used for attacks. That may be an entirely unacceptable answer, but I am afraid that is the situation in which we find ourselves.
Thanks to Professor Wolff and all who responded to her proposal for the stimulating debate.