WannaCry: Sometimes you can blame the victims

Credit to Author: Ira Winkler | Date: Tue, 16 May 2017 05:46:00 -0700

The WannaCry ransomware attack has caused at least tens of millions of dollars in damage and taken down hospitals, and as of this writing, another round of attacks is considered imminent as people show up to work after the weekend. Of course, the perpetrators of the malware are to blame for all the damage and suffering that has resulted. It’s not right to blame the victims of a crime, right?

Well, actually, there are cases when victims have to shoulder a portion of the blame. They may not be criminally liable as accomplices in their own victimhood, but ask any insurance adjuster whether a person or institution has a responsibility to take adequate precautions against actions that are fairly predictable. A bank that leaves bags of cash on the sidewalk overnight instead of in a vault is going to have a hard time getting indemnified if those bags go missing.

I should clarify that in a case such as WannaCry, there are two levels of victims. Take the U.K.’s National Health Service, for example. It was badly victimized, but the real sufferers, who are indeed blameless, are its patients. The NHS itself carries some blame.

WannaCry is a worm introduced into victims’ systems via a phishing message. If a user clicks on the phishing message and the system has not been properly patched, it becomes infected, and if it has not been isolated, the malware seeks out other vulnerable systems to infect. Because it is ransomware, the infection encrypts the system’s files, leaving it essentially unusable until a ransom is paid and the files are decrypted.
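To make that worm behavior concrete: WannaCry’s lateral movement exploited SMB, which listens on TCP port 445. The following is a minimal Python sketch, purely my own illustration and not part of the attack tooling, that scans a hypothetical subnet for hosts exposing that port. Any host it flags would deserve a patch check or isolation. The address range is an assumption; adjust it for your own network.

```python
import socket

# Hypothetical subnet used only for illustration; substitute your own range.
HOSTS = [f"192.168.1.{i}" for i in range(1, 255)]
SMB_PORT = 445  # WannaCry's worm component spread via SMB on this port

def smb_port_open(host, timeout=0.5):
    """Return True if the host accepts connections on TCP/445."""
    try:
        with socket.create_connection((host, SMB_PORT), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

if __name__ == "__main__":
    for host in HOSTS:
        if smb_port_open(host):
            print(f"{host} exposes TCP/445 -- verify it is patched or isolated")
```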

Here’s a key fact to consider: Two months ago, Microsoft issued a patch (security bulletin MS17-010) for the vulnerability that WannaCry exploits. Systems to which that patch had been applied did not fall victim to the attack. Decisions had to be made, or not made, to keep that patch off the systems that ended up compromised.
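For an administrator wondering whether a given Windows machine has the fix, a quick check is possible. The sketch below is illustrative only: the relevant KB number varies by Windows version, and KB4012212 shown here is the March 2017 security-only update for Windows 7 / Server 2008 R2. It queries installed hotfixes via the WMIC command-line tool.

```python
import subprocess

# KB4012212 is the March 2017 security-only update for Windows 7 /
# Server 2008 R2 containing the MS17-010 fix; other Windows versions
# use different KB numbers, so adjust accordingly.
MS17_010_KB = "KB4012212"

def hotfix_installed(kb_id):
    """List installed hotfixes via WMIC and look for the given KB."""
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return kb_id in output

if __name__ == "__main__":
    if hotfix_installed(MS17_010_KB):
        print(f"{MS17_010_KB} present -- MS17-010 appears to be patched")
    else:
        print(f"{MS17_010_KB} missing -- system may be vulnerable")
```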

The security practitioner apologists who say you should not blame organizations and individuals for being hit try to explain away those decisions. In some cases, the systems that were hit were medical devices whose vendors would withdraw support if the systems were updated. In other cases, the vendor is out of business, so if an update broke the system, there would be no one to fix it. Some applications are so critical that there can be absolutely no downtime, and patches require at least a reboot. Besides all that, patches have to be tested, and that can be expensive and time-consuming; two months just isn’t enough time.

These are all specious arguments.

Let’s start with the claim that these were critical systems that couldn’t be shut down for patching. I’m sure some of them were indeed critical, but we’re talking about something like 200,000 affected systems. All of them were critical? It doesn’t seem likely. But even if they were, how do you argue that avoiding planned downtime is better than opening yourself up to the very real risk of unplanned downtime of unknown duration?

And this very real risk is widely recognized at this point. The potential for damage from wormlike viruses has been well established. Code Red, Nimda, Blaster, Slammer, Conficker and others have caused billions of dollars of damage. All of these attacks targeted unpatched systems. Organizations cannot claim that they did not know the risk they were taking by not patching systems.

But let’s say some systems really couldn’t be patched, or needed more time. There are other ways to mitigate the risk, also referred to as compensating controls. For example, you can isolate vulnerable systems from other parts of the network or implement whitelisting (which limits programs that can run on a computer).
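As one concrete example of a compensating control, an unpatchable Windows host can be isolated at its own firewall. The sketch below is illustrative, not a complete hardening procedure, and assumes it is run from an elevated (administrator) shell on the affected host. It uses Python to add a Windows Firewall rule via netsh that blocks inbound SMB traffic, closing off the worm’s lateral-movement path.

```python
import subprocess

# Block inbound SMB (TCP/445) at the host firewall so a worm on the
# local network cannot reach this machine's SMB service.
RULE = [
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=BlockInboundSMB445",
    "dir=in", "action=block", "protocol=TCP", "localport=445",
]

if __name__ == "__main__":
    subprocess.run(RULE, check=True)  # requires an elevated shell
    print("Inbound TCP/445 is now blocked on this host")
```

Network-level segmentation, such as blocking port 445 between VLANs at a switch or perimeter firewall, accomplishes the same thing for whole groups of machines.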

The real issue is budget: underfunded and undervalued security programs. I doubt that a single unpatched system would have been left unprotected if security programs had been allocated the appropriate budget. With enough funding, patches could have been tested and deployed, and incompatible systems could have been replaced. At the very least, next-generation anti-malware tools such as Webroot, CrowdStrike and Cylance, which were able to detect and stop WannaCry infections proactively, could have been deployed.

So I see several scenarios for blame. If security and network teams never considered the well-known risks associated with unpatched systems, they are to blame. If they did consider the risk but their recommended solutions were rejected by management, management is to blame. And if management’s hands were tied because its budget is controlled by politicians, the politicians get a share of the blame.

But there’s plenty of blame to go around. Hospitals are regulated and have regular audits, so we can blame the auditors for not citing failures to patch systems or to have other compensating controls in place.

Managers and budget appropriators who undervalue the security function have to understand that when they make a business decision to save money, they are assuming risk. In the case of hospitals, would they ever decide that they just don’t have the money to properly maintain their defibrillators? It’s unimaginable. But they seem to be blind to the fact that properly functioning computers are also critical. Most of the WannaCry infections resulted from the people responsible for those computers simply failing to patch them as part of a systematic practice, without any justification. And if they did consider the danger, they apparently chose not to implement compensating controls either. It all adds up to potentially negligent security practices.

As I write in Advanced Persistent Security, there is nothing wrong with making a decision to not mitigate a vulnerability if that decision is based upon a reasonable consideration of the potential risk. In the case of decisions to not properly patch systems or implement compensating controls, though, we have more than a decade of wake-up calls to demonstrate the potential for loss. Unfortunately, too many organizations apparently hit the snooze button.
