Saturday, May 27, 2017

Malware prophecy realized: WannaCry, NSA EternalBlue, CIA Athena, and more

You probably noticed massive news coverage of the recent outbreak of malicious code called WannaCryptor, WannaCry, Wcry, and other variations on that theme. In fact, WannaCry itself was a variation on a theme, the ransomware theme. WannaCry made so much noise because it added a powerful worm capability to the basic theme of secretly encrypting your files and holding them for ransom. Plausible estimates of the cost of this malware outbreak to organizations and individuals range from $1 billion to $4 billion.

All of this was made possible by something called the EternalBlue SMB exploit, computer code developed at the expense of US taxpayers by the National Security Agency (NSA). Now that copies of this malicious code have been delivered to hundreds of thousands of information system operators in more than 100 countries around the world, it might be wise to ask: "How did this happen?"
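To make the attack surface a little more concrete: EternalBlue abuses Microsoft's SMB service, which listens on TCP port 445, and WannaCry spread by probing for machines that exposed that service to the network. As a rough illustration only, and not anything taken from the NSA code or from WannaCry samples, here is a minimal Python sketch of the kind of visibility check a defender might run against addresses they own and are authorized to test, to see which hosts answer on the SMB port. The address range in the example is hypothetical.

# Minimal sketch, for illustration only: find hosts on an address range
# you own that are listening on TCP 445, the SMB port targeted by the
# EternalBlue exploit that WannaCry used to spread.
import socket

def smb_port_open(host, timeout=1.0):
    """Return True if the host accepts a TCP connection on port 445 (SMB)."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    hosts = ["192.168.1.%d" % n for n in range(1, 11)]  # hypothetical range
    for host in hosts:
        if smb_port_open(host):
            print(host + ": SMB (port 445) reachable - confirm SMBv1 is "
                  "disabled and the MS17-010 update is installed")

The widely publicized mitigations at the time were applying Microsoft's MS17-010 update and disabling SMBv1; the point of a simple check like this is just to know where that exposure exists before someone else finds it.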

How did this happen?

Unfortunately, I am not privy to any of the technical details about how this happened beyond those already published by my colleagues in the cybersecurity profession (there's a good collection of information on We Live Security, a site maintained by my employer, ESET). However, in practical terms I do know how this happened, and it goes like this:
  1. The NSA helps defend the US by gathering sensitive information. One way to do that is to install software on computers without the knowledge or permission of their owners. 
  2. Installing software on computers without the knowledge or permission of their owners has always been problematic, not least because it can have unexpected consequences as well as serve numerous criminal purposes, like stealing or destroying or ransoming information.
  3. Back in the 1980s there were numerous attempts to create self-replicating programs (computer worms and viruses) that inserted themselves on multiple computers without permission. Many of these caused damage and disruption even though that was not the intent of their designers.
  4. Programs designed to help computer owners block unauthorized code were soon developed. These programs were generically referred to as antivirus software although unauthorized code was eventually dubbed malware, short for malicious software.
  5. The term malware reflects the overwhelming consensus among people who have spent time trying to keep unauthorized code off systems that "the good virus" does not exist. In other words, unauthorized code has no redeeming qualities and all system owners have a right to protect against it.
  6. Despite this consensus among experts, which has grown even stronger in recent years due to the industrial scale at which malware is now exploited by criminals, the NSA persevered with its secret efforts to install software on computers without the knowledge or permission of their owners. 
  7. Because the folks developing such code thought of it as a good thing, the term "righteous malware" was coined (definition: software deployed with intent to perform an unauthorized process that will impact the confidentiality, integrity, or availability of an information system to the advantage of a party to a conflict or supporter of a cause).
  8. Eventually, folks who had warned that righteous malware could not be kept secret forever were proven correct: a whole lot of it was leaked to the public, including EternalBlue.
  9. Criminals were quick to employ the "leaked" NSA code to increase the speed at which their malicious code spread, for example using EternalBlue to help deliver cryptocurrency mining malware as well as ransomware.
  10. Currently there are numerous other potentially dangerous taxpayer-funded malicious code exploits in the hands of US government agencies, including the CIA (for example, its Athena malware is capable of hijacking all versions of the Microsoft Windows operating system, from XP to Windows 10).
So that's how US government-funded malware ends up messing up computers all around the world. There's nothing magical or mysterious about it, just a series of chancy decisions that were consciously made in spite of warnings that this could be the outcome.

Warning signs

One such warning was the paper about "righteous malware" that I presented at the 6th International Conference on Cyber Conflict (CyCon), organized by the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) in Estonia. You can download the paper here. My co-author on the paper was Andrew Lee, CEO of ESET North America, and we have both spent time in the trenches fighting malicious code. We were well aware that antivirus researchers had made repeated public warnings about the risks of creating and deploying "good" malware.

One comprehensive warning was published back in 1994, by Vesselin Bontchev, then a research associate at the Virus Test Center of the University of Hamburg. His article titled "Are 'Good' Computer Viruses Still a Bad Idea?" contained a handy taxonomy of reasons why good viruses are a bad idea, based on input from numerous AV experts. Andrew and I summarized these in a table in our paper:

Technical Reasons
  - Lack of Control: spread cannot be controlled, unpredictable results
  - Recognition Difficulty: hard to allow good viruses while denying bad
  - Resource Wasting: unintended consequences (typified by the “Morris Worm”)
  - Bug Containment: difficulty of fixing bugs in code once released
  - Compatibility Problems: may not run when needed, or cause damage when run
  - Effectiveness: risks of self-replicating code over conventional alternatives

Ethical and Legal Reasons
  - Unauthorized Data Modification: unauthorized system access or data changes are illegal or immoral
  - Copyright and Ownership Problems: could impair support or violate copyright of regular programs
  - Possible Misuse: code could be used by persons with malicious intent
  - Responsibility: sets a bad example for persons with inferior skills or morals

Psychological Reasons
  - Trust Problems: potential to undermine user trust in systems
  - Negative Common Meaning: anything called a virus is doomed to be deemed bad

We derived a new table from this, one that accounted for both self-replicating code and inserted code, such as trojans. Our table presented the "righteous malware" problem as a series of questions that should be answered before such code is deployed:

  - Control: Can you control the actions of the code in all environments it may infect?
  - Detection: Can you guarantee that the code will complete its mission before detection?
  - Attribution: Can you guarantee that the code is deniable or claimable, as needed?
  - Legality: Will the code be illegal in any jurisdictions in which it is deployed?
  - Morality: Will deployment of the code violate treaties, codes, and other international norms?
  - Misuse: Can you guarantee that none of the code, or its techniques, strategies, or design principles, will be copied by adversaries, competing interests, or criminals?
  - Attrition: Can you guarantee that deployment of the code, including knowledge of the deployment, will have no harmful effects on the trust that citizens place in their government and institutions, including electronic commerce?

Clearly, the focus of our paper was the risks of deploying righteous malware, but many of those same risks attach to the mere development of righteous malware. Consider one of the arguments we addressed from the "righteous malware" camp: "Don't worry, because if anything goes wrong nobody will know it was us that wrote and/or released the malware". Here is our response from that 2014 paper:
This assertion reflects a common misunderstanding of the attribution problem, which is defined as the difficulty of accurately attributing actions in cyber space. While it can be extremely difficult to trace an instance of malware or a network penetration back to its origins with a high degree of certainty, that does not mean “nobody will know it was us.” There are people who know who did it, most notably those who did it. If the world has learned one thing from the actions of Edward Snowden in 2013, it is that secrets about activities in cyber space are very hard to keep, particularly at scale, and especially if they pertain to actions not universally accepted as righteous.
We now have, in the form of WannaCry, further and more direct proof that those "secrets about activities in cyber space", the ones that are "very hard to keep", include malicious code developed for "righteous" purposes. And to those folks who argued that it was okay for the government to sponsor the development of such code because it would always remain under government control, I say this: you were wrong. Furthermore, you will forever remain wrong. There is no way that the creators of malware can ever guarantee control over their creations. And we would be well advised to conduct all of our cybersecurity activities with that in mind.
