Saturday, February 16, 2019

Risk assessment and situational awareness: minding the gender gap

Consider this: a man and a woman get into an elevator.

Which one is doing risk assessment, the man or the woman?

I've been posing this question to random groups of people on the fringes of information security and cyber-workforce events for about a year now, and the results have been very interesting, to say the least.

Almost without exception women respond by saying "the woman." I can honestly say that this is what I had predicted, but what surprised me was how quickly that response came, and how often women proceeded to share their personal elevator strategies (more on these later).

And I have yet to hear a woman say: "I've never really thought about it."

How do men answer? A lot of them say "the woman" and I take that as a positive sign; however, some hesitate before answering and a few seem puzzled. And I've seen some interesting facial expressions when someone in mixed company answers "the woman" very quickly and decisively.

Socially sciency

Of course, me putting this question to random groups of people does not count as formal social science. The reactions that I get may be influenced by the uncontrolled demographics of the group (all male, all female, mixed). That said, I'd love to hear from anyone who is in a position to do a more formal study.

My primary motivation in asking this question was to get a quick sanity check on a hypothesis that I had formed while researching risk perception as it relates to technology (some of the results of that research are illustrated in the graph below - read more about the work here).


What this graph illustrates is the gender gap in technology-related risk perception. Numerous studies have documented this over the course of several decades (for early references, see Flynn, Slovic, and Mertz, "Gender, race, and perception of environmental health risks," Risk Analysis, 14 (1994), pp. 1101-1108).

As far as I know, it was studies of public sentiment around environmental issues that led to the first documentation of a gender gap in technology-related risk perception. The research that I did with my colleague at ESET, Lysa Myers, was to the best of my knowledge the first to show that this gender gap also exists with respect to risks related to digital technologies. That finding led me to hypothesize that women - on average or in the aggregate - are more risk aware than men when it comes to technology.

A counter-argument might be that men are more realistic in their assessment of risk because the true level of risk is lower than women think and closer to the population mean. However, it is my opinion that many technology risks are higher than the mean, therefore I would argue that women are more accurate in their technology risk perception than men (on average or in the aggregate).

Research into the gender and ethnic variations in risk perception has shown that white males, as a whole, see less risk in technology than black males, white females, or black females (these were the names of the categories used by the researchers). But that score - which has been dubbed the white male effect - is the result of a subset of white males seeing drastically less risk than anybody else. This group, possibly 30% of white males, lowers the overall risk scores for all white males, creating the gap you see in this chart from the 1994 Flynn, Slovic, and Mertz study (adapted):

As I indicated earlier, this study was not an outlier; other studies point in the same direction, and I am not aware of any that point in the opposite direction (I did look for them). You can find quite a few studies, as well as deep dives into why some people see less risk in technology than others, at the Cultural Cognition Project at Yale Law School.
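To make concrete what "lowering the overall score" looks like, here is a minimal sketch of how a group-level risk-perception gap is computed from survey ratings. The numbers below are hypothetical placeholders of my own invention, not the actual study data; real instruments use many hazards and far larger samples.

```python
# Illustrative sketch: computing mean perceived risk by demographic group.
# Ratings are hypothetical scores for one technology hazard on a 1-4 scale.
from statistics import mean

ratings = {
    "white male":   [1, 2, 1, 3, 2, 1, 1, 2],
    "white female": [3, 2, 3, 2, 4, 3, 2, 3],
    "black male":   [2, 3, 3, 2, 3, 2, 3, 3],
    "black female": [3, 3, 4, 2, 3, 3, 2, 4],
}

# One mean score per group; the gap is the difference between group means.
group_means = {group: mean(scores) for group, scores in ratings.items()}
for group, m in sorted(group_means.items(), key=lambda kv: kv[1]):
    print(f"{group}: {m:.2f}")
```

Note that, as in the studies, a low group mean can be produced by a subset of very low raters rather than by every member of the group rating risks low.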

What does it all mean? As I suggested in my TEDx talk a few years ago, I think it means that the rate at which new technology risks are created would go down if decision-making roles in tech companies were more evenly distributed between genders.

Back then I said "we need more women in decision-making roles" and some surveys suggest that there are now more women in such roles than there used to be; but I think we are nowhere near the level of gender equality needed to put the brakes on fresh technological blunders.

In the coming months and years I will continue to articulate these views. In the meantime, I have another study concept you might want to consider. Document what happens when you ask women this question: "What goes through your mind if you're alone in an elevator and a man gets on?"

I think you will hear some interesting personal elevator strategies. The ones that I have heard certainly gave me a better sense of just how different life still is for women and men.

Thursday, January 24, 2019

How serious is the cybercrime problem in America?

The short answer to "how serious is the cybercrime problem in America?" is: Way more serious than our government seems to realize. That is one of the conclusions that can be drawn from recent ESET research into public attitudes to cybercrime, cybersecurity, and data privacy.

To check out the details, please visit this article I wrote at WeLiveSecurity, which is where you can download the full report. It has some pretty solid findings that may help us persuade policy makers to move cybercrime deterrence up the public policy agenda and make it the #1 priority that it should already be.

Frankly, as a student of criminology I was shocked to see that respondents thought cybercrime was a more important challenge than drug trafficking or money laundering. Almost equally worrying was the finding that less than half of Americans surveyed think that the authorities, including law enforcement, are doing enough to fight cybercrime.

So here is the conclusion that I wrote for the survey report: unless cybersecurity initiatives and cybercrime deterrence are made a top priority of government agencies and corporations, the rate at which systems and data are abused will continue to rise, further undermining the public’s trust in technology, trust that is vital to America’s economic well-being, now and in the future.

Please take a moment to share this information...thank you!

Sunday, August 12, 2018

What does threat cumulativity mean for the future of digital technology and cybersecurity?

In recent years, most of my presentations about cybersecurity have included a slide titled "Security is cumulative". I made the slide when a group of business people asked if I would speak to them about cybersecurity. As usual, I said I would be delighted to do so, but it would help me to know what aspects of the subject they wanted me to address. The conversation continued like this:
  • Them: “You’ve been at this for a long time, right?” 
  • Me: “Yes, I guess I’ve been researching security for about 30 years.“ 
  • Them: “Well, why not talk about the top five or six things that you’ve learned.” 
Why not, indeed. The idea appealed to me, and so I created a new slide deck to capture my thoughts. My first thought was this: security is cumulative. Beneath it I wrote words to this effect: To protect information systems and the data they process you have to anticipate and defend against new threats while also defeating old threats.

Ever since I wrote that, I have seen confirmation after confirmation that it is correct. Of course, there’s probably some confirmation bias at work, but consider these recent news stories:
That is five examples in 10 days – July 26 to August 4, 2018 – five headlines that reflect the reality that “security is cumulative”. While many information security professionals have, over the years, stressed the need to learn from history, I think this aspect of cybersecurity, this need to defend against an accumulating list of threats, deserves a name, so I am suggesting this one: threat cumulativity.

Here is my proposed definition of threat cumulativity: the tendency of new technologies to spawn new threats that do not displace old threats but add to them.
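This definition can be sketched in code. Below is a toy model, with era and threat names that are purely illustrative rather than any formal taxonomy, showing the defining property: each era's new threats are added to the defender's list, and nothing ever drops off.

```python
# Toy model of threat cumulativity: new technologies spawn new threats
# that add to, rather than displace, the old ones.
new_threats_by_era = [
    ("floppy-disk era",  {"boot-sector viruses", "file infectors"}),
    ("internet era",     {"email worms", "phishing"}),
    ("mobile/cloud era", {"ransomware", "credential stuffing"}),
]

must_defend_against = set()
for era, new_threats in new_threats_by_era:
    must_defend_against |= new_threats   # old threats are never retired
    print(f"{era}: defending against {len(must_defend_against)} threat types")
```

The set only ever grows, which is the whole point: a defender in the third era still has to handle the threats of the first.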

Of course, there will be objections to this term, starting with "cumulativity is not a word" and "everybody knows this already." Well, cumulativity is a word, as I will explain in a moment. As for "everybody knows this already" let me be blunt: that is one of the most persistent errors in security thinking, kept alive by security experts who are out of touch with the relationship between technology and people.

To be clear, if you are a security expert, you probably do know that threats are cumulative. But there are a whole bunch of people whose work impacts security who have not internalized the implications of this phenomenon. I think that having a term to describe the phenomenon will help to spread awareness of its implications.

Another objection to "threat cumulativity" is likely to be: "you mean risks, not threats, so you should be talking about risk cumulativity." This is a non-trivial point and so I am going to address it in a separate article. But I think there are good strategic reasons for using 'threat' here rather than 'risk'.

As for cumulativity, it is a term used in linguistic semantics to describe an expression (X) for which the following holds: "If X is true of both a and b, then it is also true of the combination of a and b. Example: If two separate entities can be said to be "water", then combining them into one entity will yield more "water"." (Wikipedia)

Now, I am not an expert in linguistic semantics, but I do happen to have a decent degree in English Language and Literature. To my way of thinking, appropriating cumulativity for the security lexicon is a valid use of the word, one that can help people understand - and defend against - the phenomenon it purports to describe.

I will be writing more about threat cumulativity and furnishing examples of how it appears - to my eyes at least - to spell trouble for new technologies, some of which are the object of much hope for future prosperity.

Note: the illustration at the top of the article is from the works of Vauban, a pioneer in physical security, namely fortifications.

Sunday, May 06, 2018

Conversation starter for cybersecurity and workforce networking events

Elevator icon, created and released to public domain by Stephen Cobb
Suppose you work in cybersecurity and/or workforce development and you find yourself talking to other people in those fields, maybe during a networking break between conference sessions, or at one of those randomly seated lunches. Everyone has been introduced, said where they're from and for whom they work, but now there's a lull in the conversation. Try restarting it with this question:

A man and a woman get into an elevator; which one is doing risk assessment?

I have been asking strangers this question for a while now and the responses are very interesting. I don't want to tell you what they are at this point - that is a separate blog post. (I'm trying to devise a more formal study of responses from a range of audiences.)

But if you know me, or know of my research into gender and risk perception, you might be able to imagine where I'd like to see the conversation go after this icebreaker: toward a deeper understanding of how our sense of risk varies based on who we are, and how our experiences in life have led us to differing levels of concern about potential threats to our wellbeing.

You might also want to ask this question outside of cybersecurity circles. Maybe in class? And you could change it up a little. For example, I have used "which one is more likely to be doing risk assessment?"

You could also ask this question on Twitter or Facebook (feel free to use the image above - frankly I think it's a daft sign, but I made it based on a real one that I saw recently in a very new office building in San Francisco).

Thursday, February 01, 2018

Cybersecurity and data privacy research: a modest eight piece portfolio

Research that I have done in cybersecurity and data privacy over the last few years has borne fruit in a number of different places so I wanted to provide a centralized reference point for eight of the main outputs. This should make it easier for folks to find them. I have annotated the items for context and relevance. (Note: I have formatted all the PDFs for Letter size paper but some of them use UK English spelling, others are US English.)

1. Code as a weapon

Document: Malware is Called Malicious for a Reason: The Risks of Weaponizing Code (PDF)

History: Published in the 6th International Conference on Cyber Conflict (CyCon) Proceedings, P. Brangetto, M. Maybaum, J. Stinissen (Eds.) IEEE, 2014.

Context:  I worked with my friend and colleague Andrew Lee, who was then CEO of ESET North America, to articulate several arguments against using code as a weapon. In the world of companies and consumers, program code that you run on someone else's system without permission is typically referred to as malicious software or malware. A single "infection" can cause an enterprise hundreds of millions of dollars' worth of damage (as in the WannaCry and NotPetya attacks of 2017, which used code developed by the NSA). We argued that the development of "righteous malware" by the military and intel communities, a process sometimes referred to as weaponizing code, has proceeded with insufficient input from the people who defend against, and clean up after, real world malware attacks. The consensus of this community is that military deployment of malicious code is at best a very risky proposition.

(While I was delighted that the paper was accepted for publication, and enjoyed traveling to NATO's Cycon event in Estonia in May of 2014 to present it, one of the reviewer's comments - "not very academic" - stung a little. Consequently, in August of 2014 I enrolled in a Master of Science program at the University of Leicester in England.)

2. Cybercrime and criminology

Document: The main problem with Situational Crime Prevention is that it fails to address the root causes of crime: a critical discussion

History: This 4,000 word essay, which includes an extensive reference list, was the first piece of work that I produced for my MSc in the Department of Criminology at the University of Leicester.

Context:  The essay received a good grade and writing it required me to think hard about some of the fundamental issues in criminology. Presented in the traditional English academic essay format, a proposition is argued for and against. In this case, the idea of practical crime prevention is set against the need to understand and address crime's root causes. My argument was framed in the context of cybercrime, aspects of which - such as attribution, scale, and geography - challenge traditional approaches to crime reduction. Of particular value to my evolving analysis of cybercrime was the early work on Routine Activity Theory performed by Felson and Cohen. Way back in 1979 they warned that: "the opportunity for predatory crime appears to be enmeshed in the opportunity structure for legitimate activities".

3. Measuring cybercrime

Document: Sizing cybercrime: incidents and accidents, hints and allegations

History: Paper selected for publication and presentation at Virus Bulletin, 2015. There is actually a video of the presentation that you can watch here.

Context: Just as defense of an information system means you first need to map and measure it, we need to know the scope and scale of cybercrime before we can effectively fight it. In many countries, the government tracks the number of murders, car thefts, bank robberies, and other crimes. This data helps inform budgeting and resource allocation while enabling the measurement of efforts to reduce crime. Unfortunately, few countries, if any, have been tracking cybercrime. I argue that this abdication of governmental responsibility severely hampers efforts to fight cybercrime and do the work of cybersecurity. In the US, the federal government now directs inquiries about the level of cybercrime towards surveys performed by commercial organizations that have a vested interest in selling security-related products and services. My review of the literature and the surveys themselves shows that many lack academic rigor and all are open to claims of bias.

4. Cyber futures and diversity: a TEDx talk

Document: Ones and Zeroes: a tale of two futures (video)

History: Talk given at TEDx San Diego, 2015, in which I drew on three things I learned while studying criminology, plus the inspiring young women of Securing Our eCity's Cyber Boot Camp.

Context: The organizers invited speakers to look to the future. I suggested that the future looks bleak if we don't step up our game in the realm of cybersecurity. I referenced crime deterrence and sentencing, Routine Activity Theory, Cultural Theory of Risk Perception, and White Male Effect. I ended by arguing that security would improve if we increased diversity in decision-making roles in technology companies.

5. The cybersecurity skills gap

Document: Mind this Gap: cybercrime and the cybersecurity skills gap

History: Paper selected for publication and presentation at Virus Bulletin, 2015.

Context: The more closely I looked at the growth in cybercrime, the more apparent it became that organizations were having great difficulty staffing cybersecurity positions.

6. Data privacy versus data protection in the US

Document:  Data privacy and data protection: US law and legislation

History: This white paper is based on an essay I wrote for my MSc in Security and Risk Management.

Context: As an essay, the document did not receive a great grade (it was deemed "not argumentative enough"). However, the underlying research was sound and, when formatted as a white paper, it has proved to be very useful for anyone trying to understand the American approach to data privacy in general, and more specifically, how this differs from the European notion of data protection, as embodied in the EU's General Data Protection Regulation or GDPR.

7. What it takes to be an effective CISO

Document: Getting to know CISOs: Challenging assumptions about closing the cybersecurity skills gap.

History: This is my MSc dissertation, all 18,000 words and 84 pages of it.

Context: From the abstract: "Pervasive criminal abuse of information and communication technologies has increased the demand for people who can take on the task of securing organizations against the increasing scope and scale of threats. With demand for these cybersecurity professionals growing faster than the supply, a problematic “cybersecurity skills gap” threatens the ability of organizations to adequately protect the information systems upon which they, and society at large, are now heavily reliant. This dissertation focuses on one barrier to closing the cybersecurity skills gap: the current paucity of knowledge about key work roles within the cybersecurity workforce – such as Chief Information Security Officer or CISO – and questionable assumptions about what it takes to perform such roles effectively."

8. Risk perception in cyber: a gendered perspective

Document: Adventures in cybersecurity research: risk, cultural theory, and the white male effect

History: A two-part article, published online, to present the results of the first ever survey of cyber risks relative to gender, ethnicity, and non-cyber hazards.

Context: If you are an information security professional, chances are you will have spent a fair amount of time and effort trying to get people and organizations to do more to protect their computers and data from abuse; and you will know that not everyone takes the risks of digital technology as seriously as you do. I asked myself why some people don't listen to experts, and why some people see less risk than others. Aided by my ESET research colleague, Lysa Myers, I conducted a survey to measure the white male effect and related phenomena. Along the way we found that criminal hacking is now perceived as a serious risk to health and prosperity by a significant section of the population.

Thursday, December 21, 2017

Cybersecurity, risk perceptions, predictions and trends for 2018

A quick update on research into Americans' perception of risks related to digital technology, as well as some predictions for cybersecurity in 2018.

Risk perception and cybersecurity

Over the summer I conducted some research with my ESET colleague (@LysaMyers) on the topic of risk perception as it relates to hazards arising from the use of digital technologies, which can be termed "cyber risks" for short. Our goal was to better understand why different people see different levels of risk in a range of hazards, and why some people listen to experts when it comes to levels of risk, but others do not.

For the past few months we have been analyzing and reporting on this work. Several of our findings proved newsworthy, like the extent to which concern about criminal hacking has permeated American culture. This was the subject of an ESET press release.

We also documented evidence of a phenomenon that others have dubbed the "White Male Effect" in risk perception. First documented in 1994 with respect to a range of hazards, the effect is visible in our 2017 survey results here:


You can see more results of our research in several formats, from long to short:
For background on the cultural theory of risk perception that we used in our research, I encourage you to check out Dan Kahan's papers at the Cultural Cognition Project at Yale Law School. Prof. Kahan was very helpful to us as we designed our survey instrument (which is available to anyone who would like to repeat the survey).

Cybersecurity trends and predictions

As usual, I participated in ESET's annual review of security trends, this year contributing a chapter on critical infrastructure hacks, new malware for which was discovered by my colleagues. The Trends report is available here: https://www.welivesecurity.com/2017/12/14/cybersecurity-trends-2018-the-costs-of-connection/

Another annual ritual is my predictions webinar. A full recording of the December 2017 webinar that looks ahead to 2018 is available to watch on demand. Access is gated, but I think it is worth registering and should not result in a bunch of spam. Here is the agenda:


Note that regulatory risk was the top theme. And the regulation that tops them all is GDPR, the General Data Protection Regulation that comes into effect in May of 2018. I wrote about GDPR several times this year. In fact, the following article was my most widely read contribution to WeLiveSecurity in 2017: https://www.welivesecurity.com/2017/05/23/gdpr-is-world-ready-cybersecurity-impact/

Here's to all of us enjoying a safer year in 2018!

Saturday, September 09, 2017

Steps to take now that the Equifax breach has affected almost half of adults in the US

The Equifax security breach, in which "identity theft enabling data" was stolen from a company that sells identity theft protection products, may well surpass the Target breach as one of the most impactful ever, at least from a consumer perspective.

As Lysa Myers, my ESET colleague, has noted, this breach appears to have occurred between mid-May and July. It was discovered by Equifax on July 29 and the scale is staggering: 143 million people affected, almost half of all adults in the US!

For those wondering how to identify or mitigate problems caused by this breach, Lysa has some good advice. Unfortunately, the response from Equifax has not been exemplary and there are concerns that it might be trying to restrict consumer rights of redress as part of its "help" process (see this Atlantic article and the update below).

For those wondering how such a thing could happen, I suggest "stay tuned" to your favorite cybersecurity news feeds. We have some information already (Equifax may have fallen behind in applying security updates to its Internet-facing Web applications). However, I am sure there will be more details to come.

In the meantime, I leave you with this weird fact: A share of Equifax (EFX) stock was worth about $143 before the breach, which affected 143 million people. It dropped dramatically after news of the breach broke, closing on Friday at $123. That's a drop of more than 13%. Yet all the indications are that preventing the breach could have been as easy as, you guessed it: 1-2-3.
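For the record, here is the arithmetic behind that percentage, using the rounded share prices quoted above:

```python
# Percentage drop in Equifax (EFX) share price, from the rounded
# before/after figures cited in the post.
before, after = 143.0, 123.0
drop_pct = (before - after) / before * 100
print(f"{drop_pct:.1f}%")  # roughly 14%, i.e. more than 13%
```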

Update: Thanks to Brian Krebs for flagging the change that Equifax made to its breach alert page. This makes it clear that "the arbitration clause and class action waiver included in the Equifax and TrustedID Premier terms of use does not apply to this cybersecurity incident."

I am providing the address of the breach alert page below, but stress that you use it at your own risk. The fact that I feel compelled to say that is a reflection of how badly, in my opinion, Equifax has been handling the breach response so far: https://www.equifaxsecurity2017.com/

Sunday, July 09, 2017

US-Russia cybersecurity talks: right script, wrong actors?

Should the US and Russia hold talks on cybersecurity? A lot of people are shouting "No!" and I think I understand why, but in my opinion that's the wrong answer, albeit for the right reasons. Just consider these two propositions:

A. The US and Russia should, bilaterally and globally, seek ways to deter cybercrime and reduce cyber-conflict.

B. President Trump and President Putin should, bilaterally and globally, seek ways to deter cybercrime and reduce cyber-conflict.

I would argue that A is not only a good idea but has an aura of historical inevitability, while B is a very disquieting prospect. Why? Because I don't think the Trump administration understands how diplomatic negotiation works, not to mention the fact that Trump himself has openly disparaged many of the very people whose expertise and cooperation is needed to protect US interests during such negotiations.

In other words, I believe the US and Russia, and every other country, must work together to deter cybercrime and reduce cyber-conflict. That is the right script. That is the direction the world will take, if not now, then at some point in the future. But Trump and Putin are the wrong actors for this script; both lack the levels of credibility and legitimacy required to make meaningful progress.

"Good luck with that"

Of course, I am accustomed to hearing "Good luck with that" and "Ain't gonna happen" when I say to people "international cooperation and global treaties are the only way to make a serious dent in cybercrime and cyber-conflict." But history tells me I am right, even if it doesn't tell me how old I will be when that eventually proves to be true.

Saturday, May 27, 2017

Malware prophecy realized: WannaCry, NSA EternalBlue, CIA Athena, and more

You probably noticed massive news coverage of the recent outbreak of malicious code called WannaCryptor, WannaCry, Wcry, and other variations on that theme. In fact, WannaCry itself was a variation on a theme, the ransomware theme. WannaCry made so much noise because it added a powerful worm capability to the basic theme of secretly encrypting your files and holding them for ransom. Plausible estimates of the cost of this malware outbreak to organizations and individuals range from $1 billion to $4 billion.

All of this was made possible by something called the EternalBlue SMB exploit, computer code developed at the expense of US taxpayers by the National Security Agency (NSA). Now that copies of this malicious code have been delivered to hundreds of thousands of information system operators in more than 100 countries around the world, it might be wise to ask: "how did this happen?"

How did this happen?

Unfortunately, I am not privy to any of the technical details about how this happened beyond those already published by my colleagues in the cybersecurity profession (there's a good collection of information on We Live Security, a site maintained by my employer, ESET). However, in practical terms I do know how this happened, and it goes like this:
  1. The NSA helps defend the US by gathering sensitive information. One way to do that is to install software on computers without the knowledge or permission of their owners. 
  2. Installing software on computers without the knowledge or permission of their owners has always been problematic, not least because it can have unexpected consequences as well as serve numerous criminal purposes, like stealing or destroying or ransoming information.
  3. Back in the 1980s there were numerous attempts to create self-replicating programs (computer worms and viruses) that inserted themselves on multiple computers without permission. Many of these caused damage and disruption even though that was not the intent of their designers.
  4. Programs designed to help computer owners block unauthorized code were soon developed. These programs were generically referred to as antivirus software although unauthorized code was eventually dubbed malware, short for malicious software.
  5. The term malware reflects the overwhelming consensus among people who have spent time trying to keep unauthorized code off systems that "the good virus" does not exist. In other words, unauthorized code has no redeeming qualities and all system owners have a right to protect against it.
  6. Despite this consensus among experts, which had grown even stronger in recent years due to the industrial scale at which malware is now exploited by criminals, the NSA persevered with its secret efforts to install software on computers without the knowledge or permission of their owners. 
  7. Because the folks developing such code thought of it as a good thing, the term "righteous malware" was coined (definition: software deployed with intent to perform an unauthorized process that will impact the confidentiality, integrity, or availability of an information system to the advantage of a party to a conflict or supporter of a cause).
  8. Eventually, folks who had warned that righteous malware could not be kept secret forever were proven correct: a whole lot of it was leaked to the public, including EternalBlue. 
  9. Criminals were quick to employ the "leaked" NSA code to increase the speed at which their malicious code spread, for example using EternalBlue to help deliver cryptocurrency mining malware as well as ransomware.
  10. Currently there are numerous other potentially dangerous taxpayer-funded malicious code exploits in the hands of US government agencies, including the CIA (for example, its Athena malware is capable of hijacking all versions of the Microsoft Windows operating system, from XP to Windows 10).
So that's how US government funded malware ends up messing up computers all around the world. There's nothing magical or mysterious about it, just a series of chancy decisions that were consciously made in spite of warnings that this could be the outcome.

Warning signs

One such warning was the paper about "righteous malware" that I presented to the 6th International Conference on Cyber Conflict (CyCon) organized by the NATO Cooperative Cyber Defence Center of Excellence or CCDCoE in Estonia. You can download the paper here. My co-author on the paper was Andrew Lee, CEO of ESET North America, and we have both spent time in the trenches fighting malicious code. We were well aware that antivirus researchers had made repeated public warnings about the risks of creating and deploying "good" malware.

One comprehensive warning was published back in 1994, by Vesselin Bontchev, then a research associate at the Virus Test Center of the University of Hamburg. His article titled "Are 'Good' Computer Viruses Still a Bad Idea?" contained a handy taxonomy of reasons why good viruses are a bad idea, based on input from numerous AV experts. Andrew and I put these into a table in our paper:

Technical Reasons
- Lack of Control: spread cannot be controlled; unpredictable results
- Recognition Difficulty: hard to allow good viruses while denying bad ones
- Resource Wasting: unintended consequences (typified by the "Morris Worm")
- Bug Containment: difficulty of fixing bugs in code once released
- Compatibility Problems: may not run when needed, or cause damage when run
- Effectiveness: risks of self-replicating code over conventional alternatives

Ethical and Legal Reasons
- Unauthorized Data Modification: unauthorized system access or data changes are illegal or immoral
- Copyright and Ownership Problems: could impair support or violate copyright of regular programs
- Possible Misuse: code could be used by persons with malicious intent
- Responsibility: sets a bad example for persons with inferior skills or morals

Psychological Reasons
- Trust Problems: potential to undermine user trust in systems
- Negative Common Meaning: anything called a virus is doomed to be deemed bad

We derived a new table from this, one that accounted for both self-replicating code and inserted code, such as trojans. Our table presented the "righteous malware" problem as a series of questions that should be answered before such code is deployed:

- Control: Can you control the actions of the code in all environments it may infect?
- Detection: Can you guarantee that the code will complete its mission before detection?
- Attribution: Can you guarantee that the code is deniable or claimable, as needed?
- Legality: Will the code be illegal in any jurisdictions in which it is deployed?
- Morality: Will deployment of the code violate treaties, codes, and other international norms?
- Misuse: Can you guarantee that none of the code, or its techniques, strategies, or design principles, will be copied by adversaries, competing interests, or criminals?
- Attrition: Can you guarantee that deployment of the code, including knowledge of the deployment, will have no harmful effects on the trust that citizens place in their government and institutions, including electronic commerce?

Clearly, the focus of our paper was the risks of deploying righteous malware, but many of those same risks attach to the mere development of righteous malware. Consider one of the arguments we addressed from the "righteous malware" camp: "Don't worry, because if anything goes wrong nobody will know it was us that wrote and/or released the malware". Here is our response from that 2014 paper:
This assertion reflects a common misunderstanding of the attribution problem, which is defined as the difficulty of accurately attributing actions in cyber space. While it can be extremely difficult to trace an instance of malware or a network penetration back to its origins with a high degree of certainty, that does not mean “nobody will know it was us.” There are people who know who did it, most notably those who did it. If the world has learned one thing from the actions of Edward Snowden in 2013, it is that secrets about activities in cyber space are very hard to keep, particularly at scale, and especially if they pertain to actions not universally accepted as righteous.
We now have, in the form of WannaCry, further and more direct proof that those "secrets about activities in cyber space", the ones that are "very hard to keep", include malicious code developed for "righteous" purposes. And to those folks who argued that it was okay for the government to sponsor the development of such code because it would always remain under government control I say this: you were wrong. Furthermore, you will forever remain wrong. There is no way that the creators of malware can ever guarantee control over their creations. And we would be well advised to conduct all of our cybersecurity activities with that in mind.

Monday, May 15, 2017

WannaCry ransomware: mayhem, money, scenarios, hypotheses, and implications

I think my ESET colleague Michael Aguilar had the best opening for an article on Friday's epic WannaCry ransomware outbreak:
"That escalated quickly! For those of you who did not read any news on Friday (or had your heads in the sand), you need to know that a massive tidal wave of malware just struck Planet Earth, creating gigantic waves in the information security sphere and even bigger waves for the victims." (We Live Security).
The English language version of the message WannaCry presents to victims 

And in the days since Friday you may have been caught up in the waves of breathless WannaCry reporting, tracking, analysis, advice (hashtag #wannacry). All of which was soon followed by the first rounds of finger-pointing and victim-blaming. I am hoping to write more about the latter shortly, but for now I want to speculate about "what the heck happened" as they say in Fargo-land.

Never speculate?

As a rule, cybersecurity professionals refuse to speculate in public about what the heck is going on in any given data breach or malicious code scenario. And by speculate I mean suggest explanations for the events at hand that go beyond the facts at hand. You especially don't want to point fingers at perpetrators unless you have an abundance of corroborating evidence (including some that is not digital, meaning "more than just code analysis").

All that said, it is helpful, and entirely justified IMPO, to consider, in the abstract, possible explanations for an exceptional course of events such as we have just witnessed with WannaCry.

1. It was just a money-making play: WannaCry presented itself as ransomware, the goal of which is to make money by charging victims for the key to their data, which the ransomware has encrypted without their permission. You can make a lot of money with ransomware, but it works best if you roll out your campaign in a controlled fashion, one that allows you to keep up with payments and key requests and customer service calls (yes, that is a thing; for example, many victims need help figuring out Bitcoin, the preferred method of ransom payment).
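
The ransomware business model described above hinges on one property of symmetric encryption: without the attacker's key, the ciphertext is effectively useless, so victims without backups end up negotiating for the key. Here is a deliberately toy Python sketch of that property, using a SHA-256-based XOR keystream purely for illustration (this is not a real cipher and bears no resemblance to WannaCry's actual AES/RSA scheme):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the key by hashing
    # key + counter blocks (toy construction, illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)              # only the attacker holds this
plaintext = b"quarterly-report.xlsx contents"
ciphertext = xor_crypt(plaintext, key)

# With the key, the data comes back; with any other key, it's garbage.
assert xor_crypt(ciphertext, key) == plaintext
assert xor_crypt(ciphertext, b"\x00" * 32) != plaintext
```

The point of the sketch is the asymmetry of the situation, not the crypto: recovery without the key means brute-forcing a 256-bit secret, which is computationally infeasible, and that is precisely what makes the "pay for the key" business model work.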

2. It was a narcissistic idiot play: Somebody figured they could create better ransomware than anyone else but didn't anticipate all of the implications of adding the NSA's EternalBlue SMB exploit to a standard phishing-based ransomware program (and yes, that link takes any person with an internet connection to the EternalBlue code repository - which IMPO is crazy, but that's another article).

3. It was an intentional mayhem play: WannaCry spread faster than any sane cybercriminal would intentionally spread a ransomware campaign. So maybe the idea was to cause mayhem. When ESET holds its annual Cyber Boot Camp my colleague Cameron will award "mayhem points" to students who come up with a particularly imaginative way of causing chaos on the test network, but we conduct that camp under tightly controlled conditions in a secured facility. Nation states or their surrogates may feel inclined to conduct mayhem in the real world, as a distraction, to send a message, or even to undermine consumer confidence in technologies to which some countries do not yet have access, and so on.

4. It was a revenge play: What better way to show you are pissed off at the US government in general and the NSA in particular than to wreak global cyber-havoc with malware that is openly enabled by code developed by the NSA, leaked code that the NSA refused to barter for?

So what are the implications?

Each of these four scenarios seems plausible to me, but of course I'm going to refuse to speculate as to whether one is more plausible than another. What I will assert is that pondering the scenarios may help investigators consider the full range of possibilities as they seek to identify the perpetrators.

For more background on WannaCry and what you should be doing to protect your IT systems against it, see Michael's original We Live Security article. There is also a follow up article on We Live Security and more to come, so be sure to sign up for the email alerts.

If you are interested in thinking more about what it means for government agencies to handle malware, consider this article and the attached peer-reviewed paper, presented at the NATO conference on cyber conflict.