Thursday, December 21, 2017

Cybersecurity, risk perceptions, predictions and trends for 2018

A quick update on research into Americans' perception of risks related to digital technology, as well as some predictions for cybersecurity in 2018.

Risk perception and cybersecurity

Over the summer I conducted some research with my ESET colleague (@LysaMyers) on the topic of risk perception as it relates to hazards arising from the use of digital technologies, which can be termed "cyber risks" for short. Our goal was to better understand why different people see different levels of risk in a range of hazards, and why some people listen to experts when it comes to levels of risk, but others do not.

For the past few months we have been analyzing and reporting on this work. Several of our findings proved newsworthy, such as the extent to which concern about criminal hacking has permeated American culture. This was the subject of an ESET press release.

We also documented evidence of a phenomenon that others have dubbed the "White Male Effect" in risk perception. First documented in 1994 with respect to a range of hazards, the effect is visible in our 2017 survey results here:


You can see more results of our research in several formats, from long to short:
For background on the cultural theory of risk perception that we used in our research, I encourage you to check out Dan Kahan's papers at the Cultural Cognition Project at Yale Law School. Prof. Kahan was very helpful to us as we designed our survey instrument (which is available to anyone who would like to repeat the survey).

Cybersecurity trends and predictions

As usual, I participated in ESET's annual review of security trends, this year contributing a chapter on attacks against critical infrastructure, including new malware discovered by my colleagues. The Trends report is available here: https://www.welivesecurity.com/2017/12/14/cybersecurity-trends-2018-the-costs-of-connection/

Another annual ritual is my predictions webinar. A full recording of the December 2017 webinar that looks ahead to 2018 is available to watch on demand. Access is gated, but I think it is worth registering, and it should not result in a bunch of spam. Here is the agenda; click to access:


Note that regulatory risk was the top theme. And the regulation that tops them all is GDPR, the General Data Protection Regulation that comes into effect in May of 2018. I wrote about GDPR several times this year. In fact, the following article was my most widely read contribution to WeLiveSecurity in 2017: https://www.welivesecurity.com/2017/05/23/gdpr-is-world-ready-cybersecurity-impact/

Here's to all of us enjoying a safer year in 2018!

Saturday, September 09, 2017

Steps to take now that the Equifax breach has affected almost half of US adults

The Equifax security breach, in which "identity theft enabling data" was stolen from a company that sells identity theft protection products, may well surpass the Target breach as one of the most impactful ever, at least from a consumer perspective.

As Lysa Myers, my ESET colleague, has noted, this breach appears to have occurred between mid-May and July. It was discovered by Equifax on July 29 and the scale is staggering: 143 million people affected, almost half of all adults in the US!

For those wondering how to identify or mitigate problems caused by this breach, Lysa has some good advice. Unfortunately, the response from Equifax has not been exemplary and there are concerns that it might be trying to restrict consumer rights of redress as part of its "help" process (see this Atlantic article and the update below).

For those wondering how such a thing could happen, I suggest "stay tuned" to your favorite cybersecurity news feeds. We have some information already (Equifax may have fallen behind in applying security updates to its Internet-facing Web applications). However, I am sure there will be more details to come.

In the meantime, I leave you with this weird fact: A share of Equifax (EFX) stock was worth about $143 before the breach, which affected 143 million people. It dropped dramatically after news of the breach broke, closing on Friday at $123. That's a drop of more than 13%. Yet all the indications are that preventing the breach could have been as easy as, you guessed it: 1-2-3.
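For the curious, that percentage checks out; here is a quick back-of-the-envelope calculation using the rounded prices quoted above:

```python
# Back-of-the-envelope check of the EFX share price drop,
# using the rounded prices quoted above.
pre_breach = 143.0   # approximate share price before the breach news
post_breach = 123.0  # approximate Friday close after the news broke

drop_pct = (pre_breach - post_breach) / pre_breach * 100
print(f"Drop: {drop_pct:.1f}%")  # Drop: 14.0%
```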

Update: Thanks to Brian Krebs for flagging the change that Equifax made to its breach alert page. This makes it clear that "the arbitration clause and class action waiver included in the Equifax and TrustedID Premier terms of use does not apply to this cybersecurity incident."

I am providing the address of the breach alert page below, but stress that you use it at your own risk. The fact that I feel compelled to say that is a reflection of how badly, in my opinion, Equifax has been handling the breach response so far: https://www.equifaxsecurity2017.com/

Sunday, July 09, 2017

US-Russia cybersecurity talks: right script, wrong actors?

Should the US and Russia hold talks on cybersecurity? A lot of people are shouting "No!" and I think I understand why, but in my opinion that's the wrong answer, albeit for the right reasons. Just consider these two propositions:

A. The US and Russia should, bilaterally and globally, seek ways to deter cybercrime and reduce cyberconflict.

B. President Trump and President Putin should, bilaterally and globally, seek ways to deter cybercrime and reduce cyberconflict.

I would argue that A is not only a good idea but has an aura of historical inevitability, while B is a very disquieting prospect. Why? Because I don't think the Trump administration understands how diplomatic negotiation works, not to mention the fact that Trump himself has openly disparaged many of the very people whose expertise and cooperation are needed to protect US interests during such negotiations.

In other words, I believe the US and Russia, and every other country, must work together to deter cybercrime and reduce cyberconflict. That is the right script. That is the direction the world will take, if not now, then at some point in the future. But Trump and Putin are the wrong actors for this script; both lack the levels of credibility and legitimacy required to make meaningful progress.

"Good luck with that"

Of course, I am accustomed to hearing "Good luck with that" and "Ain't gonna happen" when I say to people "international cooperation and global treaties are the only way to make a serious dent in cybercrime and cyberconflict." But history tells me I am right, even if it doesn't tell me how old I will be when that eventually proves to be true.

Saturday, May 27, 2017

Malware prophecy realized: WannaCry, NSA EternalBlue, CIA Athena, and more

You probably noticed massive news coverage of the recent outbreak of malicious code called WannaCryptor, WannaCry, Wcry, and other variations on that theme. In fact, WannaCry itself was a variation on a theme, the ransomware theme. WannaCry made so much noise because it added a powerful worm capability to the basic theme of secretly encrypting your files and holding them for ransom. Plausible estimates of the cost of this malware outbreak to organizations and individuals range from $1 billion to $4 billion.

All of this was made possible by something called the EternalBlue SMB exploit, computer code developed at the expense of US taxpayers by the National Security Agency (NSA). Now that copies of this malicious code have been delivered to hundreds of thousands of information system operators in more than 100 countries around the world, it might be wise to ask: "How did this happen?"

How did this happen?

Unfortunately, I am not privy to any of the technical details about how this happened beyond those already published by my colleagues in the cybersecurity profession (there's a good collection of information on We Live Security, a site maintained by my employer, ESET). However, in practical terms I do know how this happened, and it goes like this:
  1. The NSA helps defend the US by gathering sensitive information. One way to do that is to install software on computers without the knowledge or permission of their owners. 
  2. Installing software on computers without the knowledge or permission of their owners has always been problematic, not least because it can have unexpected consequences as well as serve numerous criminal purposes, like stealing or destroying or ransoming information.
  3. Back in the 1980s there were numerous attempts to create self-replicating programs (computer worms and viruses) that inserted themselves on multiple computers without permission. Many of these caused damage and disruption even though that was not the intent of their designers.
  4. Programs designed to help computer owners block unauthorized code were soon developed. These programs were generically referred to as antivirus software although unauthorized code was eventually dubbed malware, short for malicious software.
  5. The term malware reflects the overwhelming consensus among people who have spent time trying to keep unauthorized code off systems that "the good virus" does not exist. In other words, unauthorized code has no redeeming qualities and all system owners have a right to protect against it.
  6. Despite this consensus among experts, which had grown even stronger in recent years due to the industrial scale at which malware is now exploited by criminals, the NSA persevered with its secret efforts to install software on computers without the knowledge or permission of their owners. 
  7. Because the folks developing such code thought of it as a good thing, the term "righteous malware" was coined (definition: software deployed with intent to perform an unauthorized process that will impact the confidentiality, integrity, or availability of an information system to the advantage of a party to a conflict or supporter of a cause).
  8. Eventually, folks who had warned that righteous malware could not be kept secret forever were proven correct: a whole lot of it was leaked to the public, including EternalBlue.
  9. Criminals were quick to employ the "leaked" NSA code to increase the speed at which their malicious code spread, for example using EternalBlue to help deliver cryptocurrency mining malware as well as ransomware.
  10. Currently there are numerous other potentially dangerous taxpayer-funded malicious code exploits in the hands of US government agencies, including the CIA (for example, its Athena malware is capable of hijacking all versions of the Microsoft Windows operating system, from XP to Windows 10).
So that's how US government funded malware ends up messing up computers all around the world. There's nothing magical or mysterious about it, just a series of chancy decisions that were consciously made in spite of warnings that this could be the outcome.

Warning signs

One such warning was the paper about "righteous malware" that I presented at the 6th International Conference on Cyber Conflict (CyCon), organized by the NATO Cooperative Cyber Defence Centre of Excellence (CCDCoE) in Estonia. You can download the paper here. My co-author on the paper was Andrew Lee, CEO of ESET North America, and we have both spent time in the trenches fighting malicious code. We were well aware that antivirus researchers had made repeated public warnings about the risks of creating and deploying "good" malware.

One comprehensive warning was published back in 1994, by Vesselin Bontchev, then a research associate at the Virus Test Center of the University of Hamburg. His article titled "Are 'Good' Computer Viruses Still a Bad Idea?" contained a handy taxonomy of reasons why good viruses are a bad idea, based on input from numerous AV experts. Andrew and I put these into a handy table in our paper:

Technical Reasons
- Lack of Control: spread cannot be controlled; unpredictable results
- Recognition Difficulty: hard to allow good viruses while denying bad ones
- Resource Wasting: unintended consequences (typified by the “Morris Worm”)
- Bug Containment: difficulty of fixing bugs in code once released
- Compatibility Problems: may not run when needed, or may cause damage when run
- Effectiveness: risks of self-replicating code over conventional alternatives

Ethical and Legal Reasons
- Unauthorized Data Modification: unauthorized system access or data changes are illegal or immoral
- Copyright and Ownership Problems: could impair support or violate the copyright of regular programs
- Possible Misuse: code could be used by persons with malicious intent
- Responsibility: sets a bad example for persons with inferior skills or morals

Psychological Reasons
- Trust Problems: potential to undermine user trust in systems
- Negative Common Meaning: anything called a virus is doomed to be deemed bad

We derived a new table from this, one that accounted for both self-replicating code and inserted code, such as trojans. Our table presented the "righteous malware" problem as a series of questions that should be answered before such code is deployed:

- Control: Can you control the actions of the code in all environments it may infect?
- Detection: Can you guarantee that the code will complete its mission before detection?
- Attribution: Can you guarantee that the code is deniable or claimable, as needed?
- Legality: Will the code be illegal in any jurisdictions in which it is deployed?
- Morality: Will deployment of the code violate treaties, codes, and other international norms?
- Misuse: Can you guarantee that none of the code, or its techniques, strategies, or design principles, will be copied by adversaries, competing interests, or criminals?
- Attrition: Can you guarantee that deployment of the code, including knowledge of the deployment, will have no harmful effects on the trust that citizens place in their government and institutions, including electronic commerce?

Clearly, the focus of our paper was the risks of deploying righteous malware, but many of those same risks attach to the mere development of righteous malware. Consider one of the arguments we addressed from the "righteous malware" camp: "Don't worry, because if anything goes wrong nobody will know it was us that wrote and/or released the malware". Here is our response from that 2014 paper:
This assertion reflects a common misunderstanding of the attribution problem, which is defined as the difficulty of accurately attributing actions in cyber space. While it can be extremely difficult to trace an instance of malware or a network penetration back to its origins with a high degree of certainty, that does not mean “nobody will know it was us.” There are people who know who did it, most notably those who did it. If the world has learned one thing from the actions of Edward Snowden in 2013, it is that secrets about activities in cyber space are very hard to keep, particularly at scale, and especially if they pertain to actions not universally accepted as righteous.
We now have, in the form of WannaCry, further and more direct proof that those "secrets about activities in cyber space", the ones that are "very hard to keep", include malicious code developed for "righteous" purposes. And to those folks who argued that it was okay for the government to sponsor the development of such code because it would always remain under government control I say this: you were wrong. Furthermore, you will forever remain wrong. There is no way that the creators of malware can ever guarantee control over their creations. And we would be well advised to conduct all of our cybersecurity activities with that in mind.

Tuesday, May 16, 2017

WannaCry ransomware: mayhem, money, scenarios, hypotheses, and implications

I think my ESET colleague Michael Aguilar had the best opening for an article on Friday's epic WannaCry ransomware outbreak:
"That escalated quickly! For those of you who did not read any news on Friday (or had your heads in the sand), you need to know that a massive tidal wave of malware just struck Planet Earth, creating gigantic waves in the information security sphere and even bigger waves for the victims." (We Live Security).
The English language version of the message WannaCry presents to victims 

And in the days since Friday you may have been caught up in the waves of breathless WannaCry reporting, tracking, analysis, advice (hashtag #wannacry). All of which was soon followed by the first rounds of finger-pointing and victim-blaming. I am hoping to write more about the latter shortly, but for now I want to speculate about "what the heck happened" as they say in Fargo-land.

Never speculate?

As a rule, cybersecurity professionals refuse to speculate in public about what the heck is going on in any given data breach or malicious code scenario. And by speculate I mean suggest explanations for the events at hand that go beyond the facts on hand. You especially don't want to point fingers at perpetrators unless you have an abundance of corroborating evidence (including some that is not digital, meaning "more than just code analysis").

All that said, it is helpful, and entirely justified IMPO, to consider, in the abstract, possible explanations for an exceptional course of events such as we have just witnessed with WannaCry.

1. It was just a money-making play: WannaCry presented itself as ransomware, the goal of which is to make money by charging people for the key to their data, data which the attacker has encrypted without permission. You can make a lot of money with ransomware, but it works best if you roll out your campaign in a controlled fashion, one that allows you to keep up with payments and key requests and customer service calls (yes, that is a thing; for example, many victims need help figuring out Bitcoin, the preferred method of ransom payment).

2. It was a narcissistic idiot play: Somebody figured they could create better ransomware than anyone else but didn't anticipate all of the implications of adding the NSA's EternalBlue SMB exploit to a standard phishing-based ransomware program (and yes, that link takes any person with an internet connection to the EternalBlue code repository - which IMPO is crazy, but that's another article).

3. It was an intentional mayhem play: WannaCry spread faster than any sane cybercriminal would intentionally spread a ransomware campaign. So maybe the idea was to cause mayhem. When ESET holds its annual Cyber Boot Camp my colleague Cameron will award "mayhem points" to students who come up with a particularly imaginative way of causing chaos on the test network, but we conduct that camp under tightly controlled conditions in a secured facility. Nation states or their surrogates may feel inclined to conduct mayhem in the real world, as a distraction, to send a message, or even to undermine consumer confidence in technologies to which some countries do not yet have access, and so on.

4. It was a revenge play: What better way to show you are pissed off at the US government in general and the NSA in particular than to wreak global cyber-havoc with malware that is openly enabled by code developed by the NSA, leaked code that the NSA refused to barter for?

So what are the implications?

Each of these four scenarios seems plausible to me, but of course I'm going to refuse to speculate as to whether one is more plausible than another. What I will assert is that pondering the scenarios may help investigators consider the full range of possibilities as they seek to identify the perpetrators.

For more background on WannaCry and what you should be doing to protect your IT systems against it, see Michael's original We Live Security article. There is also a follow up article on We Live Security and more to come, so be sure to sign up for the email alerts.

If you are interested in thinking more about what it means for government agencies to handle malware, consider this article and attached peer reviewed paper, presented at the NATO conference on cyber conflict.

Tuesday, February 21, 2017

Getting to know CISOs: Challenging assumptions about closing the cybersecurity skills gap

Importance of 12 attributes to being a successful information security professional (5 point scale)
What CISOs said was most important attribute for success

The cybersecurity skills gap is a serious problem for many countries, a problem that I have been studying for some time. As different public and private entities involved in workforce development wrestle with this problem they may find my research to be of some assistance. It might also be helpful for individuals considering a career in cybersecurity. For example, I took a hard look at what it takes for a person to be successful in cybersecurity roles, particularly the role of Chief Information Security Officer or CISO. 

Another survey ranking of attributes needed 
to be a successful security professional
My research and findings are published in this 68-page document: Getting to know CISOs: Challenging assumptions about closing the cybersecurity skills gap (PDF). This was the dissertation for my master's in security and risk management. The university examiners described it as "a meaningful and accessible, critically analysed report" and "a very pleasing piece of work".

I decided to make this pleasing piece of work available to the public via the Internet so that any value it may provide – to the efforts to close the cybersecurity skills gap and advance the security profession – can be realized sooner, rather than later.

Although the examiners said "elements of this dissertation are potentially publishable as journal articles and/or white papers" I wanted to get the document out there in its entirety, and quickly. Of course, I may pull from, or build on, this work in peer-reviewed articles and white papers down the road, and it has already informed several conference presentations that I have delivered.

(Update, 2020: one article that draws on this study is Advancing Accurate and Objective Cybercrime Metrics, in the Journal of National Security Law and Policy.)

Note that the Getting to Know CISOs document is quite long: almost 25,000 words, with 171 references, and filling 68 pages including screenshots of the survey instrument that I used. The following abstract may help you decide if you want to download the whole thing.

Tuesday, January 24, 2017

The Amazon Echo Dot echo effect: Alexa and the accidental dollhouse orders

Earlier this month I was involved in a technology news story that went a little bit viral, at one point threatening to become a virtual virus, self-propagating across the airwaves. This chain of events was created by voice recognition technology which is now being installed in millions of homes around the world. I have written about the technology on We Live Security, a site to which I urge you to subscribe if you are into all things cybersecurity. This article is the back story, which some may find interesting.

The heart of this particular thing/story was a spoken phrase, a phrase which you should avoid speaking out loud if you are within hearing distance of an Amazon Echo device like the one on the right. The phrase is: "Alexa, order me a dollhouse."

When the morning TV news program on station CW6 in San Diego reported that a young girl had accidentally used her parents' Amazon account to purchase a very expensive dollhouse via Alexa, the news anchor Jim Patton said: "I love the little girl saying ‘Alexa order me a dollhouse.’” As soon as Jim said that, the phones at the TV station started ringing. Viewers were calling to complain that their Alexas had tried to order dollhouses. In other words, a whole lot of people had been awoken to the fact that the current generation of Alexa devices will take orders from anyone: they use voice recognition technology to understand what people say, but not to distinguish who is saying it.
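The underlying problem can be illustrated with a toy sketch (this is purely illustrative logic of my own devising, not Amazon's actual code): a wake-word device acts on what is said, with no step that checks who said it.

```python
# Illustrative toy, not Amazon's implementation: a wake-word listener
# that matches the words spoken but never verifies the speaker.
from typing import Optional

WAKE_WORD = "alexa"

def extract_command(transcript: str) -> Optional[str]:
    """Return the command following the wake word, or None if absent.

    Note the missing step: there is no speaker identification, so the
    owner, a child, or a voice on the TV all trigger the device equally.
    """
    words = transcript.lower().split()
    if words and words[0].strip(",") == WAKE_WORD:
        return " ".join(words[1:])
    return None

print(extract_command("Alexa order me a dollhouse"))  # order me a dollhouse
print(extract_command("What a lovely dollhouse"))     # None
```

In this sketch, the TV anchor's on-air quote works exactly as well as the owner's voice; limiting responses to authorized speakers would mean adding a verification step before acting on the command.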

Later that Thursday morning, CW6 called ESET, the US headquarters of which are in San Diego, and asked if I could comment on this phenomenon. I said yes because I was already doing research on digital devices with voice recognition including, oddly enough, a doll called "My Friend Cayla". Reporter Carlos Correa and I chatted for a while and a number of my comments about Alexa-type devices, but not all, were reported on air that evening. That story, which was a story about a story about Alexa, was rapidly syndicated and picked up around the world. Within a few days, the logo cloud of media sites that were quoting me looked a bit like this:
For the first 24 hours I was not aware that the story was spreading. Then I got a ping on Twitter from Oludotun "Dotun" Adebayo at the BBC. Could he talk to me in the early hours of Monday, his time, late Sunday my time? At that point I felt compelled to dig a little deeper into Alexa, starting with the installation process. At about 11PM on Saturday night I ordered the Amazon Echo Dot you see above. It arrived at 10AM on Sunday morning.

By the time I spoke with Dotun it was clear to me just how easy it was for someone to 'accidentally' buy something with an Amazon Echo. The magic word is not dollhouse, it could be drone or hoverboard; the "magic" is Alexa, which triggers a response from these devices. In the default configuration, the state of the system if you simply take it out of the box and plug it in following the installation instructions is (a) linked to your Prime account, and (b) prepared to place orders with a simple verbal confirmation (using your "1-Click" settings as default payment method and shipping address).

And to be clear, Alexa will offer to ship you products even if you are not talking about buying something. For example, suppose you say, "Alexa, what's the best hoverboard?" The response will be a recitation of the product listing for the top-rated hoverboard currently offered for sale on Amazon, immediately followed by an offer to ship it to you. If you say no, Alexa will then describe another product and offer to ship that. You need to say something like "Alexa, cancel" or "Alexa, stop" to terminate the conversation. It so happens that the dollhouse ordered by the young girl that sparked the story was the second offering, suggesting that she had refused the first offer to send her a dollhouse.

Where does the story go from here? Hopefully, all Echo owners are now familiar with the "microphone off" button that stops the device listening (see picture on right - probably worth clicking before you go out, especially if you tend to leave the TV or radio on). And I'm sure many folks have been changing the default settings, turning off automated ordering or protecting it with a PIN.

At some point Amazon may enable two Echo features that could further reduce problems. First, allow owners of the devices to set a custom trigger word. At least that would enable you to talk about Alexa without waking her up. Second, but harder, would be to limit the voices to which Alexa responds, namely authorized users only. Of course, all of these things would add "friction" to the customer experience, which Amazon may be loath to do.

One question remains in my mind: Did Amazon ever consider that TV broadcasts would trigger the device? The CW6 experience was random, an accident. But if you intentionally broadcast the right words with the right timing you could trigger a mass ordering of products. And while Amazon has said it will accept returns of all 'accidental' orders, you can't use your Echo to cancel purchases. You have to go to the Amazon website or mobile app. Imagine a malicious broadcast that ordered expensive baby carriages, not the easiest things to return to sender. Does Amazon have an algorithm to detect that? Would some percentage of the orders be undetected until they turned up on doorsteps? How much would that cost in terms of dollars and good will?

And of course, buying things is not the only thing these devices can do. They can control thermostats and door locks and all manner of Internet of Things (IoT) devices. Pair that with the malicious broadcast scenario and you have some frightening possibilities. (I have been writing and talking about abuse of the IoT at We Live Security and other places.)

Tuesday, October 25, 2016

A quarter of a century of computer and network security research and writing


Twenty-five years ago this month McGraw-Hill published a book I wrote about computer and network security. And the first thing I tell people about this book is that I did not put the word "complete" in the title! That was the publisher's decision. Because if there was one thing that I learned in the three years during which I researched the book it was this: there will never be a "complete book" of security.

The second thing I tell people is that The Stephen Cobb Complete Book of PC and LAN Security was not a big seller. Indeed, it was a complete flop compared to some of the other books I wrote in the late 1980s and early 1990s. My best seller...

Thursday, October 13, 2016

More about the cybersecurity skills gap

[Update 2/25/17: now available, 68-page dissertation/report on the cybersecurity skills gap and the makings of effective CISOs.]

In October of 2016, I presented a paper titled "Mind This Gap: Criminal Hacking and the Global Cybersecurity Skills Shortage, a Critical Analysis." The venue was Virus Bulletin, a premier event on the global cybersecurity calendar that is particularly popular among malware researchers (for the story of how "VB" achieved this status, see below).

Papers and Slides

When your proposed paper is accepted by the VB review committee, you first have to submit the paper, then deliver the high points in a 30 minute presentation at the conference, which takes place several months later. In this case, the elapsed time between paper and presentation was very helpful because it allowed me to incorporate some of the findings from my postgraduate research into my conference slides, which are available for download here: Mind This Gap.

The VB conference papers are published in an impressive 350 page printed volume. However, the conference organizers have kindly given me permission to share my paper - which is only 8 pages - here on the blog:
As you may know, I've been studying various aspects of the cybersecurity skills gap this year, and I put together a short white paper about the size of the gap:
Later this year I hope to publish the full results of my postgraduate research which looks at some of the assumptions behind efforts to close cybersecurity skills gap.

A note about Virus Bulletin

Monday, September 26, 2016

Email account breached? There's a website for that


Recent news that half a billion Yahoo accounts have been compromised has prompted me to again tell friends about a great website for exploring the effect of security breaches on your online accounts. The site is called: haveibeenpwned and I encourage you to explore it.
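For readers who like to script such checks, the site also documents a Pwned Passwords "range" API built on k-anonymity: you hash the password locally and send only the first five characters of the hash, so the password itself never leaves your machine. Here is a minimal sketch (the endpoint is the one the site documents; note that the breached-email lookups work differently and, last I checked, require an API key):

```python
# Sketch of a Pwned Passwords "range" lookup: hash locally, send only
# a 5-character hash prefix, then scan the returned suffixes locally,
# so the full password (or even its full hash) never leaves your machine.
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and the rest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_from_response(suffix: str, response_text: str) -> int:
    """Parse the API's 'SUFFIX:COUNT' lines; return the count for our suffix."""
    for line in response_text.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breach corpora."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return count_from_response(suffix, resp.read().decode("utf-8"))
```

Calling `pwned_count("password")` makes a single HTTPS request and returns a very large number, unsurprisingly; a result of zero means the password is not in the corpus, which is not the same thing as the password being safe.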

Friday, September 02, 2016

Surveys galore: cybercrime wave, government prodding, and more

One of the biggest problems with fighting cybercrime is knowing how much of it there is. If you or your organization have been a victim of cybercrime - and a recent study said that 80% of organizations have* - then you know there is too much of it. Indeed, another recent survey suggests that 69% of US adults agree their country is experiencing a wave of cybercrime.** This state of affairs has many people thinking that the government is not doing enough to fight cybercrime. How many? About 63% in a recent survey.***

And right there, in that short paragraph, you see how important it is to measure the problems you are trying to solve, whether it's "how big is that gap in the planking that's letting water into the boat?" or "how big is that gap between the number of people we need to fight cybercrime and the current supply?" That latter question has been preoccupying me a lot this year and it's a tough one to answer, but that doesn't mean we shouldn't try. After all, this gap is causing serious problems for many organizations. According to a CSIS/Intel-McAfee survey more than 70% of enterprises had suffered losses that they attributed to lack of skilled security professionals.

Friday, July 15, 2016

Sizing the Cybersecurity Skills Gap: A white paper

Whether you're in charge of the security of your organization’s data and systems, or working in IT security, or looking for a career, it is hard to ignore headlines like this: “One Million Cybersecurity Job Openings in 2016.” The term “cybersecurity skills gap” is now being used as shorthand for the following assertion: there are not enough people with the skills required to meet the cybersecurity needs of organizations. (You will also see “cyber skills gap” as a short form of cybersecurity skills gap, but some people use cyber skills gap for the broader lack of people with skills like coding, networking, and so on, so I often say cybersecurity skills to avoid ambiguity.)

But is this gap real? Is the million missing people claim true? The security industry has a shaky record when it comes to numbers, something I talked about at Virus Bulletin last year in the context of cybercrime (see paper and video of session here). At this year's Virus Bulletin in Denver I will be presenting a paper about efforts to address the cybersecurity skills gap. I am also studying aspects of the problem for my MSc dissertation (see CISO Survey).

In the midst of all this work I accumulated some observations about the size of the cyber skills gap and wrote them up in my spare time, in the form of a paper titled Sizing the Cyber Skills Gap. I hope folks find this useful.

Monday, July 11, 2016

The Effective CISO Survey: A call for participation


SURVEY NOW CLOSED. PLEASE CHECK BACK IN OCTOBER
FOR A REPORT ON THE RESULTS


Are you a CISO? Do you work for or with a CISO?

If you answered yes to either of those questions, please consider taking the 12-minute survey I am conducting for my MSc in Security and Risk Management at the University of Leicester in England. Your participation would be greatly appreciated, and you can get an early copy of the resulting report. To get right to it, the survey starts here: http://cisosurvey.org.
Why am I doing this? To find answers to this question: What do you need to be an effective Chief Information Security Officer? This is the subject of my dissertation, a piece of original research about 15,000 words in length, conducted in Leicester's Criminology Department, pictured below (it may look like Hogwarts, but it ranks among the world's top universities).

University of Leicester, Department of Criminology
(I kid you not, I took this myself on my first visit)
The question of what it takes to be an effective CISO is not merely academic; it is also of immediate practical importance. Right now, understaffed crews of information security folks are struggling to hold the line against criminal activity in cyberspace. And there are not enough people in the education and employment pipeline to fill all of the open defensive positions.

This situation is referred to as the "cyber skills gap" and later this month I will be releasing a white paper in which I examine the claim that there are one million unfilled cybersecurity positions globally (there will be a link on this page). In the US alone the gap could be as big as 200,000. This situation, which has been building for some time, has caused many countries to begin pouring money into cybersecurity education and workforce training. However, some of these funds may be wasted because there has been very little research into what a cybersecurity career is like. What does success look like? What is job satisfaction like? Which personality traits are a good fit for cyber roles? And so on. On the bright side, by studying these questions we may find ways to close the skills gap and make cyberspace a safer place (hmm, I wonder if optimism is an important trait).

I decided to devote my dissertation to one small part of this cyber research gap: what it takes to do the top job, to be the person who manages information security for the organization: the CISO. My research led me to create the Effective CISO survey, which is carried out through SurveyMonkey but accessed via a website I created at cisosurvey.org, all of which has passed the university's ethics review process.

If you want further verification, or have any questions about this project at all, please email my university email account, which is stcnn at student.le.ac.uk, where nn is a two-digit number, the one you get when you multiply four by itself. The address is also displayed beneath the university logo at the top of the page.

So, if this survey subject is of interest to you, and you would like to get an early look at my results, and you have about 12 minutes, please consider participating at cisosurvey.org.

THANK YOU!

Thursday, June 16, 2016

20 years of CISSP, ELOFANTs and other cybersecurity acronyms

This article is about some things I don't know, and some other things that you might not know.

For example, I don't know who was the first person to pass the exam to become a Certified Information Systems Security Professional or CISSP (pronounced sisp). The CISSP website says the certification program was launched in 1994.

(That means if someone tells you they've been a CISSP for more than 22 years, and the current year is 2016, then they may be fibbing.)

I became a CISSP in May of 1996, something that I wrote about recently in an article on We Live Security: What the CISSP? 20 years as a Certified Information Systems Security Professional. The CISSP qualification has served me very well over the last 20 years, so I felt obliged to address some of the reasons some people criticize it, and did so in that article. Those criticisms notwithstanding, I would encourage anyone who meets the experience requirements for the CISSP to apply for, pass the test for, and then maintain CISSP certification (you need to earn continuing education credits every year to stay certified).

The place to start learning about CISSP is the website of the issuing body, the International Information Systems Security Certification Consortium. This non-profit organization is known as (ISC)2, which is pronounced “I-S-C-squared” because the name contains two each of those three letters, which is cute but sometimes a pain for typographers and search engines.

Another cybersecurity acronym that's been on my mind lately is CISO, as in Chief Information Security Officer, a title often used to designate the person most directly responsible for the organization's information system security. I am studying CISOs as part of my studies at the University of Leicester. I will soon be launching a survey on the subject (that I will link here when it goes online).

Of course, a lot of CISOs have certifications from (ISC)2, and that reminds me of something else I don't know, the answer to an interesting question, one that is not asked during the six-hour CISSP exam: Is (ISC)2 an acronym?

Seriously, I don't know the answer, but speaking of acronyms and unknowns, I coined an acronym for an unknown a few weeks ago: ELOFANT. Those letters stand for Employee Left Or Fired, Access Not Terminated. (Those letters also account for the image at the top of the article.) I wrote about ELOFANTs here.

The point of coining this acronym was to draw attention to the fact that one of the biggest risks to company networks and data is people who have departed the organization but still have access to some or all of its data: ELOFANTs. Here are a few data points to back that up:
ELOFANTs are not a new problem, but these days they may be a bigger problem than in the past thanks to the proliferation of apps that companies use, particularly cloud-based sharing and collaboration apps, credentials for which might not be centrally tracked the way corporate network access usually is. So let me leave you with a couple of questions to which your organization's CISO should know the answers: how do you determine what access to the organization's data a departing employee has, and how do you revoke it?
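Those two questions lend themselves to a simple recurring audit: reconcile the HR roster against the account list of every app the company uses. Here is an illustrative sketch of the reconciliation step - the data is entirely hypothetical, and in practice you would pull the roster from HR and the account lists from each app's admin console or API:

```python
def find_elofants(current_staff, app_accounts):
    """Flag accounts in each app that match no current employee --
    candidate ELOFANTs (Employee Left Or Fired, Access Not Terminated)."""
    staff = {email.lower() for email in current_staff}
    return {
        app: sorted(acct for acct in accounts if acct.lower() not in staff)
        for app, accounts in app_accounts.items()
    }

# Hypothetical example: HR roster vs. per-app account lists.
roster = ["ana@example.com", "bob@example.com"]
apps = {
    "crm": ["ana@example.com", "carl@example.com"],  # carl has left
    "file-share": ["bob@example.com"],
}
print(find_elofants(roster, apps))
# {'crm': ['carl@example.com'], 'file-share': []}
```

The real work, of course, is revocation, which each app handles differently; the value of a sketch like this is simply that it forces you to enumerate every place credentials live.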

Friday, May 13, 2016

Jackware: coming soon to a car or truck near you?

Jackware - when your car is taken off you by software, illustrated by vintage photo of a car taking off
As 2016 rolls on, look for headlines declaring it to be "The Year of Ransomware!" 

But what kind of year will 2017 be? Will it be "The Year of DDoS" or some other form of "cyber-badness" (kudos to my ESET colleague Cameron Camp for coining that term)? Right now I'm worried that, as the years roll on, we could see "The Year of Jackware" making headlines.

What is jackware?

Jackware is malicious software that seeks to take control of a device whose primary purpose is not data processing or communications, for example: your car. Think of jackware as a specialized form of ransomware. With ransomware, the malicious code encrypts your documents and demands a ransom to unlock them. The goal of jackware would be to lock up a car or other piece of equipment until you pay up. Fortunately, and I stress this: jackware is currently, pretty much, as far as I know, theoretical, not yet "in the wild".

Update: Jackware in the news...

Unfortunately, based on past form, I don't have much faith in the world's ability to stop jackware being developed and deployed. So far the world has failed abysmally when it comes to cybercrime deterrence. There has been a collective international failure to head off the establishment of a thriving criminal infrastructure in cyberspace that now threatens every innovation in digital technology you can think of, from telemedicine to drones to big data to self-driving cars.

Consider where we are right now, mid-May, 2016. Ransomware is running rampant. Hundreds of thousands of people have already paid money to criminals to get back the use of their own files or devices. And all the signs are that ransomware will continue to grow in scale and scope. Early ransomware variants failed to encrypt shadow copies and connected backup drives, so some victims could recover fairly easily. Now we're seeing ransomware that encrypts or deletes shadow copies and hunts down connected backup drives to encrypt them as well.

At first, criminals deploying ransomware relied on victims clicking links in emails, opening attachments, or visiting booby-trapped websites. Now we're also seeing bad guys using hacking techniques like SQL injection to get into a targeted organization's network, then strategically deploy the ransomware, all the way to servers (many of which aren't running anti-malware).

The growing impact of ransomware would also seem to be reflected in people's reading habits. Back in 2013, one of my colleagues at ESET, Lysa Myers, wrote an article about dealing with the ransomware scourge. For the first few weeks it got 600-700 views a week. Then things went quiet. Now it is clocking 4,000-5,000 hits a week and the war stories from victims keep rolling in.

The point at which automotive malware becomes serious jackware will be the conjunction of self-driving cars and vehicle-to-vehicle networks
But how do we get from ransomware to jackware? Well, it certainly seems like a logical progression. When I told Canadian automotive journalist David Booth about ransomware on laptops and servers, I could see him mentally write the headline: Ransomware is the future of car theft. I knew David would see where this could be headed. He's written about car hacking before, going deeper into the subject than most of the automotive press.

The more I think about this technology myself, the more I think that the point at which automotive malware becomes serious jackware, and seriously dangerous, will be the conjunction of self-driving cars and vehicle-to-vehicle networks. Want a nightmare scenario? You're in a self-driving car. There's a drive-by infection, silent but effective. Suddenly the doors are locked with you inside. You're being driven to a destination not of your choosing. A voice comes on the in-car audio and calmly informs you of how many Bitcoins it's going to take to get you out of this mess.

Why give the bad guys ideas?

Let's be clear, I didn't coin the term jackware to cause alarm. There are many ways in which automobile companies could prevent this nightmare scenario. And I certainly didn't write this article to give the bad guys ideas for new crimes. The reality is that they are quite capable of thinking up something like this for themselves.

Can I be sure there's not some criminal out there who's going to read this and go tell his felonious friends? No, but if that happens it's quite probable that his friends will sneer at him because they know someone who's already done a feasibility study of something jackware-like (yes, the cybercrime underworld does operate a lot like a fully evolved corporate organism). We are not seeing jackware yet because the time's not right. After all, there's no need to switch from plain old ransomware as long as people keep paying up.

Right now, automotive jackware is still under "future projects" on the cybercrime whiteboards and prison napkins. Technically it's still a stretch today, and tomorrow's cars could be even better protected, particularly if FCA has learned from the Jeep hack and VW has learned from the emissions test cheating scandal and GM's bug bounty program gets a chance to work.

Unfortunately, there's this haunting refrain I can't quite get out of my head, something about "when will they ever learn..."

Monday, May 09, 2016

White paper on US data privacy law and legislation

Recently I put together a 15 page white paper titled Data privacy and data protection: US law and legislation. Among the 80 or so references at the end of the paper you will find links to a lot of the federal privacy laws, and some of the articles I cited.

Back in 2002, when I published a book on data privacy, I asked the cat to "look shy" and she struck this pose (honest!)
I figured this would be a handy resource for folks looking to learn more about how data privacy works in the US. Of course, some would say data privacy doesn't work in the US, and the white paper is written with that opinion in mind. Frankly, the whole subject is pretty complex and in writing this paper I found out I had been wrong, or at least, not quite right, about quite a few things.

Knowing how data privacy protection has evolved in the US so far should help inform its further progression. Clearly, data protection will continue to evolve in the EU and US with the arrival of the General Data Protection Regulation (GDPR), also known as the European Data Protection Regulation. (The GDPR is not discussed in the white paper – the subject probably merits one of its own – but I have been clipping news on GDPR here and tweeting it here.)

For more on the white paper, which was made possible by ESET, visit the We Live Security website, and be sure to sign up for regular news on all manner of data privacy and cybersecurity topics by email.

If a white paper is too much and you're just getting started in your data privacy reading, here are some good places to start:


Friday, March 11, 2016

Infowar and Cybersecurity: Pitfalls, history, language, and lessons still being learned

I recently registered to attend a very special event in the cybersecurity calendar: InfoWarCon. The organizers of this unique gathering ask all participants to write a short blurb about what they bring to the proceedings. You can read what I wrote later on in this post, but first, some background.

The Information Warfare Conference

An institution created by my good friend Winn Schwartau, InfoWarCon has been around for more than 20 years. Even if you haven't heard of Winn, I bet you've heard the phrase "Electronic Pearl Harbor". Winn was the first person to use that term, as recorded in his 1991 testimony to Congress about the offensive use and abuse of information technology. That was five years before CIA Director John Deutch made national headlines using the term, also in congressional testimony (you may recall President Clinton issuing a presidential pardon to Deutch after he was found to have kept classified material on unsecured home computers).

The first InfoWarCon I attended was the one held at the Stouffer Hotel in Arlington, Virginia, in September of 1995. In those days, Chey and I were both working for the precursor to ICSA Labs and TruSecure, then known as NCSA, a sponsor of InfoWarCon 95. The agenda for that event makes very interesting reading. It addressed a raft of issues that are still red hot today, from personal privacy to open source intel, from the ethics of hacking to military "uses" of information technology in conflicts.

Winn was passionate that there should be open and informed debate about such things because he could see that the "information society" would need to come to grips with their implications. Bear in mind that a lot of the darker aspects of information technology were still being eased out of the shadows in the 1990s. I remember naively phoning GCHQ in 1990, back when I was writing my first computer security book, and asking for information about TEMPEST. The response? "Never heard of it; and what did you say your name was?" When I first met Winn he was presenting a session on a couple of other acronyms, EMP bombs and HERF guns. That was at Virus Bulletin 1994, one of the longest running international IT security conferences (my session was a lot less interesting, something about Windows NT as I recall).

The InfoWarCon speaker lineup in 1995 included a British Major General, several senior French, Swedish, and US military folks, Dr. Mich Kabay - chief architect of one of America's first graduate level information assurance programs, and Scott Charney, now Corporate Vice President for Microsoft's Trustworthy Computing. Many of those connections remain active. For example, the Swedish Defence University is involved in this year's InfoWarCon, via its Center for Asymmetric Threat Studies (CATS). Recent InfoWarCons have eschewed the earlier large-scale public conference format in favor of a more intimate event - private venue, limited attendance, no media - more conducive to frank exchanges of perspectives and opinions.

For Chey and me, the trip to InfoWarCon16 is personal as well as professional - after all, we have known the Schwartaus for more than two decades, somehow managing to meet up in multiple locations over the years, from DC to Florida, Las Vegas to Vancouver, not to mention Moscow. So when I got to the registration page for InfoWarCon16, which asks all prospective attendees and invitees to submit a short “What I Bring to InfowarCon” blurb, my first thought was "I don't need no stinking blurb!" But that soon passed as I relished an excuse to convey something of my background in a new, and hopefully interesting, way. Here is what I wrote...

A Student of Information Technology Pitfalls

Mining coal in the Midlands, 1944 © IWM
I was born in 1952, in the English county of Warwickshire, in a small terraced house heated by fireplaces that burned coal. That coal was mined from one of 20 pits under our county, some of which were more than a century old by then. Between 1850 and 1990, pitfalls in mines in the Midlands killed hundreds of men as they toiled to fuel the industrial revolution. Across Britain during that time period coal pits claimed over a hundred and fifty thousand miners, but theirs were not the only lives taken by fossil-fueled industrial technology. Consider this: a few months after I was born, 12,000 Londoners died from a single air pollution incident, of which burning coal was a primary cause (the Great Smog of 52).

And so it was that, many years before computers came into my life, I was well aware technology brings pitfalls as well as benefits. Like many of the swords displayed in Warwick castle, originally built by William the Conqueror in the eleventh century, technology is double-edged. This is certainly true of information technology. It can be good for growth, good for defense, but also tempting for offense.

Since I started researching my first computer security book in the late 1980s I have thought long and hard about such things, sometimes in ways that others have not. I have listened closely to the language invented to articulate the uses and abuses of this technology. For example, in 2014, I presented a paper at CyCon titled “Malware is called malicious for a reason: the risks of weaponizing code” in which I introduced the term ‘righteous malware’ (IEEE CFP1426N-PRT).

 In 2015, I analyzed the problem of measuring the scale and impact of cybercrime in the peer-reviewed Virus Bulletin paper: “Sizing cybercrime: incidents and accidents, hints and allegations”. The serious shortcomings of both public and private sector efforts to address this issue were articulated and documented in detail. I am currently doing post-graduate research at the University of Leicester seeking to identify key traits of effective cybersecurity professionals. But more importantly, for the past 25 years I have engaged myself as much as possible - resources and life events permitting - in the ongoing conversation about how best to reap the benefits of information technology without suffering from what have been called its downsides, its pitfalls.

Speaking of which, it is relevant to note, in the context of InfoWarCon, that the word pitfall did not originate in coal mines, but on the battlefield. The Oxford English Dictionary identifies 1325 as the first year it was used in written English. The meaning? “Unfavourable terrain in which an army may be surrounded and captured.” To me, that doesn't sound a whole lot different from some parts of cyberspace.

Tuesday, February 02, 2016

Some cybersecurity-related videos

Here are some videos of projects I have been involved with over the past 12 months or so, starting with the Cyber Boot Camp, held in June of 2015. ESET and Securing Our eCity hosted the top eight teams in the San Diego Mayors' Cyber Cup competition for five days of hands-on cybersecurity education on the campus of National University. My colleague Cameron Camp led the "war room" exercises. So great to see so many young women involved!



Cybersecurity, cybercrime, and the need for more women and minorities in technology leadership were the topics of this TEDx talk I delivered in San Diego last October.



In December of last year I spoke to a meeting of the Sage Group, an association for entrepreneurs and executives in San Diego. While I covered some of the same topics as the TEDx talk, I also discussed ESET and the origins of my interest in cybersecurity.



Several well-attended webinars were recorded over the last year or so.