Saturday, September 09, 2017

Steps to take now that the Equifax breach has affected almost half of adults in the US

The Equifax security breach, in which "identity theft enabling data" was stolen from a company that sells identity theft protection products, may well surpass the Target breach as one of the most impactful ever, at least from a consumer perspective.

As my ESET colleague Lysa Myers has noted, this breach appears to have occurred between mid-May and July. It was discovered by Equifax on July 29 and the scale is staggering: 143 million people affected, almost half of all adults in the US!

For those wondering how to identify or mitigate problems caused by this breach, Lysa has some good advice. Unfortunately, the response from Equifax has not been exemplary and there are concerns that it might be trying to restrict consumer rights of redress as part of its "help" process (see this Atlantic article and the update below).

For those wondering how such a thing could happen, I suggest "stay tuned" to your favorite cybersecurity news feeds. We have some information already (Equifax may have fallen behind in applying security updates to its Internet-facing Web applications). However, I am sure there will be more details to come.

In the meantime, I leave you with this weird fact: A share of Equifax (EFX) stock was worth about $143 before the breach, which affected 143 million people. It dropped dramatically after news of the breach broke, closing on Friday at $123. That's a drop of more than 13%. Yet all the indications are that preventing the breach could have been as easy as, you guessed it: 1-2-3.
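For anyone who wants to check the arithmetic behind that coincidence, the drop works out like this (using the rounded share prices quoted above):

```python
# Equifax (EFX) share price: roughly $143 before the breach news,
# $123 at Friday's close (figures as quoted above).
before, after = 143.0, 123.0

# Percentage decline from the pre-breach price
drop_pct = (before - after) / before * 100
print(f"Drop: {drop_pct:.1f}%")  # Drop: 14.0%
```

So the decline is just shy of 14%, comfortably "more than 13%."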

Update: Thanks to Brian Krebs for flagging the change that Equifax made to its breach alert page. This makes it clear that "the arbitration clause and class action waiver included in the Equifax and TrustedID Premier terms of use does not apply to this cybersecurity incident."

I am providing the address of the breach alert page below, but stress that you should use it at your own risk. The fact that I feel compelled to say that is a reflection of how badly, in my opinion, Equifax has been handling the breach response so far:

Sunday, July 09, 2017

US-Russia cybersecurity talks: right script, wrong actors?

Should the US and Russia hold talks on cybersecurity? A lot of people are shouting "No!" and I think I understand why, but in my opinion that's the wrong answer, albeit for the right reasons. Just consider these two propositions:

A. The US and Russia should, bilaterally and globally, seek ways to deter cybercrime and reduce cyberconflict.

B. President Trump and President Putin should, bilaterally and globally, seek ways to deter cybercrime and reduce cyber-conflict.

I would argue that A is not only a good idea but has an aura of historical inevitability, while B is a very disquieting prospect. Why? Because I don't think the Trump administration understands how diplomatic negotiation works, not to mention the fact that Trump himself has openly disparaged many of the very people whose expertise and cooperation is needed to protect US interests during such negotiations.

In other words, I believe the US and Russia, and every other country, must work together to deter cybercrime and reduce cyber-conflict. That is the right script. That is the direction the world will take, if not now, then at some point in the future. But Trump and Putin are the wrong actors for this script; both lack the levels of credibility and legitimacy required to make meaningful progress.

"Good luck with that"

Of course, I am accustomed to hearing "Good luck with that" and "Ain't gonna happen" when I say to people "international cooperation and global treaties are the only way to make a serious dent in cybercrime and cyberconflict." But history tells me I am right, even if it doesn't tell me how old I will be when that eventually proves to be true.

Consider the 27 treaties listed on the website of the Arms Control Association. They all started with someone putting forward objectives to which a lot of people said "good luck with that." And they all took a long time to realize their objectives. Some are still unattained. But I don't think anyone believes the world would be a better place without these treaties (I could be wrong, so tweet me @zcobb if you disagree).

To be clear, I am not equating nuclear and chemical weapons with cyber-weapons. The horrific effects of nuclear and chemical weapons are categorically different from the effects we have seen so far from malicious code. But weaponized code has the potential to cause massive, country-wide disruption, and be an enabler of, or catalyst for, even greater impacts.

While agreements to restrict the use of weapons technology always start out as a long shot, so to speak, there are always grounds for hope. My confidence in this assertion is based on my own experience. I was just a young boy when, in November 1957, an article by the British writer J. B. Priestley titled "Britain and the Nuclear Bombs" made the case for unilateral nuclear disarmament.

Priestley wrote: "now that Britain has told the world she has the H-bomb she should announce as early as possible that she has done with it, that she proposes to reject, in all circumstances, nuclear warfare."

Bertrand Russell leads an anti-nuclear march in London, Feb 1961

Despite many voices declaiming "Good luck with that," the article helped inspire concerned individuals to start the Campaign for Nuclear Disarmament (CND): "an organization that advocates unilateral nuclear disarmament by the United Kingdom, international nuclear disarmament and tighter international arms regulation through agreements such as the Nuclear Non-Proliferation Treaty." (Wikipedia)

In less than six months, CND had joined with another pacifist group in a "ban the bomb" march. This was not an afternoon walk in the park; this was a serious, four-day, 52-mile march from London to the Atomic Weapons Research Establishment at Aldermaston. This became an annual protest joined by tens of thousands of people carrying the peace sign, a symbol that was created for the CND movement (in 1961, my mum and I joined about 150,000 other people for a day's worth of marching).

Why bother?

Did the CND and the Aldermaston March make a difference? I don't know. I do know that after the Cuban missile crisis in 1962, political and diplomatic efforts to constrain the spread and development of nuclear weapons accelerated. In 1963, a treaty was signed by the US, the Soviet Union, and the UK, known as the Partial Test Ban Treaty (PTBT); its full name is the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water.

Sure, the PTBT was not a comprehensive treaty and it only banned testing, not development or production; but international agreements have progressed dramatically since then. Sure, there are still nuclear and chemical weapons out there; but there is an established international regime for limiting, monitoring, and constraining their development and deployment. Personally, I am glad that the early proponents of a treaty-based response to those weapons were not discouraged by people who were - quite understandably at the time - skeptical that any progress could ever be made.

Today, it seems clear to me that addressing the problems of international cybercrime, cyberconflict, and government deployment of weaponized code requires international negotiation, even between governments who have profoundly different politics. After all, the capitalist imperialists of the US negotiated with the godless communists of the Soviet Union to reach numerous weaponry-related agreements, even before the Cold War ended.

The problem right now is that the US administration currently lacks the diplomatic chops for this type of negotiation, precisely because it is headed by someone who does not understand diplomacy. The president of the United States needs to understand this:
you can publicly condemn Russia for meddling in our elections while at the same time negotiate norms for future behavior, but doing one without the other will do the world no good at all.
In the hopes that there are some folks within the current administration who get this, I have provided - in my role as an eternal optimist - a handy starter list of reading materials. A nine-point bullet list of the basic argument should follow shortly (suitable for presidential briefings).

Resources, opinion, and discussion:

Saturday, May 27, 2017

Malware prophecy realized: WannaCry, NSA EternalBlue, CIA Athena, and more

You probably noticed massive news coverage of the recent outbreak of malicious code called WannaCryptor, WannaCry, Wcry, and other variations on that theme. In fact, WannaCry itself was a variation on a theme, the ransomware theme. WannaCry made so much noise because it added a powerful worm capability to the basic theme of secretly encrypting your files and holding them for ransom. Plausible estimates of the cost of this malware outbreak to organizations and individuals range from $1 billion to $4 billion.

All of this was made possible by something called the EternalBlue SMB exploit: computer code developed, at the expense of US taxpayers, by the National Security Agency (NSA). Now that copies of this malicious code have been delivered to hundreds of thousands of information system operators in more than 100 countries around the world, it might be wise to ask: "how did this happen?"

How did this happen?

Unfortunately, I am not privy to any of the technical details about how this happened beyond those already published by my colleagues in the cybersecurity profession (there's a good collection of information on We Live Security, a site maintained by my employer, ESET). However, in practical terms I do know how this happened, and it goes like this:
  1. The NSA helps defend the US by gathering sensitive information. One way to do that is to install software on computers without the knowledge or permission of their owners. 
  2. Installing software on computers without the knowledge or permission of their owners has always been problematic, not least because it can have unexpected consequences as well as serve numerous criminal purposes, like stealing or destroying or ransoming information.
  3. Back in the 1980s there were numerous attempts to create self-replicating programs (computer worms and viruses) that inserted themselves on multiple computers without permission. Many of these caused damage and disruption even though that was not the intent of their designers.
  4. Programs designed to help computer owners block unauthorized code were soon developed. These programs were generically referred to as antivirus software although unauthorized code was eventually dubbed malware, short for malicious software.
  5. The term malware reflects the overwhelming consensus among people who have spent time trying to keep unauthorized code off systems that "the good virus" does not exist. In other words, unauthorized code has no redeeming qualities and all system owners have a right to protect against it.
  6. Despite this consensus among experts, which has grown even stronger in recent years due to the industrial scale at which malware is now exploited by criminals, the NSA persevered with its secret efforts to install software on computers without the knowledge or permission of their owners. 
  7. Because the folks developing such code thought of it as a good thing, the term "righteous malware" was coined (definition: software deployed with intent to perform an unauthorized process that will impact the confidentiality, integrity, or availability of an information system to the advantage of a party to a conflict or supporter of a cause).
  8. Eventually, folks who had warned that righteous malware could not be kept secret forever were proven correct: a whole lot of it was leaked to the public, including EternalBlue.
  9. Criminals were quick to employ the "leaked" NSA code to increase the speed at which their malicious code spread, for example using EternalBlue to help deliver cryptocurrency mining malware as well as ransomware.
  10. Currently there are numerous other potentially dangerous taxpayer-funded malicious code exploits in the hands of US government agencies, including the CIA (for example, its Athena malware is capable of hijacking all versions of the Microsoft Windows operating system, from XP to Windows 10).
So that's how US government funded malware ends up messing up computers all around the world. There's nothing magical or mysterious about it, just a series of chancy decisions that were consciously made in spite of warnings that this could be the outcome.

Warning signs

One such warning was the paper about "righteous malware" that I presented to the 6th International Conference on Cyber Conflict (CyCon) organized by the NATO Cooperative Cyber Defence Center of Excellence or CCDCoE in Estonia. You can download the paper here. My co-author on the paper was Andrew Lee, CEO of ESET North America, and we have both spent time in the trenches fighting malicious code. We were well aware that antivirus researchers had made repeated public warnings about the risks of creating and deploying "good" malware.

One comprehensive warning was published back in 1994, by Vesselin Bontchev, then a research associate at the Virus Test Center of the University of Hamburg. His article titled "Are 'Good' Computer Viruses Still a Bad Idea?" contained a handy taxonomy of reasons why good viruses are a bad idea, based on input from numerous AV experts. Andrew and I put these into a handy table in our paper:

Technical Reasons
  - Lack of Control: spread cannot be controlled, unpredictable results
  - Recognition Difficulty: hard to allow good viruses while denying bad
  - Resource Wasting: unintended consequences (typified by the "Morris Worm")
  - Bug Containment: difficulty of fixing bugs in code once released
  - Compatibility Problems: may not run when needed, or cause damage when run
  - Risks of self-replicating code over conventional alternatives

Ethical and Legal Reasons
  - Unauthorized Data Modification: unauthorized system access or data changes illegal or immoral
  - Copyright and Ownership Problems: could impair support or violate copyright of regular programs
  - Possible Misuse: code could be used by persons with malicious intent; sets a bad example for persons with inferior skills, morals

Psychological Reasons
  - Trust Problems: potential to undermine user trust in systems
  - Negative Common Meaning: anything called a virus is doomed to be deemed bad

We derived a new table from this, one that accounted for both self-replicating code and inserted code, such as trojans. Our table presented the "righteous malware" problem as a series of questions that should be answered before such code is deployed:

  - Can you control the actions of the code in all environments it may infect?
  - Can you guarantee that the code will complete its mission before detection?
  - Can you guarantee that the code is deniable or claimable, as needed?
  - Will the code be illegal in any jurisdictions in which it is deployed?
  - Will deployment of the code violate treaties, codes, and other international norms?
  - Can you guarantee that none of the code, or its techniques, strategies, or design principles, will be copied by adversaries, competing interests, or criminals?
  - Can you guarantee that deployment of the code, including knowledge of the deployment, will have no harmful effects on the trust that your citizens place in their government and institutions, including electronic commerce?

Clearly, the focus of our paper was the risks of deploying righteous malware, but many of those same risks attach to the mere development of righteous malware. Consider one of the arguments we addressed from the "righteous malware" camp: "Don't worry, because if anything goes wrong nobody will know it was us that wrote and/or released the malware". Here is our response from that 2014 paper:
This assertion reflects a common misunderstanding of the attribution problem, which is defined as the difficulty of accurately attributing actions in cyber space. While it can be extremely difficult to trace an instance of malware or a network penetration back to its origins with a high degree of certainty, that does not mean “nobody will know it was us.” There are people who know who did it, most notably those who did it. If the world has learned one thing from the actions of Edward Snowden in 2013, it is that secrets about activities in cyber space are very hard to keep, particularly at scale, and especially if they pertain to actions not universally accepted as righteous.
We now have, in the form of WannaCry, further and more direct proof that those "secrets about activities in cyber space", the ones that are "very hard to keep", include malicious code developed for "righteous" purposes. And to those folks who argued that it was okay for the government to sponsor the development of such code because it would always remain under government control I say this: you were wrong. Furthermore, you will forever remain wrong. There is no way that the creators of malware can ever guarantee control over their creations. And we would be well advised to conduct all of our cybersecurity activities with that in mind.

Monday, May 15, 2017

WannaCry ransomware: mayhem, money, scenarios, hypotheses, and implications

I think my ESET colleague Michael Aguilar had the best opening for an article on Friday's epic WannaCry ransomware outbreak:
"That escalated quickly! For those of you who did not read any news on Friday (or had your heads in the sand), you need to know that a massive tidal wave of malware just struck Planet Earth, creating gigantic waves in the information security sphere and even bigger waves for the victims." (We Live Security).
The English language version of the message WannaCry presents to victims 

And in the days since Friday you may have been caught up in the waves of breathless WannaCry reporting, tracking, analysis, advice (hashtag #wannacry). All of which was soon followed by the first rounds of finger-pointing and victim-blaming. I am hoping to write more about the latter shortly, but for now I want to speculate about "what the heck happened" as they say in Fargo-land.

Never speculate?

As a rule, cybersecurity professionals refuse to speculate in public about what the heck is going on in any given data breach or malicious code scenario. And by speculate I mean suggest explanations for the events at hand that go beyond the facts at hand. You especially don't want to point fingers at perpetrators unless you have an abundance of corroborating evidence (including some that is not digital, meaning "more than just code analysis").

All that said, it is helpful, and entirely justified IMPO, to consider, in the abstract, possible explanations for an exceptional course of events such as we have just witnessed with WannaCry.

1. It was just a money-making play: WannaCry presented itself as ransomware, the goal of which is to make money by charging people for the key to their data, which you have just encrypted without their permission. You can make a lot of money with ransomware, but it works best if you roll out your campaign in a controlled fashion, one that allows you to keep up with payments and key requests and customer service calls (yes, that is a thing; for example, many victims need help figuring out Bitcoin, the preferred method of ransom payment).

2. It was a narcissistic idiot play: Somebody figured they could create better ransomware than anyone else but didn't anticipate all of the implications of adding the NSA's EternalBlue SMB exploit to a standard phishing-based ransomware program (and yes, that link takes any person with an internet connection to the EternalBlue code repository - which IMPO is crazy, but that's another article).

3. It was an intentional mayhem play: WannaCry spread faster than any sane cybercriminal would intentionally spread a ransomware campaign. So maybe the idea was to cause mayhem. When ESET holds its annual Cyber Boot Camp my colleague Cameron will award "mayhem points" to students who come up with a particularly imaginative way of causing chaos on the test network, but we conduct that camp under tightly controlled conditions in a secured facility. Nation states or their surrogates may feel inclined to conduct mayhem in the real world, as a distraction, to send a message, or even to undermine consumer confidence in technologies to which some countries do not yet have access, and so on.

4. It was a revenge play: What better way to show you are pissed off at the US government in general and the NSA in particular than to wreak global cyber-havoc with malware that is openly enabled by code developed by the NSA, leaked code that the NSA refused to barter for.

So what are the implications?

Each of these four scenarios seems plausible to me, but of course I'm going to refuse to speculate as to whether one is more plausible than another. What I will assert is that pondering the scenarios may help investigators consider the full range of possibilities as they seek to identify the perpetrators.

For more background on WannaCry and what you should be doing to protect your IT systems against it, see Michael's original We Live Security article. There is also a follow up article on We Live Security and more to come, so be sure to sign up for the email alerts.

If you are interested in thinking more about what it means for government agencies to handle malware, consider this article and attached peer reviewed paper, presented at the NATO conference on cyber conflict.

Monday, February 20, 2017

Getting to know CISOs: Challenging assumptions about closing the cybersecurity skills gap

Last year I wrote a dissertation in partial fulfillment of the requirements for my Master of Science in Security and Risk Management in the Department of Criminology at the University of Leicester in England. The title was: Getting to know CISOs: Challenging assumptions about closing the cybersecurity skills gap. The dissertation was submitted for examination in September of 2016 and in November it was approved by the examiners (who described it as ‘a meaningful and accessible, critically analysed report’ and also ‘a very pleasing piece of work’). I graduated in January, 2017.

That is when I decided to make the dissertation available to the public via the Internet and you can download it from here (PDF file). My primary motive for doing this is to enable any value that my work may provide – to the efforts to close the cybersecurity skills gap and advance the security profession – to be realized sooner, rather than later. After all, cybersecurity is a rapidly evolving field and many experts agree that the need to narrow the skills gap is urgent. Although the examiners said ‘elements of this dissertation are potentially publishable as journal articles and/or white papers’ I wanted to get the document out there in its entirety, and immediately. Of course, I may pull from, or build on, this work in peer-reviewed articles and white papers down the road, and it has informed several conference presentations that I have already delivered.

I should warn you that the dissertation is quite long – almost 25,000 words, although that count includes the 171 references. It runs to 68 pages but that includes screenshots of the survey instrument I used. Here is the Abstract to help you decide if you want to download the whole thing.

Tuesday, January 24, 2017

The Amazon Echo Dot echo effect: Alexa and the accidental dollhouse orders

Earlier this month I was involved in a technology news story that went a little bit viral, at one point threatening to become a virtual virus, self-propagating across the airwaves. This chain of events was created by voice recognition technology which is now being installed in millions of homes around the world. I have written about the technology on We Live Security, a site to which I urge you to subscribe if you are into all things cybersecurity. This article is the back story, which some may find interesting.

The heart of this particular thing/story was a spoken phrase, a phrase which you should avoid speaking out loud if you are within hearing distance of an Amazon Echo device like the one on the right. The phrase is: "Alexa, order me a dollhouse."

When the morning TV news program on station CW6 in San Diego reported that a young girl had accidentally used her parents' Amazon account to purchase a very expensive dollhouse via Alexa, the news anchor Jim Patton said: "I love the little girl saying ‘Alexa order me a dollhouse.’” As soon as Jim said that, the phones at the TV station started ringing. Viewers were calling to complain that their Alexas had tried to order dollhouses. In other words, a whole lot of people had been awoken to the fact that the current generation of Alexa devices will take orders from anyone: they use voice recognition technology to understand what people say, but not to distinguish who is saying it.

Later that Thursday morning, CW6 called ESET, the US headquarters of which are in San Diego, and asked if I could comment on this phenomenon. I said yes because I was already doing research on digital devices with voice recognition including, oddly enough, a doll called "My Friend Cayla". Reporter Carlos Correa and I chatted for a while, and a number of my comments about Alexa-type devices, though not all, were reported on air that evening. That story, which was a story about a story about Alexa, was rapidly syndicated and picked up around the world. Within a few days, the logo cloud of media sites that were quoting me looked a bit like this:
For the first 24 hours I was not aware that the story was spreading. Then I got a ping on Twitter from Oludotun "Dotun" Adebayo at the BBC. Could he talk to me in the early hours of Monday, his time, late Sunday my time? At that point I felt compelled to dig a little deeper into Alexa, starting with the installation process. At about 11PM on Saturday night I ordered the Amazon Echo Dot you see above. It arrived at 10AM on Sunday morning.

By the time I spoke with Dotun it was clear to me just how easy it was for someone to 'accidentally' buy something with an Amazon Echo. The magic word is not dollhouse, it could be drone or hoverboard; the "magic" is Alexa, which triggers a response from these devices. In the default configuration, the state of the system if you simply take it out of the box and plug it in following the installation instructions, is a. linked to your Prime account, and b. prepared to place orders with a simple verbal confirmation (using your "1-Click" settings as default payment method and shipping address).

And to be clear, Alexa will offer to ship you products even if you are not talking about buying something. For example, suppose you say, "Alexa, what's the best hoverboard?" The response will be a recitation of the product listing for the top-rated hoverboard currently offered for sale on Amazon, immediately followed by an offer to ship it to you. If you say no, Alexa will then describe another product and offer to ship that. You need to say something like "Alexa, cancel" or "Alexa, stop" to terminate the conversation. It so happens that the dollhouse ordered by the young girl who sparked the story was the second offering, suggesting that she had refused the first offer to send her a dollhouse.

Where does the story go from here? Hopefully, all Echo owners are now familiar with the "microphone off" button that stops the device listening (see picture on right - probably worth clicking before you go out, especially if you tend to leave the TV or radio on). And I'm sure many folks have been changing the default settings, turning off automated ordering or protecting it with a PIN.

At some point Amazon may enable two Echo features that could further reduce problems. First, allow owners of the devices to set a custom trigger word. At least that would enable you to talk about Alexa without waking her up. Second, but harder, would be to limit the voices to which Alexa responds, namely authorized users only. Of course, all of these things would add "friction" to the customer experience, which Amazon may be loath to do.

One question remains in my mind: Did Amazon ever consider that TV broadcasts would trigger the device? The CW6 experience was random, an accident. But if you intentionally broadcast the right words with the right timing you could trigger a mass ordering of products. And while Amazon has said it will accept returns of all 'accidental' orders, you can't use your Echo to cancel purchases. You have to go to the Amazon website or mobile app. Imagine a malicious broadcast that ordered expensive baby carriages, not the easiest things to return to sender. Does Amazon have an algorithm to detect that? Would some percentage of the orders be undetected until they turned up on doorsteps? How much would that cost in terms of dollars and good will?

And of course, buying things is not the only thing these devices can do. They can control thermostats and door locks and all manner of Internet of Things (IoT) devices. Pair that with the malicious broadcast scenario and you have some frightening possibilities. (I have been writing and talking about abuse of the IoT at We Live Security and other places.)

Tuesday, October 25, 2016

A quarter of a century of computer and network security research and writing

Twenty-five years ago this month McGraw-Hill published a book I wrote about computer and network security. And the first thing I tell people about this book is that I did not put the word "complete" in the title! That was the publisher's decision. Because if there was one thing that I learned in the three years during which I researched the book it was this: there will never be a "complete book" of security.

The second thing I tell people is that The Stephen Cobb Complete Book of PC and LAN Security was not a big seller. Indeed, it was a complete flop compared to some of the other books I wrote in the late 1980s and early 1990s. My best seller...

Thursday, October 13, 2016

More about the cybersecurity skills gap

[Update 2/25/17: now available, 68-page dissertation/report on the cybersecurity skills gap and the makings of effective CISOs.]

In October of 2016, I presented a paper titled "Mind This Gap: Criminal Hacking and the Global Cybersecurity Skills Shortage, a Critical Analysis." The venue was Virus Bulletin, a premier event on the global cybersecurity calendar that is particularly popular among malware researchers (for the story of how "VB" achieved this status, see below).

Papers and Slides

When your proposed paper is accepted by the VB review committee, you first have to submit the paper, then deliver the high points in a 30 minute presentation at the conference, which takes place several months later. In this case, the elapsed time between paper and presentation was very helpful because it allowed me to incorporate some of the findings from my postgraduate research into my conference slides, which are available for download here: Mind This Gap.

The VB conference papers are published in an impressive 350 page printed volume. However, the conference organizers have kindly given me permission to share my paper - which is only 8 pages - here on the blog:
As you may know, I've been studying various aspects of the cybersecurity skills gap this year. I put together a short white paper about the size of the gap:
Later this year I hope to publish the full results of my postgraduate research, which looks at some of the assumptions behind efforts to close the cybersecurity skills gap.

A note about Virus Bulletin

Monday, September 26, 2016

Email account breached? There's a website for that

Recent news that half a billion Yahoo accounts have been compromised has prompted me to again tell friends about a great website for exploring the effect of security breaches on your online accounts. The site is called: haveibeenpwned and I encourage you to explore it.
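For the technically curious, the same author's companion Pwned Passwords service lets you check whether a password appears in known breach dumps without ever sending the password itself, using a k-anonymity range query: you submit only the first five hex characters of the password's SHA-1 hash and compare the returned suffixes locally. Here is a minimal sketch in Python; the endpoint URL and "SUFFIX:COUNT" response format are based on the service's public API, but treat the details as illustrative rather than authoritative:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how many times the password appears in known breaches
    (0 if not found), using the k-anonymity range endpoint."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:COUNT"; only the hash prefix
    # ever leaves your machine, never the password itself.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    prefix, _ = sha1_prefix_suffix("password")
    print(prefix)  # 5BAA6
```

Because the service only ever sees a five-character hash prefix shared by hundreds of different passwords, it cannot tell which one you were checking.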

Friday, September 02, 2016

Surveys galore: cybercrime wave, government prodding, and more

One of the biggest problems with fighting cybercrime is knowing how much of it there is. If you or your organization have been a victim of cybercrime - and a recent study said that 80% of organizations have* - then you know there is too much of it. Indeed, another recent survey suggests that 69% of US adults agree their country is experiencing a wave of cybercrime.** This state of affairs has many people thinking that the government is not doing enough to fight cybercrime. How many? About 63% in a recent survey.***

And right there, in that short paragraph, you see how important it is to measure the problems you are trying to solve, whether it's "how big is that gap in the planking that's letting water into the boat?" or "how big is that gap between the number of people we need to fight cybercrime and the current supply?" That latter question has been preoccupying me a lot this year and it's a tough one to answer, but that doesn't mean we shouldn't try. After all, this gap is causing serious problems for many organizations. According to a CSIS/Intel-McAfee survey, more than 70% of enterprises had suffered losses that they attributed to a lack of skilled security professionals.