Thursday, October 01, 2020

Cybersecurity Awareness Month: time to get smart about ending digital technology abuse

Graphic announcing 31 articles for Cybersecurity Awareness Month

Cybersecurity has become such an important part of modern life that many countries now dedicate an entire month—October—to increasing knowledge and awareness of cybersecurity among organizations and the general population. To mark the occasion, here on this blog we have 31 articles about cybersecurity.

Not all of these articles will be traditional cybersecurity awareness content. Why? There is already a large amount of very good cybersecurity awareness material out there, and even more will be published this month by companies, organizations, agencies, and experts.

If traditional cybersecurity awareness is what you are looking for, I suggest you start with the resources on this website: Stay Safe Online. The Stay Safe Online website is run by a US-based non-profit, the National Cyber Security Alliance (NCSA). The NCSA coordinates Cybersecurity Awareness Month activities in the US as well as the year-round STOP. THINK. CONNECT. online safety campaign.

(Note: For much of the past decade I was closely involved in NCSA activities and served as a member of its board of directors on behalf of ESET, a founding member of STOP. THINK. CONNECT. and my employer from 2011 to 2019.)

On social media I will be pointing people to accounts like @StaySafeOnline and @Cyber for the latest in this year's awareness month activities. These are being hashtagged #BeCyberSmart (in previous years the hashtag #CyberAware was used).

For readers in the EU: "The European Cybersecurity Month (ECSM) is the European Union’s annual campaign dedicated to promoting cybersecurity among EU citizens and organisations, and to providing up-to-date online security information through awareness raising and sharing of good practices" (see the ECSM website for more).

Cybercrime Awareness Month?

If we step back a moment and ask why the world needs more cybersecurity awareness, an obvious answer would be "because there's so much cybercrime." That is why I think attempts to raise awareness of the need for cybersecurity need to include an explanation of why there is so much cybercrime. 

So, my focus this October is on the causes of cybercrime and other forms of digital technology abuse, the most problematic of the many challenges faced by cybersecurity. (Cybersecurity challenges that are not digital technology abuse include human error and acts of nature, like earthquakes and hurricanes.)

In a law journal article published at the beginning of this year I wrote: "cybercrime is a global problem that negatively impacts everyone—from commercial enterprises to government agencies, non-governmental organizations, and the public—in every nation and territory. Multiple surveys in countries with high levels of Internet adoption suggest a high degree of concern that the risk of becoming a victim of cybercrime is increasing." Here is the chart that I provided to illustrate this:


This chart combines results from Stephen Cobb, ESET Cybersecurity Barometer, USA 2018, We Live Security, 2019, and EU Special Eurobarometer 480 Report on Europeans’ attitudes towards Internet security, 2019. 

Although my law journal article—Advancing Accurate and Objective Cybercrime Metrics—is written in the text-heavy format of that publication style, it does contain a wide range of statistics and sources that may be helpful if you want to research the question of how much cybercrime there is, and how the world currently goes about measuring cybercrime.

That article built on a variety of work I did about five years ago under the general heading "Sizing Cybercrime". One of the outputs from that work is watchable on YouTube in the form of a 25-minute talk with that title, recorded in Prague in 2015. There is also a 5,000-word paper to back that up, plus 50 references. Sadly, although I have managed to trim my weight a bit since then, the crushing weight that cybercrime imposes on human endeavors has only increased.

Small steps can reduce a big problem

The amount of criminal activity in cyberspace, meaning crime that involves computers and other internet-connected devices, may now be greater than the amount of purely physical crime in what I like to call meatspace. Yes, there are still meatspace burglars who break into houses to steal things and may hurt you if you get in the way. But the value of what gets stolen from households by means of digital intrusions is probably greater. One relatively recent academic study concluded that cybercrime accounts for “half of all property crime, by volume and value” (Ross Anderson et al., Measuring the Changing Cost of Cybercrime, 2019).

Given all those facts, you might wonder if there is anything at all that you—as an individual —can do to make a difference, to actually reduce the size of the cybercrime problem and improve cybersecurity in the world today. I am happy to report that there is, and some of the specific things that you can do will be covered during the month. 

Some of the actions you can take to improve cybersecurity might sound trivial, but there is serious research showing that they work. Consider these meatspace examples: when more people park their cars in locked garages rather than on the street, fewer cars are stolen; a home with stronger locks on its doors is less likely to be broken into than one with weaker locks.

Of course those security measures imply availability of resources which are unequally distributed in most societies. But in cyberspace, some security measures are free, like choosing a stronger password to lock people out of your online bank account (covered on day 19). For example, look at the relative amount of computer effort, measured in time, that it would take to break each of these passwords:
  • mylittlepony = 3 weeks
  • mylittlepony! = 700 years
  • My1littlepony! = 200 million years
  • I adore my little pony = 42 sextillion years
If that inspires you to get to work on improving your passwords, then this blog post has been worth it (here is where I tested those passwords, and here is a good tool for exploring password strength). 
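For the curious, estimates like the ones above are usually back-of-envelope calculations: the brute-force search space (character-set size raised to the password length) divided by an assumed guessing rate. The sketch below, in Python, uses an assumed rate of 10 billion guesses per second; that rate and the character-set sizes are my assumptions, not figures from any particular cracking tool, which is why the absolute numbers will differ from the ones quoted above even though the relative ordering stays the same.

```python
# Back-of-envelope brute-force cracking-time estimate.
# Assumes an attacker testing 10 billion guesses per second (an assumed
# rate for an offline attack on a fast hash; real rates vary enormously).

GUESSES_PER_SECOND = 10_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def charset_size(password: str) -> int:
    """Size of the smallest standard character set covering the password."""
    size = 0
    if any(c.islower() for c in password):
        size += 26  # lowercase letters
    if any(c.isupper() for c in password):
        size += 26  # uppercase letters
    if any(c.isdigit() for c in password):
        size += 10  # digits
    if any(not c.isalnum() for c in password):
        size += 33  # printable ASCII punctuation and space
    return size

def years_to_crack(password: str) -> float:
    """Worst-case brute-force time in years for the given password."""
    combinations = charset_size(password) ** len(password)
    return combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for pw in ["mylittlepony", "mylittlepony!", "My1littlepony!", "I adore my little pony"]:
    print(f"{pw!r}: roughly {years_to_crack(pw):.3g} years")
```

The point is not the absolute numbers but the multiplier: each added character multiplies the search space by the character-set size, and each added character class enlarges that set, which is why a long passphrase beats a short complex password.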

#BeCyberSmart

Thursday, September 24, 2020

A Brief History of Digital Technology Abuse: The First 40 Chapters

Digital technology, it's at the heart of modern life—our communication systems, our methods of travel and transportation, our education, entertainment, medicine, and much, much more—and it has a problem: we keep abusing it.
 

Graph of internet crime losses

You can think of this as a problem with the technology: it is inherently vulnerable to abuse. Or you can think of this as a problem with people: we keep exploiting those vulnerabilities for selfish ends. 

Either way, it is a big problem, one that keeps getting bigger.

[Insert standard paragraph full of statistics documenting the undeniable rise of technology abuse despite record levels of spending to prevent such abuse — including at least one graph to help visualize this trend and cite source—and remind readers the author has published peer-reviewed papers on this topic.]

Sadly, some people who develop new digital technology products continue to behave as though this problem doesn't exist, or if it does, it's not a big problem, and besides, it will soon be solved so that we can all enjoy the benefits of whatever new technology these people are bringing to market. 

It is for these people—the technophilic "an app can fix that" uber-optimistic, techbro' solutionists—that I have been sketching out a brief history of digital technology abuse. Here's a screenshot of the first 40 chapters:


[I apologize for using a screenshot and not a text-based table that folks can copy and paste (have you tried building a table in Blogger?). However, an easy-to-grab text list, in roughly chronological order, is included at the end of the article. Also, the table above should be read column by column: top to bottom within each column, moving left to right.]

The idea is that each chapter in the list is a technology that has proven vulnerable to abuse. (You can play mix-and-match with these, for example, email is abused to distribute documents containing macro technology that is abused to infect personal computer systems with malicious code that abuses attached digital cameras to capture embarrassing images and threatens to share them through abuse of social media.)

Of course, you may take one look at this table and realize some technologies are missing. Indeed, you may want to make your own list, and I think that's a great idea. My list is somewhat random and clearly not definitive. I don't apologize for this because a. I was in a hurry, and b. any attempt at a complete list would be too long for a brief history of digital technology abuse.

The Digital Technology Product Warning

The goal of the 40 chapter list is to challenge people to name one or more digital technologies that are not vulnerable to abuse. (To be clear, I can't think of one.) And if there are none, then I would argue that every new piece of code-based or code-enabled technology must now come with a warning, a warning that has to be included in any discussion, reporting, or promotion of that technology. The warning should read something like this:
This product includes digital technology that is vulnerable to abuse which could cause harm or injury, including but not limited to failure to function correctly, loss of privacy, and reduced security.
I am sure some people will object when governments start proposing that such warnings must appear prominently on existing products and be included in any reporting of soon-to-be-released products. One likely objection: "There's no way you can prove our product will be abused."

The counter-argument is: "There's no way you can prove your product is immune to abuse, but there is a very long history of digital technology products being abused." (Insert handy reference to "A Brief History of Digital Technology Abuse: The First 40 Chapters," S. Cobb.)

Of course, savvy readers will know that many of the digital technology products upon which we have come to rely for the smooth running of our daily lives already include warnings. The problem is that these are not very prominent. Indeed, they are often buried deep within the manual. However, poke around and you will find that any product that runs code comes with a warning like this: 
This product uses software that is provided 'as is' without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability and fitness for a purpose. In no event shall the supplier of this software/product be liable to you or any third parties for any special, punitive, incidental, indirect or consequential damages of any kind, or any damages whatsoever, including, without limitation, those resulting from loss of use, data or profits, whether or not the supplier has been advised of the possibility of such damages, and on any theory of liability, arising out of or in connection with the use of this software.
So, for example, the next time you go to unlock your car with your phone and find—as thousands of Tesla owners did recently—that this feature isn't working, well, too bad. You were warned. You have no legal recourse. That's just the way it is. If you check the Tesla documentation I'm sure you will find language like the paragraph above. (You might also find that the same language applies to the self-driving software—I don't have a Tesla handy or I would look myself, but not while driving.)

The point is, even a brief history of digital technology abuse should be enough to prove that humans have been developing new technologies faster than they have established appropriate ethical norms within the societies into which these technologies are deployed. I believe there is an urgent need for us humans to get serious about monitoring and controlling technology development and deployment in ways that facilitate closing the technology-ethics gap. 

We can think of this gap as "a mismatch between the value rationality of our ends and the instrumental rationality of our means." That's a quote from Phil Torres, in Chapter 6 of his excellent book Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks (available on Amazon and at Powell's, etc.).

Another way of putting it comes from Swedish-American physicist Max Tegmark, as quoted by Torres: "A race between the growing power of technology and growing wisdom with which we manage it." There's no doubt in my mind that:
  • the race is on
  • it's a marathon and not a sprint
  • it's probably going to be a multi-generational relay
  • it's the most important race for the human race
  • right now we are not looking like winners
I will be returning to this topic, but for now I'm off to the mental gym to do some circuits. I'll just leave the text of the chapter list for A Brief History of Digital Technology Abuse right below here.

Sunday, September 20, 2020

The "Insider Plus" threat: what the Tesla and Twitter attacks say about the resurgence of an enduring risk

Image of logos suggesting threats are insider, outsider, or both

The "Insider Threat" to information system security is as old as computers, but in recent decades it has received less attention than external threats. Yet there is reason to believe that the risk posed by insiders acting on instructions from outsiders is on the rise; we can usefully refer to this as the "Insider Plus" threat. In my assessment, the number of organizations that are fully aware of, and well-prepared to defend against, this insider plus threat is problematically small.

That's the short version of this article, which explores the implications of recent security incidents at Twitter and Tesla, finding them indicative of several different-but-related phenomena that suggest the insider plus risk will increase over time. I have also provided some hopefully useful background on insider threats.

Twitter, Tesla, and Three Things True in 2020

Reporting in July on the attack that resulted in the hijacking of Twitter accounts belonging to high-profile individuals and brands, CSO Online described it as: "the perfect example of the impact a malicious or duped insider and poor privileged access monitoring could have on businesses." (Twitter VIP account hack highlights the danger of insider threats).

The next month, Government Tech reported on "an alleged million-dollar payment offered [to an insider] to help trigger a ransomware extortion attack" on the Tesla electric car company. This appeared in Dan Lohrmann's extensive piece on ransomware during COVID-19, where he quotes Katie Nickels, the director of intelligence at security firm Red Canary:

"It really changes the game for the defenders. Before today I would not have suggested companies include an insider attacker installing ransomware in their threat model. Now everyone has to shift their thinking. If we know about this one case that’s been documented, there might be more."

I'm willing to bet there have been more, if only because this type of attack is a natural outcome of three currently observable phenomena:

  1. Some organizations have become adept at defending against external attackers.
  2. Very hard times, such as a global pandemic, make some employees very susceptible to unethical conduct.
  3. The ethical status of abusing access to information systems remains vague and/or malleable in the minds of many humans.

Consider This Scenario 

You want to extort a company with deep pockets that relies on computer systems that you know you can disable with code in your possession, but the company is doing a good job of preventing external access to those systems; so you decide to get an insider to help you. There are numerous ways of doing this, including but probably not limited to: 

  • A monetary bribe: which might be particularly effective right now, given the current levels of economic hardship and uncertainty.
  • A chance at fame: which may appeal to some individuals for whom abuse of digital technologies is a sport or side gig or a form of protest (all of which can be said to be enabled by ambiguities in the ethics of technology). 
  • A promise not to reveal embarrassing or damaging information: also known as blackmail, potentially facilitated by unauthorized access to devices and accounts belonging to the targeted insider. 
Given the plausibility of this scenario, every company needs to check its approach to data privacy and cybersecurity to make sure it addresses the risk that an external attacker may "partner" with an insider. Clearly, privileged access monitoring needs to be in place and in use, but so does management's awareness that insiders may be more susceptible to breaches of IT security policy and criminal statutes during this pandemic.

Consider This Bibliography

Anyone seeking a deeper understanding of insider threats will benefit from reading insider case studies, such as those aggregated by the CERT Insider Threat Center (Cappelli, Moore and Trzeciak, 2012). The Center has documented hundreds of internal computer crimes that impacted companies in sectors like banking (Randazzo, Keeney, Kowalski, Cappelli, and Moore, 2004), information technology and telecommunications (Kowalski, Cappelli, Moore, 2008), critical infrastructure (Keeney, Kowalski, Cappelli, Moore, Shimeall and Rogers, 2005), and financial services (Cummings, Lewellen, McIntire, Moore, and Trzeciak, 2012). 

While the primary goal of the Center was to discover and disseminate practical methods of mitigating insider threats, the case studies are analyzed according to academic standards; for example, methodological limitations, like the inability to generalize findings to all organizations, are duly noted (Cappelli et al., 2012). These studies reveal how a wide range of insiders exploit opportunity to commit crimes, often through a simple betrayal of the trust placed in them as employees or contractors.

Some insiders may, like Edward Snowden (Poitras, 2014), have far-reaching “super-user” access to the organization’s assets, be they physical or digital; yet CERT has recorded many cases where the crime was committed by an insider with few technical skills and only limited access. These studies document how even limited trust can, if betrayed, enable criminal activity. It may be theorized that such betrayal, by colleagues and co-workers, chosen by management to work at the company, and of whom there is at least a minimal expectation of trustworthiness and shared interests, may have a greater negative psychological impact than the criminal act of an outsider, a person of whom there are no pre-existing positive expectations. 

As I noted in my master's degree essay—from which the preceding three paragraphs were taken—the threat of betrayal by trusted insiders is real, for there can be no doubt that the following is true: "never before have so many insiders had so much access to so much computerized information of such great value." 

Furthermore, never have there been so many ways to monetize—often at relatively low risk—unauthorized access to information systems and the information they process and store. What strikes me as particularly worrying right now is the potential for malefactors to adopt increasingly aggressive meatspace crime tactics in their quest for access to protected systems.

I will be discussing this further and providing links here.

#InsiderPlus

Wednesday, August 05, 2020

Quick update: malware, cybercrime, cybersecurity, and a worrying lack of trust in tech firms

Photo of Journal of National Security Law & Policy with Stephen Cobb smiling
As John Oliver—that other bloke from the West Midlands—might say: "Just time for a quick update."

This one's about what I've been writing and talking about recently, as in "during lock down," or "while sheltering in place." In addition to the articles posted here on the blog, I have published items on Medium and LinkedIn. I have also been quoted by Bleeping Computer and Business Insider.

And yes, my first law journal article—part of a research project that began over a year ago—finally appeared in print! As you can see from the photo on the right, I am quite pleased about this (that's my happy face). 

In addition to several copies of the journal, I received a stack of professionally bound copies of my article, nice to hand to friends and colleagues or students or clients attending in-person seminars, IF the world ever gets back to doing that sort of thing. Yes, I could pop them in the post—but the US mail service is struggling at the moment (see this LA Times editorial: Attacking the U.S. Postal Service before an election is something a terrorist would do).

In the meantime the Journal of National Security Law & Policy has graciously made the article available online: Advancing Accurate and Objective Cybercrime Metrics.

Lack of trust in tech firms starts to bite

I have long been concerned that a constant drumbeat of headlines about cybercrime attacks and data privacy breaches could undermine technology adoption and use (not to mention the debilitating effects of those attacks on people and organizations victimized by them). In recent years that drumbeat tended to drown out voices like mine warning that this was a problem. So, as a sort of sanity check I conducted a couple of surveys.

The results? Right now there appears to be a serious trust deficit and I wrote about this on Medium: Not even 10% of us trust tech firms to protect our personal information. My hot take? "This could be a big problem for current efforts to recruit technology to solve a range of problems created by the COVID-19 pandemic." In other words, "Deploying technology to tackle a pandemic—or any of a range of "tech-to-the-rescue" challenges—can quickly become problematic if people don't trust tech firms to protect their personal information."

I wrote more about the survey results on LinkedIn: What's next if only 9% of us trust tech firms to protect our personal information? (Why write about the same topic in more than one place? I'm trying to determine which platform works best for different topics and perspectives.)

Malware, Cybercrime, and COVID-19

Something else I posted on LinkedIn was a look at the relationship between the coronavirus and criminal activity that employs malicious code: The Covid Effect means we can no longer ignore the Malware Factor. For this topic I did something I've never done before: I created a narrated version and put it on YouTube. This has not been wildly popular, but I'm going to try it again with some other articles, mainly so there is a spoken version of the work.

Open society, open-source, open to attack

Obviously I'm someone who sees a lot of troubling things in the world—and who has been that way since long before the current pandemic—but I'm not blind to hopeful signs. One category of such signs is editors willing to commission writing about difficult topics, and journalists who rise to the challenge. An example is this article by Ax Sharma in Bleeping Computer, on how public safety systems can be abused by nation-state actors:

According to Stephen Cobb, an independent security researcher based in the UK, the growing use of remotely-controlled and autonomous vehicles for public safety and surveillance opens up a worrying new set of attack vectors and opportunities for criminal abuse.

"A few years ago, I coined the term jackware for a category of malware-based attacks that include hijacking of self-driving cars, but this can also apply to autonomous or remotely-controlled vehicles—in the air or on land—that are deployed for public safety purposes."

"Just as a police car or ambulance can be turned into a weapon, so can a surveillance drone or security robot. Use of autonomous or remotely-controlled vehicles for public safety is a troubling new attack vector because this technology is not in my opinion sufficiently shielded from abuse," said Cobb.

Commenting on the state of affairs we have seen in the past two decades, Cobb added that cybersecurity efforts are frequently not prioritized for attack vectors like these until grave consequences occur.

"Detailed historical analysis of previous technology deployments strongly suggests that appropriate levels of protection will not be put in place until malicious abuse occurs at scale."

No Surprise: China Blamed for 'Big Data' Hack of Equifax

Just before lockdown I spoke to Mathew Schwartz for an article in Bank InfoSecurity, but I only just realized that I had supplied the "pull quote" near the top of the article:
"Absent major progress toward international norms in cyberspace, crimes like this will continue to be committed." 
Given that this is something I firmly believe, and also something that I believe to be very important, it was exciting to see it given some exposure.

Is COVID-19 Driving a Surge in Unsafe Remote Connectivity?

In March, I was quoted in another Mathew Schwartz article, this time on the lockdown-driven increase in remote access to systems. Here's part of what I said: 

"Past experience predicts that a significant percentage of that [recently enabled] access will be weakly protected at best, and we know that criminals have a wide range of tools at their disposal to take advantage of such access, whether for extortion, ransomware, data theft or sabotage...Sadly, the sense of being 'all in this together' in the fight against coronavirus is not felt by all criminals, and some will have no ethical qualms about abusing RDP for profit regardless of the impact on victims."

The Amazon Dot Com of Cybercrime

Also in March, journalist Jeff Elder from Business Insider called me about the arrest of "Russian Cyber Hacker Kirill Victorovich Firsov." Allegedly, Firsov ran an illegal online marketplace called Deer.io that was "selling usernames and passwords from around the web." I characterized Deer.io as "an Amazon dot com of cybercrime." 

The article began like this: "When the FBI arrested the alleged leader of an illegal online marketplace last week, they may have made a small dent in what one expert calls 'the Amazon.com of cybercrime.'" Here's the part that cites me directly: "'This is the Amazon.com of cybercrime, with easy-to-use, easy-to-access availability and participation – as a buyer or vendor,' says independent threat researcher Stephen Cobb, who previously tracked illegal marketplace activity for ESET, a cybersecurity company."

I wrote up the backstory to this term and the phenomenon it describes on this blog, including a look back at that time I took Marketplace host Kai Ryssdal on a guided tour of the dark web "to demonstrate why cybercrime is easier than ever before."

Wednesday, July 15, 2020

Time to flatten the [cybercrime] curve

Recently, a journalist asked for input on this question: Do you think companies are doing enough to protect consumer data?

In my response, I pointed to the chart you see on the right, the graph that I call the IC3 Hockey Stick of Cybercrime. The graph shows internet crime losses reported to the Internet Crime Complaint Center (IC3) operated by the FBI. The X-axis covers a 17-year span, but the last five years are where things start to look truly troubling.

(Note: while IC3 is the source of the numbers in the graph, IC3 has not—to my knowledge—published them in a graph, in other words, I built the graph from their numbers.)
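Rebuilding a graph like this from the IC3 annual reports is straightforward once you have the yearly totals. Here is a minimal sketch in Python; the loss figures are my rounded approximations of the published IC3 annual report numbers, so treat them as illustrative and check the reports themselves for exact values:

```python
# Rough reconstruction of the "IC3 hockey stick": reported internet crime
# losses by year, in billions of USD. Values are approximate/rounded from
# the public IC3 annual reports; consult the reports for exact figures.
losses_billion = {
    2015: 1.07,
    2016: 1.45,
    2017: 1.42,
    2018: 2.71,
    2019: 3.50,
}

# A crude text-mode bar chart is enough to show the upward bend in the curve.
SCALE = 20  # characters per billion dollars
for year, loss in sorted(losses_billion.items()):
    bar = "#" * round(loss * SCALE)
    print(f"{year}  ${loss:>4.2f}B  {bar}")
```

Even with only five data points, the bars make the bend obvious: losses roughly tripled between 2015 and 2019, with most of the growth packed into the last two years.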

Back in January of 2020, even before the "Covid Effect" kicked in—a huge surge in computer-enabled crime that began to emerge in late February—I predicted that the 2020 numbers from IC3 would blow past the $4 billion mark. Then, in early March we started seeing articles on "How cybercriminals are taking advantage of covid-19: scams, fraud, and misinformation." By mid-April, FBI Deputy Assistant Director Tonya Ugoretz was saying the number of crimes reported to IC3 had "quadrupled compared to months before the pandemic."

While the methodology behind the IC3 numbers shown in this chart is likely to disappoint statisticians—an issue I covered in depth in this law journal article—the trend you see here is consistent with all the other measures of cybercrime that I have studied. And while the tall thin version of the chart exaggerates the effect, it still doesn't look particularly reassuring when you produce a squarer version like this one.

The sad reality is—as I said to the journalist—companies are falling short in their efforts to protect data and systems relative to the level of threats they face, from criminals and other threat actors.

That caveat is important: relative to the level of threats they face, from criminals and other threat actors. You can only give companies so much grief for falling victim to crimes before you're just victim blaming. And victim blaming can lead to punitive measures against organizations that failed to prevent themselves from being victimized.

(I'm sure some organizations that have fallen victim to cybercrime feel as though they've been penalized by the authorities for suffering a burglary while living in a neighborhood where police presence is practically non-existent.) 

Yes, some companies have been victimized because their security practices were not to the highest standard, but that standard is very expensive to maintain at current threat levels. Some companies—like defense contractors—may be able to get their information security costs covered by the prices they charge. Others have to pass them on to consumers. 

Either way, one thing is clear, the more cybercrime there is, the higher the cost to society at large. The counter-argument that profits from cybercrime bolster the economies of those places in which cyber-criminals spend their money, doesn't impress me. I suggest that the benefits of those ill-gotten gains are not outweighed by the social costs of creating crime-based economies.

So let me put it this way: given that our efforts to defend against such cyber-criminal activity have so far proven to be incredibly expensive relative to the limited success achieved, discouraging people from engaging in cybercrime should be our number one priority in terms of government policy and social strategy. So how might that work?

Curve flattening

As the number of COVID-19 cases started to rise in the first quarter of 2020, the governments of many countries urged all sectors of society to work together to "flatten the curve" of Coronavirus infections and this has become a key element in the global response to the pandemic. 

This type of public call to action—the "we're all in this together" strategy—has long been a theme of "cybersecurity awareness" efforts, recurrent campaigns to enlist the public's help in reversing the rise in cybercrime, essentially flattening the cybercrime curve.

The best known of these cybersecurity awareness programs has become an annual event, taking place in October as Cybersecurity Awareness Month in the US and European Cybersecurity Month in the EU, with many countries outside those areas also participating.

While much of the Cybersecurity Awareness Month activity is voluntary or sponsored by companies, government agencies in the US and the EU have provided leadership in establishing these programs. That leadership is a good sign, and there's no doubt in my mind that security awareness programs do reduce the number and impact of security breaches, thefts, extortion schemes, fraud, and so on.

But I am equally certain that the governments of the world are doing far too little to make cybercrime a risky and unattractive proposition. So, maybe the next large-scale public cybersecurity awareness program needs to be:

"Awareness of how governments have seriously failed their citizens when it comes to combating and deterring cybercrime, and how we can force them to do better."

Needs a catchier title—but I will work on that.

Tuesday, June 30, 2020

Taking down 'the Amazon of cybercrime' - a look inside a dark web story

Ads for websites that sell stolen payment card data and online accounts

Back in March, 2020, as the coronavirus pandemic began to dominate the news, one cybercrime story seemed to get washed aside by the rising tide of COVID-themed cybercrime attacks: the taking down of the 'Amazon.com of cybercrime.' I'm fairly sure that, in more normal times, more people would have paid more attention to this headline:
The FBI arrested the alleged hacker behind the 'Amazon.com of cybercrime,' which it says sold $17 million worth of stolen accounts for Gmail and other sites
For me, there were several reasons to smile when this headline appeared, not the least of which was the fact that it represents a very positive step forward for law enforcement in the ongoing effort to rein in cybercrime, an effort I have tried to support for many years.

On top of that, I happen to know Special Agent Brian Nielsen, whose very impressive work is cited in the article and in the criminal complaint filed in US District Court in San Diego. The complaint named Kirill Victorovich Firsov as "a Russian cyber hacker, and the administrator of the Deer.io cyberplatform."

Firsov was arrested on a Sunday night in March at JFK Airport and the complaint was unsealed the next day (a PDF of the complaint is here, and if you look at the timing it suggests there was some very fast and skillful footwork by the San Diego feds).

However, the aspect of this headline that really put a smile on my face was the term Amazon.com of cybercrime. This way of characterizing dark web crime markets—like those that Firsov enabled—is something that I came up with in 2018; for example, see this article: Next Generation Dark Markets? Think Amazon or eBay for criminals.

When journalist Jeff Elder from Business Insider called me about the Firsov arrest I used that same characterization. Jeff obviously found it helpful because the article began like this: "When the FBI arrested the alleged leader of an illegal online marketplace last week, they may have made a small dent in what one expert calls 'the Amazon.com of cybercrime.'"

That expert was me. You can read the article here (apparently MSN has a "reprint" arrangement with Business Insider—the article is now pay-walled on the latter's site). This is the part that cites me directly:

"This is the Amazon.com of cybercrime, with easy-to-use, easy-to-access availability and participation – as a buyer or vendor," says independent threat researcher Stephen Cobb, who previously tracked illegal marketplace activity for Eset, a cybersecurity company.

Apart from Eset being ESET, Jeff was true to our conversation and the point I was trying to make. My efforts were undoubtedly bolstered by the fact that I had prior experience, from early 2019, covering this topic with another journalist, Kai Ryssdal of the public radio program Marketplace. That meant I had quite a bit of "evidence" that I could share with Jeff. Like this screenshot of an online market, annotated here for educational purposes:


As you can see, markets like this make buying stolen payment card data as easy as buying something on eBay or Amazon. And of course, they provide an easy way for the criminals who do the data stealing to monetize their operations. Like any well-organized market there are incentives—like seller and product ratings—to ensure that shoppers get good products at competitive prices.

These Amazon-style mechanisms help to explain how a bunch of criminals can buy and sell things without ripping each other off, as does the use of digital currency and an escrow system. The marketplace provider withholds payment to the seller until the buyer gets the goods and approves them.

By charging a fee for escrow and other services, the marketplace provider stands to generate considerable revenue while maintaining a semblance of respectability as "merely enabling commerce." (That is just one of many ethical cop-outs that help to sustain cyber-criminal activity.)
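The escrow mechanism described above is easy to sketch as a small state machine. The following Python is purely illustrative (the class, the state names, and the 5% fee are my own hypothetical choices, not taken from any real marketplace):

```python
from enum import Enum, auto

class State(Enum):
    AWAITING_FUNDS = auto()  # listing accepted, buyer has not paid yet
    FUNDED = auto()          # marketplace is holding the buyer's payment
    RELEASED = auto()        # buyer approved; seller has been paid
    REFUNDED = auto()        # dispute; buyer got their money back

class Escrow:
    """Minimal escrow: the marketplace holds the buyer's payment until
    the buyer approves the goods, then pays the seller minus a fee."""

    def __init__(self, price: float, fee_rate: float = 0.05):
        self.price = price
        self.fee_rate = fee_rate
        self.state = State.AWAITING_FUNDS

    def deposit(self) -> None:
        if self.state is not State.AWAITING_FUNDS:
            raise RuntimeError("already funded")
        self.state = State.FUNDED

    def approve(self) -> float:
        """Buyer confirms receipt; returns the payout to the seller."""
        if self.state is not State.FUNDED:
            raise RuntimeError("nothing to release")
        self.state = State.RELEASED
        return round(self.price * (1 - self.fee_rate), 2)

    def dispute(self) -> float:
        """Buyer never approves; returns the refund to the buyer."""
        if self.state is not State.FUNDED:
            raise RuntimeError("nothing to refund")
        self.state = State.REFUNDED
        return self.price

deal = Escrow(price=100.0)
deal.deposit()                  # buyer pays; marketplace holds the funds
seller_payout = deal.approve()  # 95.0 to the seller; 5.0 stays with the market
```

In the happy path, a $100 sale leaves the seller with $95 and the marketplace with a $5 fee; in a dispute, the buyer is made whole and the seller gets nothing. That fee is why running the market can be more lucrative, and less risky, than stealing the data yourself.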

Flashback: Kai goes to the Amazon (of cybercrime)

So, how did I end up talking to journalists about dark markets like this? Journalists who cover breaking news like to talk to people who are considered experts in the field of human endeavor to which the news pertains. Some experts welcome such conversations as an opportunity to provide context and clarity to complex topics, thus helping to broaden understanding of those topics.

If the expert happens to be self-employed and short of funds for marketing and PR, this interaction can be mutually beneficial. It can also be helpful to companies who are interested in "educating the market" for their products, which is why ESET—a maker of security software—was happy for me to work on this when I worked there (disclaimer: I no longer work for ESET and have zero financial ties to the company; I think they make good products but I know I make no money if you buy them).

The radio piece that I did with Kai Ryssdal about the business of cybercrime and the online markets that support it was skillfully orchestrated by Maria Hollenhorst. A lot of preparation was needed to produce a segment that was relatively short, but full of information. I thoroughly enjoyed working on it and was very impressed with how quickly Kai saw what I was hoping he would see: that the dark web enables "crime as a business enterprise," complete with Amazon and eBay style marketing techniques. So please enjoy listening to: Ever wondered what the dark web is like?

Thursday, May 21, 2020

Only 7.6% of Brits say they trust tech firms with personal information, and that's a problem for all of us

So, it’s May, 2020, and we humans are struggling to cope with a global crisis of unprecedented scope and scale, despite having unprecedented levels of technology at our disposal. Why are we struggling? One factor could be this: less than 10% of adults in the US and UK trust tech firms to protect their personal information.

That's according to a survey I commissioned around the middle of the month, a full account of which can be found in this article on Medium.

To rein in COVID-19, and future pandemics, people need to be able to share their personal information without fear that it will be misused or abused.

I think this pie chart reflects that fear. It shows the US results but the corresponding UK pie chart looks very similar: very few people say Yes when you ask them this question: “Do you trust tech firms to protect your personal information?”

Respondents could answer Yes, No, or Not sure. Less than 1 in 10 respondents answered Yes (7.6% in the UK, 8.9% in the US). More than half said No (55%). Just over one third said Not sure (36%). Who were these people? Adults in the US (n=756) and the UK (n=514).
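For what it's worth, proportions from samples of this size are reasonably precise. Here is a quick sanity check using the standard normal approximation for a proportion's 95% margin of error (my own back-of-the-envelope calculation, not a figure from the survey report):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Yes-answer proportions and sample sizes from the survey
moe_uk = margin_of_error(p=0.076, n=514)  # roughly 0.023, i.e. +/- 2.3 points
moe_us = margin_of_error(p=0.089, n=756)  # roughly 0.020, i.e. +/- 2.0 points
```

So even at the generous end of those intervals, roughly one in nine respondents in either country trusts tech firms with their personal information.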

Why is there such a lack of trust? I think that the Malware Factor has a lot to do with this. People don't trust tech firms to protect personal information because of the massive scale at which malware has enabled such information to be compromised and abused. Companies and governments just don't seem to have the ability to prevent this, either because of a shortage of concern or funds or skills or understanding, or an overabundance of criminal activity, or all of the above.

Ok, but what can we do about this?

My own opinion is that the overabundance of criminal activity, while not the whole problem, is a huge part of the problem. Yes, it's true that many organizations could do better at cybersecurity, but it's also true that the governments of the world have massively failed their citizens when it comes to malware-enabled cybercrime. This failure is so huge that it's now compounding the problems created by a deadly pandemic. Maybe, now that lives are very clearly on the line, more people in positions of power and influence will begin to take the Malware Factor more seriously.

But what would that look like? How does taking the Malware Factor more seriously at the highest levels translate into action? I'm going to list three suggestions. You may not like them. You may even scoff at some or all of them. But I'm already used to that, as I said in this blog post and Medium article from 2017 (same story, two different places). FYI, I'm still fairly sure I'm right.

1. International cooperation and global treaties are the only way to make a serious dent in cybercrime and cyberconflict, and the citizens of the world should push their governments in this direction. I realize this is going to be hard while three of the biggest malware-making countries are still run by Trump, Putin, and Xi, respectively—but that is no reason not to try.

2. Cybersecurity products and services should be made available at lower or no cost.

As I've been saying for more than a decade now, information system security is the healthcare of IT/ICT. Just as profit-based healthcare is, in my opinion and practical experience, a bad idea, so is people making large fortunes from protecting the world's digital infrastructure—as opposed to a decent wage. Besides, a profit-based approach to securing ICT has thus far failed to make any lasting dents in the cybercrime growth curves (see chart of Internet crime losses, from this IEEE blog post by Chey Cobb and myself). 

3. We need to consider an end to broadcasting and bragging about new and interesting ways to gain illegal access to information systems. Justifying this as a way to improve security and reinforce the message that it needs to be taken more seriously might have been valid at some point in the past, but that validity has been seriously eroded. Fully open, freely accessible, in-depth research on things that enable ethically-challenged individuals or governments to seriously undermine our collective future is not, in my opinion, a good idea. (Think of someone making and distributing a version of COVID-19 that doesn't give victims a tell-tale cough—cool?)

I'm happy to hear more suggestions, or your thoughts on what's wrong with these. Also happy to hear about any moves in these three directions. (I am already familiar with the work of the Global Commission on the Stability of Cyberspace—still hoping they take up the idea of a Comprehensive Malware Test Ban Treaty.)

Note: If you found this article interesting and/or helpful, please consider clicking the button below to buy me a coffee and fuel more independent, vendor-neutral writing and research like this. Thanks!


#cybercrime #dataprivacy #privacy #infosec #FTC #FCC #COVID_19 #Covid19UK $FB $AMZN $AAPL $GOOG $MSFT #technology #trust #survey 

Saturday, May 09, 2020

Defcon 2020 Cancelled: Can sad news also be good news?

Now with audio!

Talk about mixed emotions! Large swathes of the hacking and information security world are feeling all kinds of sad-and-yet-glad right now. Why? Because, as of May 8, 2020, this year's Defcon is canceled. This was to have been the 28th consecutive Defcon, a very popular annual hacking conference that is traditionally held in Las Vegas around the start of August.

It was also going to be an anniversary event of sorts for me. The canceled Defcon was to have been the 25th anniversary of my first Defcon. That was in 1995 and was known as Def Con III, as you can see from the t-shirt.

Looking on the bright side, Defcon 29 in 2021 is already scheduled, as a meatspace event, for August 5 to 8 (see WIRED article). But the main piece of good news is the very thing that many folks—myself included—are also sad about: we won't be seeing each other this August, at least not in person.

There is more good news: this year there will be a virtual conference. I know that not everyone enjoys this format, but I am pleased that this path was chosen. I am also grateful that the hacking community has made a very difficult, yet also very sensible, decision: let's not risk spreading COVID-19 by gathering in person in Las Vegas hotels in our tens of thousands to spend several days in packed talks and crowded corridors (estimated attendance last year was 30,000).

And there's even more good news, from way back in the 1990s. Back then, Jeff Moss—the founder of the event—had the wisdom and the foresight to insist that the talks delivered at Defcon be archived. That means anyone with spare time on their hands and an internet connection—maybe in a locked-down-shelter-in-place scenario—can binge on past events.

That also means people can still listen to what I said, 25 years ago, preserved as an audio (.m4b) file. Just scroll down this page: DEFCON III Archive. My talk was titled: The Party's Over: Why Hacking Sucks. My goal was to generate dialogue about the ethics of hacking, and I think I succeeded. In fact, the audio captures that quite well. (Bear in mind that this was 1995—I spoke at numerous events in the twenty-teens where organizers seemed incapable of capturing and curating audio this efficiently.) Click this link to listen in your browser; it's about 49 minutes long and while the sound starts out rough, it gets better quickly.

As someone who had been working on the computer security problem since the 1980s, I have to say that I learned a lot from that 1995 session and really appreciated everyone's input. I was invited back the next year and my talk was about how people might go about transitioning from hacker to infosec professional. Of course, like many early DEFCON talks this one went in several other directions at first—there was even a steam train excursion—but you might still enjoy listening. Here is a link to that talk. Be warned that there is some swearing, but it is in a very polite voice.

Over time, the Defcon archives have evolved to become a quite amazing cornucopia of knowledge and history, a feast for eager minds, and a legacy for future generations.

Thanks Jeff! Thanks to your foresight, it's possible to find some good news in this sad news.

Friday, April 03, 2020

The Malware Factor: The biggest problem our postdigital world has refused to face, so far

Image for The Malware Factor, background based on screenshot of a frame in BBC documentary Hidden Life of the Cell

I believe that the misuse and abuse of information and communications technology (ICT) threatens to undermine all present and future human endeavors, from raising children to reining in pandemics.

[Update May 14, 2020: a longer version of this article is now available here, with narration. You can also find it on YouTube.]

I find it helpful to think of this phenomenon as “the malware factor,” mainly because it is enabled by, and embodied in, malicious software, or malware. The following is an attempt to explain this point of view.

A pandemic example

In one of his less empirical moments, the English philosopher Francis Bacon wrote that "prosperity doth best discover vice, but adversity doth best discover virtue." Clearly, this was said before the invention of email and its subsequent perversion by morally-challenged humans bent on leveraging adversity at scale.

As any information security professional will tell you, when people are stressed by the struggle to cope with a crisis—a global pandemic, for example—they are more likely to click links that lead to scams. Of course, COVID-19 has led to many examples of virtue, but it has also sparked a global surge in digitally-enabled vice, a.k.a. cybercrime, a.k.a. crime.

(As I have said elsewhere, in a postdigital world, the term cybercrime is of limited utility. While we cannot say—yet—that all crime is cybercrime, just about all crime has cyber elements.)

Fortunately, some of the fine folks working to keep at bay the surge in digitally-enabled COVID-19 vice have been documenting the situation. By March 12, Alex Guirakhoo, research analyst at Digital Shadows, had already catalogued a sickening array of technology abuse in a lengthy blog post titled How cybercriminals are taking advantage of COVID-19: scams, fraud, and misinformation.

Guirakhoo opens with an observation that has been true since at least September of 2001: "In the wake of large-scale global events, cybercriminals are among the first to attempt to sow discord, spread disinformation, and seek financial gain." He goes on to explain the implications of this twenty-first century reality:
"While COVID-19 itself presents a significant global security risk to individuals and organizations across the world, cybercriminal activity around this global pandemic can result in financial damage and promote dangerous guidance, ultimately putting additional strain on efforts to contain the virus."
While I might have said immediately instead of ultimately, Guirakhoo accurately framed the problem, a problem that is far more serious than most people realize, with implications very few have been willing to face—although I am hopeful that this is about to change.

Factoring in Malware

The current reality is that large-scale global events—as well as many regional and even personal human endeavors—are negatively impacted by unwanted human activity in cyberspace, activity that is enabled, at a fundamental level, by malicious code.

This is true of events or endeavors that take place in meatspace, or cyberspace, or both. For example, the physical distribution of medicine and equipment to contain a pandemic is negatively impacted, as is the strategy of having people use computers and the Internet to work from home to contain a pandemic.

Before digging deeper into the definition and role of malicious code in this current reality, let me address why I think it is helpful to refer to this reality as postdigital. The easiest way to do this is to quote Professor Gary Hall, Director of the Centre for Postdigital Cultures at Coventry University:
the ‘digital’ can no longer be understood as a separate domain of culture. Today digital information processing is present in every aspect of our lives. This includes our global communication, entertainment, education, energy, banking, health, transport, manufacturing, food, and water-supply systems. Attention therefore needs to turn from the digital understood as a separate sphere, and toward the various overlapping processes and infrastructures that shape and organise the digital and that the digital helps to shape and organise in turn.
There is no need for me to restate what Hall says there; I agree that we need to acknowledge that "the digital" is now part of our lives and life on Earth, whether we like it or not (and to be clear, while "going off the grid" can minimize your interaction with the digital, it is still a part of your world—just check the night sky if you don't believe me).

Which brings me to these three assertions:

1. the misuse and abuse of information and communications technology (ICT) threatens to undermine all present and future human endeavor, from raising children to reining in pandemics; and,

2. it is helpful to refer to this as “the malware factor” because it is enabled by, and embodied in, malicious software, or malware, and embedded in the infrastructure of our postdigital world.

3. The use of malware by criminals and governments during the COVID-19 pandemic is prima facie evidence that our postdigital reality is based on code, abuse of which is impossible to prevent.

Still image of virus components in a human cell from Hidden Life of the Cell
Virus components in a human cell (BBC)
I am going to end this piece right there, and leave it right here, with this coda: I'm not wedded to "The Malware Factor" as the name for this phenomenon, but before you discount it, please know that I have more to say on this, and it involves cells and viruses and infrastructure, and maybe a few passages from Genesis (the religious text, not the band).

In the meantime, it might be helpful to watch this BBC video: Secret Universe: The Hidden Life of the Cell (warning: contains scenes of simulated violence between a virus and a human cell; may be geo-fenced, so here is an alternative source and also here).

And finally, here's a friendly reminder that, if Earth's leaders continue their pathetic track record on reining in malware, it will become a problem on Mars too. That's assuming humans make it to Mars safely, which I think is unlikely given the #MalwareFactor.


Monday, March 30, 2020

Crime in the time of coronavirus: be wary of windfalls and refunds, even those that don't look pandemic-related

URGENT: Please click this link to claim your refund. 

Don't worry, that's not an actual link, but you will probably be seeing emails and texts with links like that in the coming weeks. At a time when many people could use a little extra cash, the temptation to click those links can be strong.

Here in this screenshot you can see one such message, a scam text that came to my iPhone today, supposedly from the UK government office that handles driving licenses, the DVLA.

The links in these messages take you to forms where, in order to get your refund—or other promised payment—you type in your bank account or credit card details.

Sadly, some people will click those links and supply those account details. (The form you see when you click the link looks quite realistic; see below.) Some time later those people will discover that criminals are helping themselves to those accounts, transferring funds out of bank accounts and running up charges on credit cards.

And criminals are betting that more people are more likely to click those links today than they were just a few months ago, in the time we now know as B.C. (Before Coronavirus). Why? Because right now people are worried about running short of money and thus more susceptible to scams like these. It's all part of a well-tested criminal strategy, one that has been used to generate ill-gotten gains for decades: exploit the times in which we live.

For example, back during the Great Recession of 2007-2009 I got several calls from otherwise sensible friends asking if some scam or other might just be real. They were hoping that a sudden windfall might really come their way, wishing that an unexpected source of funds might actually materialize. Criminals know these hopes and wishes and exploit them.

Tough times breed twisted crimes!

Of course, when the coronavirus first started to be a hot topic, criminals tried to exploit our eagerness for information as a hook to deceive and defraud. Then they shifted to fake coronavirus cures or deals on medical products in short supply. You may have noticed that security experts were quick to raise red flags about these tactics. That's because there is a well-established body of cybersecurity knowledge which predicts that these types of crimes will be attempted around any attention-grabbing event.

Criminals know this too; they realize that there is a relatively small window of opportunity to leverage a timely hook before everyone hears the hook-specific warnings. So the next play in this particular chapter of the cybercrime playbook is to use deceptive messaging that is not linked to the current crisis, but still taps the desperate hopes and needs that the crisis has generated.

What to do? 

Be wary of any message or email that you receive if it offers you money or other benefits, particularly if you were not expecting them.

If you have any doubts, just use your phone or computer to search for a few words from the message, maybe adding the word scam for good measure.

As you can see from the screenshot on the right, when I did that on my iPhone the search results immediately provided me with enough information to know that this was a fraudulent message, containing a link that I definitely should not click, regardless of how much I wanted the money.

Remember: Think before you click!

Sunday, March 29, 2020

Coronavirus and cybercrime: please say criminals, NOT hackers

Not all criminals wear hoodies.
Not all hackers are criminals.
Photo by Luis Villasmil on Unsplash
This BBC headline is both a sad sign of the times and a sad reminder of how sloppy the media can be:

"Coronavirus: How hackers are preying on fears of Covid-19"

I bet the title was not chosen by the writer of the article.

The article itself, by Joe Tidy, is good stuff, and I encourage you to read it because everyone needs to be aware that—as he writes in the opening sentence—at this point in time, "Cyber-criminals are targeting individuals as well as industries, including aerospace, transport, manufacturing, hospitality, healthcare and insurance." And they are using the public's fear of coronavirus to advance a criminal agenda: infiltrate systems and compromise them. This is despicable behavior and people who engage in it should be ashamed of themselves.

But it is wrong to call the people who are doing this hackers. These are criminal hackers; or, if space is limited: criminals. To be clear: people who hack for criminal purposes are criminals, not hackers. There are many people who hack for non-criminal purposes, some of them very noble and unselfish. For example, right now there are people "hacking" solutions to the shortage of medical equipment and apps to help capture and track data that could be critical to tackling coronavirus (see "Good use of Hacker" below).

Editors who gloss over this extremely important distinction do the world a disservice. As someone who has spent the better part of three decades trying to explain why the world needs to do more to shut down the criminal abuse of information technology, I can assure you that confusion over the word "hacker" has been a serious distraction if not an outright impediment.

One of the main strategies for assessing the security of a computer network or digital device is to hire someone to try and defeat it, i.e. to hack it. That someone is an ethical hacker, but they are in short supply, due in part—in my opinion—to the stigma that the media has attached to the word hacker. The dynamics of the confusion over hacker are too complex to unravel here, but this article provides a simplified overview of the good/bad hacker landscape, and this one helps explain good hacking. You might also want to check out a session at a hacker convention (DEF CON III, 1995) in which I explored arguments for and against hacking with some of the earliest practitioners.

A postdigital perspective

Having done several stints as a writer and editor as well as publisher, I realize that it's a pain to have to constantly distinguish between good hackers and bad hackers, white hats and black hats, ethical and criminal—not to mention the hits to your word counts and screen space. On the other hand, think how good it is to educate your readers about this increasingly common aspect of daily life, the constant struggle between criminal hackers and the ethical hackers who work so hard to thwart them.

Furthermore, it is suitably postdigital to just say criminals. To use the word hackers when talking about criminals suggests you can't see how modern life has evolved. Allow me to quote Professor Gary Hall, Director of the Centre for Postdigital Cultures at Coventry University:
the ‘digital’ can no longer be understood as a separate domain of culture. Today digital information processing is present in every aspect of our lives. This includes our global communication, entertainment, education, energy, banking, health, transport, manufacturing, food, and water-supply systems. Attention therefore needs to turn from the digital understood as a separate sphere, and toward the various overlapping processes and infrastructures that shape and organise the digital and that the digital helps to shape and organise in turn.
For good or ill, hacking shapes and organizes the digital. The word for people who commit crimes in our postdigital world is criminal, not hacker. Crimes committed in cyberspace are crimes, not hacking. Bearing these things in mind will help us better understand the fact that we are way behind in our efforts to get a handle on crime (something that I have documented in depth).

Last year I was honored to be part of a much-needed international, vendor-neutral project to address the challenges of cyber-deterrence. The output of the project is freely available here. But even that project started out with a less-than-helpful headline: "To Catch a Hacker." I urged scaling back on that phrase as the project evolved, and I am now trying to be upfront with interviewers and editors: please don't quote me if your headline is going to imply—as the BBC's does—that all hackers are criminals.

Finally, to help out editors who like to learn by example—and to demonstrate that I am not singling out the BBC—here are some bad use cases and some good use cases:

Bad use of hacker:
Good use of Hacker:


Monday, February 24, 2020

Crime metrics matter: two charts of the big mess we're in, even if we're not sure how big it is

[Update: Advancing Accurate and Objective Cybercrime Metrics, my article in the Journal of National Security Law & Policy is now available online.]

We are now about 50 years into the information age, so let me ask you: How secure is your personal information? If you're like most adults in America, the answer is probably: "not as secure as it used to be."

Chart linked to original report.
That is what the social scientists at Pew Research Center found last year when they carried out the survey behind the chart on the right (click to access the full report).

As you can see, 70% of folks said that they felt their personal information was less secure than it was five years ago; furthermore, they were more likely to think that way if they were 50 or older, more educated, or in a higher income bracket.

My take on these numbers is that they reflect the relentless increase in cybercrime, or what my good friend, the gifted security researcher Cameron Camp, calls cyberbadness: the apparently never-ending litany of technology-enabled scams, frauds, thefts, losses, and disruptions that seem to be victimizing more and more people and organizations. Note that I used the word "seem" intentionally because some observers will point out that public perception of criminal activity is not always in sync with reality. At times that may be true, but before we can determine whether people are over- or under-reacting to cybercrime, we need to ask: what is the true scale and impact of cybercrime? And quite frankly, nobody has a good answer at the moment.

Why? Well, I've said it before, years ago, and again last year: "the importance of metrics to crime deterrence would appear to be both critical and obvious, but despite this there is a persistent cybercrime metrics gap." As far as I am concerned, that is a problem, one that I addressed at some length in a recent law journal article that is currently available online. The following quote may help to put the problem in perspective:

“[u]ntil there are accepted measures and benchmarks for the incidence and damage caused by computer-related crime, it will remain a guess whether we are spending enough resources to investigate or protect against such crimes… In short, metrics matter.”

Those words were spoken 16 years ago by an FBI agent, Edward J. Appel, someone who knew a thing or two about metrics (his father, Charles A. Appel, founded the FBI's Technical Laboratory).

Unfortunately, casual use of Google gives the impression that we have an abundance of metrics about cybercrime, with search results like "300+ Terrifying Cybercrime & Cybersecurity Statistics" and "110 Must-Know Cybersecurity Statistics for 2020." The problem is, the sources for such numbers are often suspect in terms of methodology and/or confirmation bias.

I addressed these issues in the Journal of National Security Law & Policy article mentioned above (Advancing Accurate and Objective Cybercrime Metrics, publication pending but currently available online). And I had spoken at length about the problem at the 2015 Virus Bulletin security conference (you can find my paper, a video of my talk, and my slides here: Sizing Cybercrime: incidents and accidents, hints and allegations). The sad reality is that, when it comes to timely and objective official statistics about crimes committed in cyberspace, they are in short supply.

Even sadder is the fact that the metrics we do have, such as the Internet Crime Reports issued by the FBI's Internet Crime Complaint Center (IC3), make for depressing reading, not to mention depressing charts like the one on the right. This documents the rise in total annual crime losses reported to the IC3 from 2003 through 2019.

As you can see, the year-on-year increase has become quite acute. Yes, I know the chart is somewhat compressed to fit this page layout, but you would have to spread it quite wide to get rid of the "hockey stick" that is the last five years. I certainly wouldn't bet against it blowing through $4 billion in the next report.

And yes, there are issues with using the IC3 numbers as crime metrics. They are not collected as an exercise in crime metrics, but rather as part of just one avenue of attack against the crimes they represent. However, I have studied each annual report and am satisfied that collectively they provide solid evidence of a real world cybercrime impact trend that looks very much like the line shown here.

My law review article was one of several generated by a range of independent subject matter experts as part of the Third Way Cyber Enforcement Initiative. The initiative was an impressive multi-stage effort to coordinate inter-disciplinary input on efforts to tackle the cybercrime problem. When commissioned papers reached draft stage, authors attended a day-long, mid-summer workshop at New York University School of Law for live peer review. By October, this Third Way initiative had already produced results, including an excellent summary of the metrics issue in The Need for Better Metrics on Cybercrime, from Third Way Policy Advisor, Ishan Mehta.

As papers continue to appear, check the website of the Journal of National Security Law & Policy. For example, right now you can access this important contribution from Amy Jordan and Allison Peters on Countering the Cyber Enforcement Gap: Strengthening Global Capacity on Cybercrime, and this excellent review of the use of criminal charges as a response to nation-state hacking from Tim Maurer.

(Acknowledgement: I am deeply grateful to all who participated in this project, for their input, insight, enthusiasm, and support.)

Monday, January 20, 2020

Happy New Year? Decade? 2020?

Greetings! I am happy you're here, reading this page, because now that I'm no longer writing for We Live Security, this blog is one of the ways I will continue to share what I hope is useful research and analysis. (There's more on the big changes I made in 2019 here.)

I really do hope that you have a happy and safe and satisfying 2020, and a fulfilling decade, but I put those question marks up there in the title because right now I see serious challenges ahead. Frankly, I'm not sure the world is ready, or able, or even willing, to meet them.

But gloomy as that may sound, I do see some bright spots; I mean, the 2020 puns are bound to wear out soon, right? And people will eventually stop saying things like "I can see clearly now that 2020 is here." Which reminds me of the 2015 TEDx event in San Diego that was actually called 20/20 Vision.


I had the honor of speaking at that event. My topic was cybersecurity, cybercrime, and the need for more women and minorities in technology leadership. I framed these remarks (yes, there's a pun there if you like) as a choice between two futures.

And of course, that means I have a lot more work ahead of me - explaining what we're doing wrong, why we're doing it wrong, and how critical it is that we change. But just for the record, here's that 2015 talk. Happy 2020?