Thursday, September 24, 2020

A Brief History of Digital Technology Abuse: The First 40 Chapters

Digital technology is at the heart of modern life—our communication systems, our methods of travel and transportation, our education, entertainment, medicine, and much, much more—and it has a problem: we keep abusing it.

[Graph of internet crime losses]

You can think of this as a problem with the technology: it is inherently vulnerable to abuse. Or you can think of this as a problem with people: we keep exploiting those vulnerabilities for selfish ends. 

Either way, it is a big problem, one that keeps getting bigger.

[Insert standard paragraph full of statistics documenting the undeniable rise of technology abuse despite record levels of spending to prevent such abuse — including at least one graph to help visualize this trend and cite source—and remind readers the author has published peer-reviewed papers on this topic.]

Sadly, some people who develop new digital technology products continue to behave as though this problem doesn't exist, or if it does, it's not a big problem, and besides, it will soon be solved so that we can all enjoy the benefits of whatever new technology these people are bringing to market. 

It is for these people—the technophilic, "an app can fix that," uber-optimistic techbro solutionists—that I have been sketching out a brief history of digital technology abuse. Here's a screenshot of the first 40 chapters:

[I apologize for using a screenshot and not a text-based table that folks can copy and paste (have you tried building a table in Blogger?). However, an easy-to-grab text list, in roughly chronological order, is included at the end of the article. Also, the table above should be read column by column, left to right, top to bottom.]

The idea is that each chapter in the list is a technology that has proven vulnerable to abuse. (You can play mix-and-match with these, for example, email is abused to distribute documents containing macro technology that is abused to infect personal computer systems with malicious code that abuses attached digital cameras to capture embarrassing images and threatens to share them through abuse of social media.)

Of course, you may take one look at this table and realize some technologies are missing. Indeed, you may want to make your own list, and I think that's a great idea. My list is somewhat random and clearly not definitive. I don't apologize for this because (a) I was in a hurry, and (b) any attempt at a complete list would be too long for a brief history of digital technology abuse.

The Digital Technology Product Warning

The goal of the 40-chapter list is to challenge people to name one or more digital technologies that are not vulnerable to abuse. (To be clear, I can't think of one.) And if there are none, then I would argue that every new piece of code-based or code-enabled technology must now come with a warning, a warning that has to be included in any discussion, reporting, or promotion of that technology. The warning should read something like this:
This product includes digital technology that is vulnerable to abuse which could cause harm or injury, including but not limited to failure to function correctly, loss of privacy, and reduced security.
I am sure some people will object when governments start proposing that such warnings must appear prominently on existing products, and be included in any reporting of soon-to-be-released products. One likely objection: "There's no way you can prove our product will be abused."

The counterargument is: "There's no way you can prove your product is immune to abuse, but there is a very long history of digital technology products being abused." (Insert handy reference to "A Brief History of Digital Technology Abuse: The First 40 Chapters," S. Cobb.)

Of course, savvy readers will know that many of the digital technology products upon which we have come to rely for the smooth running of our daily lives already include warnings. The problem is that these are not very prominent. Indeed, they are often buried deep within the manual. However, poke around and you will find that any product that runs code comes with a warning like this: 
This product uses software that is provided 'as is' without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. In no event shall the supplier of this software/product be liable to you or any third parties for any special, punitive, incidental, indirect or consequential damages of any kind, or any damages whatsoever, including, without limitation, those resulting from loss of use, data or profits, whether or not the supplier has been advised of the possibility of such damages, and on any theory of liability, arising out of or in connection with the use of this software.
So, for example, the next time you go to unlock your car with your phone and find—as thousands of Tesla owners did recently—that this feature isn't working, well, too bad. You were warned. You have no legal recourse. That's just the way it is. If you check the Tesla documentation I'm sure you will find language like the paragraph above. (You might also find that the same language applies to the self-driving software—I don't have a Tesla handy or I would look myself, but not while driving.)

The point is, even a brief history of digital technology abuse should be enough to prove that humans have been developing new technologies faster than they have established appropriate ethical norms within the societies into which these technologies are deployed. I believe there is an urgent need for us humans to get serious about monitoring and controlling technology development and deployment in ways that facilitate closing the technology-ethics gap. 

We can think of this gap as: "a mismatch between the value rationality of our ends and the instrumental rationality of our means." That quote is from Phil Torres, in Chapter 6 of his excellent book Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks (available on Amazon, at Powell's, etc.).

Another way of putting it comes from Swedish-American physicist Max Tegmark, as quoted by Torres: "A race between the growing power of technology and growing wisdom with which we manage it." There's no doubt in my mind that:
  • the race is on
  • it's a marathon and not a sprint
  • it's probably going to be a multi-generational relay
  • it's the most important race for the human race
  • right now we are not looking like winners
I will be returning to this topic, but for now I'm off to the mental gym to do some circuits. I'll just leave the text of the chapter list for A Brief History of Digital Technology Abuse right below here.

Sunday, September 20, 2020

The "Insider Plus" threat: what the Tesla and Twitter attacks say about the resurgence of an enduring risk

[Image of logos suggesting threats are insider, outsider, or both]

The "Insider Threat" to information system security is as old as computers, but in recent decades it has received less attention than external threats. Yet there is reason to believe that the risk posed by insiders acting on instructions from outsiders may be on the rise; we can usefully refer to this as the "Insider Plus" threat. In my assessment, the number of organizations that are fully aware of, and well-prepared to defend against, this threat is problematically small.

That's the short version of this article, which explores the implications of recent security incidents at Twitter and Tesla, finding them indicative of several different-but-related phenomena that suggest the insider plus risk will increase over time. I have also provided some hopefully useful background on insider threats.

Twitter, Tesla, and Three Things True in 2020

Reporting in July on the attack that resulted in the hijacking of Twitter accounts belonging to high-profile individuals and brands, CSO Online described it as: "the perfect example of the impact a malicious or duped insider and poor privileged access monitoring could have on businesses." (Twitter VIP account hack highlights the danger of insider threats).

The next month, Government Technology reported on "an alleged million-dollar payment offered [to an insider] to help trigger a ransomware extortion attack" on the Tesla electric car company. This appeared in Dan Lohrmann's extensive piece on ransomware during COVID-19, in which he quotes Katie Nickels, the director of intelligence at security firm Red Canary:

"It really changes the game for the defenders. Before today I would not have suggested companies include an insider attacker installing ransomware in their threat model. Now everyone has to shift their thinking. If we know about this one case that’s been documented, there might be more."

I'm willing to bet there have been more, if only because this type of attack is a natural outcome of three currently observable phenomena:

  1. Some organizations have become adept at defending against external attackers.
  2. Very hard times, such as a global pandemic, make some employees very susceptible to unethical conduct.
  3. The ethical status of abusing access to information systems remains vague and/or malleable in the minds of many humans.

Consider This Scenario 

You want to extort a company with deep pockets that relies on computer systems that you know you can disable with code in your possession, but the company is doing a good job of preventing external access to those systems. So you decide to get an insider to help you. There are numerous ways of doing this, including but probably not limited to:

  • A monetary bribe: which might be particularly effective right now, given the current levels of economic hardship and uncertainty.
  • A chance at fame: which may appeal to some individuals for whom abuse of digital technologies is a sport or side gig or a form of protest (all of which can be said to be enabled by ambiguities in the ethics of technology). 
  • A promise not to reveal embarrassing or damaging information: also known as blackmail, potentially facilitated by unauthorized access to devices and accounts belonging to the targeted insider. 
Given the plausibility of this scenario, every company needs to check its approach to data privacy and cybersecurity to make sure it addresses the risk that an external attacker may "partner" with an insider. Clearly, privileged access monitoring needs to be in place and in use, but so does management's awareness that insiders may be more susceptible to breaches of IT security policy and criminal statutes during this pandemic.
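To make "privileged access monitoring" a little more concrete, here is a minimal sketch of the kind of check such monitoring performs: comparing privileged actions against a role-based policy and flagging anything outside it for human review. The event schema, role names, and actions below are hypothetical illustrations, not a reference to any specific product's API.

```python
# Minimal sketch of privileged-access anomaly flagging.
# The event schema, roles, and actions are hypothetical illustrations.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class PrivilegedEvent:
    user: str           # account performing the action
    action: str         # e.g. "reset_account", "export_data"
    timestamp: datetime


# Hypothetical policy: which privileged actions each role may perform.
ALLOWED_ACTIONS = {
    "support": {"reset_account"},
    "admin": {"reset_account", "export_data", "change_permissions"},
}


def flag_events(events, roles):
    """Return events where a user performed an action outside their role's
    allowed set -- candidates for human review, not proof of abuse."""
    flagged = []
    for event in events:
        role = roles.get(event.user)
        allowed = ALLOWED_ACTIONS.get(role, set())
        if event.action not in allowed:
            flagged.append(event)
    return flagged


if __name__ == "__main__":
    roles = {"alice": "support", "bob": "admin"}
    events = [
        PrivilegedEvent("alice", "reset_account", datetime(2020, 7, 15, 9, 0)),
        PrivilegedEvent("alice", "export_data", datetime(2020, 7, 15, 9, 5)),
    ]
    for event in flag_events(events, roles):
        print(f"REVIEW: {event.user} performed {event.action} at {event.timestamp}")
```

A real deployment would draw events from audit logs and add rate and time-of-day rules, but even this simple allowlist check illustrates the point: without some such monitoring in place, an insider acting for an outside attacker leaves no alarm at all.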

Consider This Bibliography

Anyone seeking a deeper understanding of insider threats will benefit from reading insider case studies, such as those aggregated by the CERT Insider Threat Center (Cappelli, Moore and Trzeciak, 2012). The Center has documented hundreds of internal computer crimes that impacted companies in sectors like banking (Randazzo, Keeney, Kowalski, Cappelli, and Moore, 2004), information technology and telecommunications (Kowalski, Cappelli, Moore, 2008), critical infrastructure (Keeney, Kowalski, Cappelli, Moore, Shimeall and Rogers, 2005), and financial services (Cummings, Lewellen, McIntire, Moore, and Trzeciak, 2012). 

While the primary goal of the Center was to discover and disseminate practical methods of mitigating insider threats, the case studies are analyzed according to academic standards; for example, methodological limitations, like the inability to generalize findings to all organizations, are duly noted (Cappelli et al., 2012). These studies reveal how a wide range of insiders exploit opportunity to commit crimes, often through a simple betrayal of the trust placed in them as employees or contractors.

Some insiders may, like Edward Snowden (Poitras, 2014), have far-reaching “super-user” access to the organization’s assets, be they physical or digital; yet CERT has recorded many cases where the crime was committed by an insider with few technical skills and only limited access. These studies document how even limited trust can, if betrayed, enable criminal activity. It may be theorized that such betrayal, by colleagues and co-workers, chosen by management to work at the company, and of whom there is at least a minimal expectation of trustworthiness and shared interests, may have a greater negative psychological impact than the criminal act of an outsider, a person of whom there are no pre-existing positive expectations. 

As I noted in my master's degree essay—from which the preceding three paragraphs were taken—the threat of betrayal by trusted insiders is real, for there can be no doubt that the following is true: "never before have so many insiders had so much access to so much computerized information of such great value." 

Furthermore, never have there been so many ways to monetize—often at relatively low risk—unauthorized access to information systems and the information they process and store. What strikes me as particularly worrying right now is the potential for malefactors to adopt increasingly aggressive meatspace crime tactics in their quest for access to protected systems.

I will be discussing this further and providing links here.