Thursday, April 29, 2021

From cyber-crime metrics to cyber-harm stories: shifting cybersecurity perspectives and cybercrime strategies

Is measuring the amount of cybercrime important? I have argued that it is, for several different reasons which I have presented in many places; for example, in this article: "Advancing Accurate and Objective Cybercrime Metrics," in the Journal of National Security Law & Policy.

For me, the most pressing reason to pursue accurate and objective cybercrime metrics is the potential of those numbers to persuade governments and world leaders to do more to counter cybercrime (as in: detect, deter, disrupt, prosecute and sanction perpetrators). The persuasion goes like this: 
  1. Here's how big the cybercrime problem is.
  2. Here's how fast it is growing despite current efforts to solve/reduce it.
  3. Can you see how bad things will get if you don't do more to solve/reduce it?
A similar persuasion strategy has long existed in the cybersecurity industry as part of its efforts to make technology safer (while selling more security products and services—a reality that has undermined the value of industry metrics in policy debates). 

The efficacy of this strategy—"look at these numbers, that's how bad the cyberbadness is, it's time you did more to protect us/you"—has been disappointing to say the least, given the rate at which the cybercrime problem keeps growing. 

Back in 2014, I decided to research this lack of efficacy, exploring risk perception as it relates to crime and technology. I delved into cultural theory of risk, cultural cognition, white male effect, identity protective cognition, and the science of science communication. One thing I learned was that some people are unmoved by statistics and data. 

Relying on stats+facts to convince everyone that there is an urgent problem, one which merits attention and action, is a mistake. For whatever reason, some folk are relatively immune to stats+facts; however, they may be moved by stories.

Ironically, this was a phenomenon that I had already experienced in my early days of promoting security solutions. For some audiences there was nothing more effective than a case study, a story of how some person or organization had become a victim, or how someone had avoided becoming a victim. Even before then, when I was writing my first computer security book, I had made sure that I included stories from which people could learn the value of security policies and practices (The Stephen Cobb Handbook of PC and LAN Security, 1991). 

The problem you run into when you try to use victim stories to pitch security is that, historically, very few people have been willing to share their stories. This may be due to embarrassment or, ironically, for operational reasons. (As a CISSP, I would advise organizations not to share the helpful story of "how Acme firewall is keeping us safe," or the helpful tale of "how our network was penetrated despite Acme firewall.")

All of which leads to some helpful coincidences. If you investigate the amount of harm caused by cybercrime, rather than just count the number of cybercrimes committed, you get more than just persuasive data: you get moving stories. 

Furthermore, you get a fresh perspective on the problem of cybercrime and the challenge of getting more people to take it more seriously, at four different levels:
  1. Personal: understand how I, or my organization, could be victimized and steps I can take to minimize the risk of that happening.
  2. Political: grasp the level of pain and suffering caused by digitally enabled or enhanced crimes, and calculate their impact on society, down to the medical and social care burdens that victimization generated.
  3. Strategic: use this perspective to argue that funding for medical and social care should include cyber-harm reduction initiatives, because fewer people scammed = smaller care burden.
  4. Professional: pursue both qualitative and quantitative research into the harms caused by rampant cyberbadness, from criminal successes to cybersecurity fails.
Moving forward, I want to explore all four levels and share what I find. The process took a step forward this week when I talked myself into delivering a training session about scam avoidance to a community support group. I've done this in the past, but in America. This session will be delivered to a UK audience, specifically people who support carers. 

The Carer Factor

Since we moved back to the UK in 2019, we have found that the importance of social care and the work of unpaid carers are widely recognized. These carers—who tend to be known as caregivers in America—are people who have become part-time or full-time unpaid carers for relatives and friends. (As you can imagine, part of that care work may include technical support, and that can extend to several aspects of cybersecurity.)

Local governments and charities in the UK make a concerted effort to support unpaid carers, both practically and emotionally. Let me give you an example: thanks to a charity called Carers Trust, I am formally registered as the designated carer for my partner Chey, and for my mother. That means, among other things, that if I get hit by a bus and first responders check my wallet, they will find a card that says I care for these two people, plus a number to call if I am incapacitated. 

That call triggers several services. Carers Trust will step in to provide care to my carees if I cannot be there for them. The organization already has a comprehensive file on the needs of my carees, their circumstances, and so on. Furthermore, if the bus misses me, but I feel like I could really use a break from caring, the carers' support group can cover for me.

I'm sure you can imagine what a huge weight this care group has lifted from my shoulders, and how much peace of mind it has provided to my carees, now that they know there is backup help available. On a less dramatic, but still very important, level, the care group provides me with a place to meet other carers, and I find this helpful, both psychologically and practically.

My involvement with the care community has led me to consider fresh lines of inquiry into the reduction of cybercrime and technology abuse. Indeed, I can see this care group, and the many others like it around the country, becoming a valuable resource in the quest to reduce the harms caused by scammers and fraudsters.

If you check back here in the latter part of May there should be a link to the training session content. (Like all of my content these days, it is free and suitable for sharing.) In the meantime, here are some links that might be of interest:
  • A detailed look at the impact of fraud in general, 24-page PDF of a chapter from the book Cyber Frauds, Scams and Their Victims by Cassandra Cross and Mark Button, 2017.
  • The Fight Cybercrime website which has a lot of helpful info for victims of online fraud, in 12 languages!
  • The source for the statistic that "older [scam] victims are 2.4 times more likely to die or go into a care home than those who are not scammed" — PDF of Age UK report, 2016.
  • The website of Carers Trust in the UK: "a major charity for, with and about carers".

Note: If you found this page interesting or helpful or both, please consider clicking the button below to buy me a coffee and support a good cause while fueling more independent research and ad-free content like this. Thanks!




Thursday, March 18, 2021

As predicted, Internet crime surged in 2020, losses up 20% based on FBI and IC3 reports: analysis and opinion

Losses to individual and business victims of internet crime in 2020 exceeded $4 billion according to the recently published 2020 Internet Crimes Report from the FBI and IC3; this represents a 20% increase over losses reported in 2019. The number of complaints also rose dramatically, up nearly 70%.

IC3/FBI internet crime data graphed by S. Cobb
Throughout 2020, criminologists and cybersecurity experts had expressed growing fears that 2020 would be a big year for internet crime, particularly as it became clear that many criminals were prepared to ruthlessly exploit the COVID-19 pandemic for their own selfish ends.

When the 2019 Internet Crimes Report was published in February of 2020 it documented "$3.5 billion in losses to individual and business victims."

What I said back then, about the loss number that I expected to see in the 2020 report, was this: "I certainly wouldn't bet against it blowing through $4 billion."

(Here's a link to the article where I said that.) 

Quite frankly, I'm not the least bit happy that I was right. Just as I take no pleasure in having been right for each of the last 20 years, when my annual response to "what does the year ahead look like for cybersecurity?" has been to say, with depressingly consistent accuracy: it's going to get worse before it gets better. As I see it, a 20% annual increase in losses to internet crime, despite record levels of spending on cybersecurity, is a clear indicator that current strategies for securing our digital world against criminal activity are not working.

A shred of hope?

However, like many cybersecurity professionals, I have always had an optimistic streak, a vein of hope compressed deep beneath the bedrock of my experience. (Periodically, we have to mine this hope to counter the urge to throw up our hands and declare: "We're screwed! Let's just go make music.")

So let me offer a small shred of hope. 

I am honor bound to point out that cybercrime's impact last year may not have been as bad as I had come to expect. Yes, at the start of 2020 I predicted that cybercrime would maintain its steep upward trajectory. I said the IC3/FBI loss number for 2020 would pass $4 billion and it did. But then "the Covid effect" kicked in, generating scores of headlines about criminal exploitation of the pandemic in both cyberspace and meatspace. And behind each of those headlines were thousands of victims experiencing a range of distressing psychological impacts and economic loss.

By the end of 2020 I was predicting that the IC3/FBI number could be as high as $4.7 billion (see my December, 2020, article: Cybersecurity had a rough 2020). In that context, the reported 2020 number of $4.2 billion was "better than expected." Indeed, the year-on-year increase from 2019 to 2020 of 20% was not as bad as the 2018-2019 increase of 29%. 
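Those year-on-year percentages are easy to verify from the loss figures quoted in this post; here is a quick sketch in Python (no figures beyond the ones cited above are assumed):

```python
def yoy_growth(prev, curr):
    """Year-over-year change, expressed as a percentage."""
    return (curr - prev) / prev * 100

# IC3/FBI reported internet crime losses, in billions of USD, as cited above.
losses = {2019: 3.5, 2020: 4.2}

growth = yoy_growth(losses[2019], losses[2020])
print(f"2019 -> 2020 loss growth: {growth:.0f}%")  # prints "20%"
```

The same arithmetic applied to the 2018-2019 figures is what yields the 29% increase mentioned above.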

However, when I look at the graph at the top of this article I'm not yet ready to say things are improving. And I'm very aware that every one of the 791,790 complaints of suspected internet crime that the IC3 catalogued in 2020—an increase of more than 300,000 from 2019—signifies a distressing incident that negatively impacted the victim, and often their family and friends as well.

In 2020, the pandemic proved to be a very criminogenic phenomenon. I'm pretty sure it also generated greater public awareness of statistical terms like growth curves, rolling averages, trend lines, dips, and plateaus. Right now I see no reason to think cybercrime will dip or even plateau in 2021. But let's hope I'm wrong and that in the months and years to come there is a turnaround in the struggle to reduce the abuse of digital technologies, hopefully before my vein of optimism is all mined out.

Disclaimer: I acknowledge that there are issues with using the IC3 numbers as crime metrics. For a start, they are not collected as an exercise in crime metrics, but rather as part of one avenue of attack against the crimes they represent, an issue I addressed in this law journal article. However, I have studied each IC3 annual report and am satisfied that collectively they reflect real-world trends in cybercrime's impact on victims, as measured by direct monetary loss (the psychological impact of internet crime creates other costs, to victims and society, but so far we have done a woefully poor job of measuring those).

As soon as I get a chance I will dig deeper into the 2020 IC3/FBI report and report back; I'm particularly interested in trends impacting the "60 and over" demographic, which @Chey_Cobb and I highlighted in the IEEE piece we wrote about age tech after last year's report.


Friday, March 05, 2021

Secu-ring video doorbells and other 'smart' security cameras: some helpful links

Photo of a doorbell by Yan Ots. Available freely on @unsplash.

Are you thinking of installing a video doorbell or smart security camera? Are you concerned about the security of the one you have already installed? These links should help: 

How to secure your Ring camera and account
https://www.theverge.com/2019/12/19/21030147/how-to-secure-ring-camera-account-amazon-set-up-2fa-password-strength-hack

Ring security camera settings
https://www.wired.co.uk/article/ring-security-camera-settings

Video doorbell security: How to stop your smart doorbell from being hacked
https://www.which.co.uk/reviews/smart-video-doorbells/article/video-doorbell-security-how-to-stop-your-smart-doorbell-from-being-hacked-aCklb4Y4rZnw

How the WYZE camera can be hacked
https://learncctv.com/can-the-wyze-camera-be-hacked/

How to secure your WYZE security camera account
https://www.cnet.com/how-to/wyze-camera-data-leak-how-to-secure-your-account-right-now/

How to protect 'smart' security cameras and baby monitors from cyber attack
https://www.ncsc.gov.uk/guidance/smart-security-cameras-using-them-safely-in-your-home

Yes, your security camera could be hacked: Here's how to stop spying eyes
https://www.cnet.com/how-to/yes-your-security-camera-could-be-hacked-heres-how-to-stop-spying-eyes/

On a related topic, and as a way to understand how hackers look for vulnerabilities in digital devices, check out this article at Hackaday: https://hackaday.com/2019/03/28/reverse-engineering-a-modern-ip-camera/. It links to a cool, four-part reverse engineering exercise by Alex Oporto: https://dalpix.com/reverse-engineering-ip-camera-part-1


Thursday, January 28, 2021

Data Privacy Day 2021: Selected data privacy reading and viewing, past and present

For this Data Privacy Day—January 28, 2021—I have put together an assortment of items, suggested resources, and observations that might prove helpful. 

The first item is time-sensitive: a live-streamed virtual privacy day event: Data Privacy in an Era of Global Change. The event begins at Noon, New York time, 5PM London time, and features a wide range of excellent speakers. This is the latest iteration of an annual event organized by the National Cyber Security Alliance that goes back at least seven years, each one live streamed.

The 2014 event included me on a panel at Pew Research in D.C., along with Omer Tene of the International Association of Privacy Professionals (IAPP), plus John Gevertz, Global Chief Privacy Officer of ADP, and Erin Egan, CPO of Facebook (which arranged the live streaming). 

In 2015, I was on another Data Privacy Day panel, this one focused on medical data and health privacy. It featured Peter Swire, who was heavily involved in the creation of HIPAA. By request, I told the story of Frankie and Jamie, "A Tale of Medical Fraud," which involved identity theft with serious data privacy implications.

Also on the panel were Anne Adams, Chief Compliance & Privacy Officer for Emory Healthcare; Pam Dixon, Executive Director of the World Privacy Forum; and Hilary M. Wandall, CPO of Merck—the person to whom I was listening very carefully in this still from the recorded video on Vimeo (which is still online, but I could not get it to play).

The second item is The Circle, both the 2013 novel by Dave Eggers—my fairly lengthy review of which appears here—and the 2017 movie starring Emily Watson and Tom Hanks, the trailer for which should be playable below.


While many critics didn't like the film (Metascore is only 43), the content was close enough to the book for me to enjoy it (bearing in mind that I'm someone who's "into" data privacy). Also, the film managed to convey some of the privacy nuances central to Eggers' prescient novel. Consider the affirmation often used by the social media company at the heart of the story: "Sharing is caring." This is used to guilt trip users into sharing more and more of their lives with more and more people, because some of those people derive emotional and psychological support from that sharing. 

Depending on where in the world you live, you may be able to catch The Circle on either Amazon Prime or Netflix (although the latter has—ironically, and possibly intentionally so—a reality TV series of the same name, the premise of which is about as depressing as it gets: "'Big Brother' meets 'Catfish' on this reality series on which not everything is as it seems").

Note, if you are working in any sort of "need to raise awareness and/or spark discussions of privacy issues" role then films can be very helpful. Back around 2005 or so, Chey organized a week-long "Privacy Film Festival" at Microsoft's headquarters. Four movies were screened at lunchtime on consecutive weekdays and then a Friday panel session brought in some privacy and security heavyweights (including both Donn Parker and Ari Schwartz as I recall—movies included Enemy of the State and Minority Report). The overall feedback on the whole endeavor was very positive.

Item number three: the privacy meter. This also relates to the "need to raise awareness and/or spark discussions of privacy issues." I started using it in 2002 when talking to companies about what at that time was, for many of them, an emerging issue/concern/requirement.
 
The idea was to provide a starting point for reflection and conversation. The goal was to help everyone from management to employees to see that there were many different attitudes to personal privacy within the organization. What I did not convey back then—at least not as much as I probably should have—was the extent to which privilege and economic status can influence these attitudes. See the next item for more on that.

Item number four is a privacy reading list, shamelessly headed by my 2016 white paper on data privacy law. While the paper does not cover developments in data privacy law in the last few years, several people have told me that the historical background it provides is very helpful, particularly when it comes to understanding why Data Privacy Day in America is Data Protection Day in many other countries. And it does contain about 80 references, including links to all major US privacy legislation up to 2016.

Moving from privacy laws to privacy realities, like the intersection of privacy, poverty, and privilege, here are a number of thought-provoking articles you might want to read: 

Finally, getting back to a point raised earlier in this post, one that comes up every Data Privacy Day, here is my 2018 article "Data Privacy vs. Data Protection: Reflecting on Privacy Day and GDPR."

P.S. If you're on Twitter you might enjoy what I've been tweeting about #DataPrivacyDay


Tuesday, January 05, 2021

AI's most troubling problem? It's made of chips and code

If we define "AI problem" as an obstacle to maximizing the benefits of Artificial Intelligence, it is clear that there are a number of these, ranging from the technical and practical to the ethical and cultural. As we say goodbye to 2020, I think that we may look back on it, in a few years' time, as the year in which some of the most serious AI problems emerged into the mainstream of public discourse. However, there is one very troubling gap in this growing awareness of AI problems, a seldom discussed problem that I present below.

Image of computer servers, visually distorted

Growing Doubts About AI?

As one data science publication put it, 2020 was: "marked by ethical issues of AI going mainstream, including, but not limited to, gender/race bias, police and military use, face recognition, surveillance, and deep fakes." — The State of AI in 2020.

One of the most widely discussed indicators of problems in AI in 2020 was the “Timnit Gebru incident” (More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru). This seems to be a debacle of Google’s own making, but it surfaced issues of AI bias, AI accountability, erosion of privacy, and environmental impact. 

As we enter 2021, bias seems to be the AI problem that is "enjoying" the widest awareness. A quick Google search for ai bias produces 139 million results, of which more than 300,000 appear as News. However, 2020 also brought growing concerns about attacks on the way AI systems work, and the ways in which AI can be used to commit harm, notably in the report "Malicious Uses and Abuses of Artificial Intelligence," produced by Trend Micro Research in conjunction with the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol's European Cybercrime Centre (EC3). 

Thankfully, awareness of AI problems was much in evidence at the Global AI Summit, an online "think-in" that I attended last month. The event was organized by Tortoise Media, and some frank discussion of AI problems occurred after the presentation of highlights from the heavily researched and data-rich Global AI Index. Unfortunately, the AI problem that troubles me the most was not on the agenda (it was also absent from the Trend/UN report). 

AI's Chip and Code Problem

The stark reality, obscured by the hype around AI, is this: all implementations of AI are vulnerable to attacks on the hardware and software that run them. At the heart of every AI beats one or more CPUs running an operating system and applications. As someone who has spent decades studying and dealing with vulnerabilities in, and abuse of, chips and code, this is the AI problem that worries me the most:

AI RUNS ON CHIPS AND CODE, BOTH OF WHICH ARE VULNERABLE TO ABUSE

In the last 10 years we have seen successful attacks on the hardware and software at the heart of mission-critical information systems in hundreds of prestigious entities, both commercial and governmental. The roll call of organizations and technologies that have proven vulnerable to abuse includes the CIA, NSA, DHS, NASA, Intel, Cisco, Microsoft, FireEye, Linux, SS7, and AWS. 

Yet despite a constant litany of new chip and code vulnerabilities, and wave after wave of cybercrime and systemic intrusions by nation states—some of which go undetected for months, even years—a constantly growing chorus of AI pundits persists in heralding imminent human reliance on AI systems as though it was an unequivocally good thing. 

Such "AI boosterism" keeps building, seemingly regardless of the large body of compelling evidence that supports this statement: no builder or operator of any computer system, including those that run AI, can guarantee that it will not be abused, misused, impaired, corrupted, or commandeered through unauthorized access or changes to its chips and code.

And this AI problem is even more serious when you consider that it is the one about which meaningful awareness seems to be lowest. Frankly, I've been amazed at how infrequently this underlying vulnerability of AI is publicly mentioned, noted, or addressed, where publicly means: "discoverable by me using Google and asking around in AI circles."

Of course, AI enthusiasts are not alone in assuming that, by the time their favorite technology is fully deployed, it will be magically immune to the chip-and-code vulnerabilities inherent in computing systems. Fans of space exploration are prone to similar assumptions. (Here's a suggestion for any journalists reading this: the next time you interview Elon Musk, ask him what kind of malware protection will be in place when he rides the SpaceX Starship to Mars.)

Boosters of every new technology—pun intended—seem destined to assume that the near future holds easy fixes for whatever downsides skeptics of that technology point out. Mankind has a habit of saying "we can fix that" but not actually fixing it, from the air-poisoning pollution of fossil fuels to ocean-clogging plastic waste. (I bet Mr. Musk sees no insurmountable problems with adding thousands of satellites to the Earth's growing shroud of space clutter.) 

I'm not sure if I'm the first person to say that the path to progress is paved with assumptions, but I'm pretty sure it's true. I would also observe that many new technologies arrive wearing a veil of assumptions. This is evident when people present AI as so virtuous and beneficent that it would be downright churlish and immodest of anyone to question the vulnerability of their enabling technology.

The Ethics of AI Boosterism

One question I kept coming back to in 2020 was this: how does one avert the giddy rush to deploy AI systems for critical missions before they can be adequately protected from abuse? While I am prepared to engage in more detailed discussions about the validity of my concerns, I do worry that these will get bogged down in technicalities of which there is limited understanding among the general public.

However, as 2020 progressed and "the ethics of AI" began to enjoy long-overdue public attention, another way of breaking through the veil of assumptions obscuring AI's inherent technical vulnerability occurred to me. Why not question the ethics of "AI boosterism"? I mean, surely we can all agree that advocating development and adoption of AI without adequately disclosing its limitations raises ethical questions.

Consider this statement: as AI improves, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases. How ethical is it to say that, given what we know about how vulnerable AI systems will be if the hardware and software on which they run is not significantly more secure than what we have available today?

To be ethical, any pitches for AI backing and adoption should come with a qualifier, something like "provided that the current limitations of the enabling technology can be overcome." For example, I would argue that the earlier statement about medical use of AI would not be ethical unless it was worded something like this: as AI improves, and if the current limitations of the enabling technology can be overcome, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases.

Unlikely? Far-fetched? Never going to happen? I am optimistic that the correct answer is no. But I invite doubters to imagine for just a moment how much better things might have gone, how much better we might feel about digital technology today, if previous innovations had come with a clear up-front warning about their potential for abuse.

40 digital technologies open to abuse
A few months ago, to help us all think about this, I wrote "A Brief History of Digital Technology Abuse." The article title refers to "40 chapters," but these are only chapter headings that match the 40 items in this word cloud. I invite you to check it out.

In a few weeks I will have some statistics to share about the general public's awareness of AI problems. I will be sure to provide a link here.

In the meantime, I would love to hear from anyone about their work, or anyone else's, on the problem of defending systems that run AI against abuse. (Use the Comments or the contact form at the top of the page, or DM @zobb on Twitter.) 


Thursday, December 31, 2020

Cybersecurity had a rough 2020, but 50 recent headlines suggest the outlook for 2021 could be even worse

Sadly, my annual outlook for cybersecurity has, for the past 20 years, been this: "things will get worse before they get better." 

In this context, "the outlook for cybersecurity" is the expected performance of efforts to defend information systems from abuse, as measured by the amount of system abuse that occurs despite those efforts. 

If you boil cybersecurity outlook down to a single question it is this: will criminal acts targeting digital systems and the data they process cause more harm next year than they did this year?

On the right you can see just one measure of such harm, a dollar figure for internet crime losses reported to IC3 and the FBI. The losses recorded in this metric hit $3.5B in 2019.

I predict that for 2020, the IC3/FBI report will show around $4.7B in losses, barring significant changes to the report's methodology. I further predict that the number will reach $6B in 2021.
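For what it's worth, those predictions are in the same ballpark as simply compounding recent growth. Here is a naive extrapolation sketch; the ~30% annual rate is my assumption (roughly the 2018-2019 increase), not a forecasting method stated in this post:

```python
# Naive compound-growth extrapolation, for illustration only.
# Assumes reported losses keep growing at ~30% per year.
base_2019 = 3.5  # reported 2019 IC3/FBI losses, in billions of USD
rate = 0.30      # assumed annual growth rate

proj_2020 = base_2019 * (1 + rate)
proj_2021 = proj_2020 * (1 + rate)
print(f"2020 ~ ${proj_2020:.1f}B, 2021 ~ ${proj_2021:.1f}B")
```

That comes out around $4.6B for 2020 and $5.9B for 2021, within range of the figures predicted above.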

Of course, I could be wrong, and I sincerely hope that the losses turn out to be lower than my predictions. What I can promise is that I will post the 2020 number as soon as it is published (about 45 days from now, if the Biden-Harris administration sticks to the traditional schedule).

One way of looking at the problem

Regardless of the IC3/FBI numbers for 2020, I think that criminal acts targeting digital systems and the data they process will cause more harm in 2021 than they did this year. And I say that despite 2020 being a quite unusual year, what with all that cybercrime which leveraged the pandemic, and the presidential election in the US, plus the massive Russian SolarWinds breaches. 

The rest of this blog post is just one way of documenting why my outlook is bleak (I am working on a longer article about the history of my "will get worse before it gets better" perspective). What you have here are 50 cybersecurity headlines that I noticed during the last 30 days of 2020. These are not ALL the cybercrime headlines from December, 2020; they are just a sample, plucked from one of the best cybersecurity "feeds" that I have found: InfoSecSherpa's Newsletter (subscription strongly recommended).

This daily email newsletter is produced by @InfoSecSherpa who pledges to provide: "a daily summary of 10 Information Security news items that aren't necessarily getting a lot of attention." So, here are 50 items I picked out to reflect the range of cyber-criminal activity currently taking place. I'm not saying that you should read them all. I think a quick scan will make my point: 

  1. Fresh Card Skimmer Attacks Multiple E-Commerce Platforms
  2. Massive Cyber Attack Takes Down Major German Newsgroup
  3. Kawasaki Heavy Industries reports data breach as attackers found with year-long network access
  4. Cruise Ships Forced to Cancel Sailings Due to Possible Cyberattack
  5. Vietnam targeted in complex supply chain attack
  6. 'Serious attack on our democracy': Cyber strike hits Finnish MPs
  7. REvil hackers to leak photos of plastic surgery patients after massive hack
  8. VOIP hardware and software maker Sangoma struck by ransomware attack
  9. Hackers Tapped Microsoft Resellers To Gain Access
  10. Rakuten exposes 1.48 million sets of data to access from outside
  11. Pension Plan Personal Data Breached, Third-Party Blamed
  12. Russian crypto-exchange Livecoin hacked after it lost control of its servers
  13. Major Swedish firms suffer prolonged malware attack
  14. Emotet Returns to Hit 100K Mailboxes Per Day
  15. U.S. Cyber Agency: SolarWinds Attack Hitting Local Governments
  16. Credential phishing attack impersonating USPS targets consumers over the holidays
  17. Japanese Companies Fall Victim To Unprecedented Wave of Cyber Attacks
  18. Louisville PVA office temporarily closes due to a cyber threat
  19. Treasury Dept. email accounts were compromised in hack blamed on Russia
  20. Iranian hackers hit Israel aerospace industries
  21. iPhones vulnerable to hacking tool for months, researchers say | Malware
  22. Two Rubygems Infected With Crypto-Stealing Feature Malware
  23. Ransomware Attackers Using SystemBC Malware With Tor Proxy
  24. Cybercrime: Fake call centre duping foreign nationals busted in Delhi, 54 arrested
  25. House purchases in Hackney fall through following cyber attack against council
  26. Print security is the remote working cyber risk very few saw coming
  27. Poland, Lithuania are targets of cyber disinformation attack
  28. Norwegian cruise liner Hurtigruten sustains cyber attack
  29. Port of Kennewick crippled by cyberattack
  30. Two Indian banks affected by Windows ransomware attacks
  31. Iran suspected after massive cyberattack on Israeli firms revealed
  32. Files expose mass infiltration of UK firms by Chinese Communist Party
  33. Subway customers receive 'malware' emails
  34. KC suburb spent millions on cyber security protections but still got hit by ransomware
  35. Ransomware Attacks Hitting Vulnerable MySQL Servers
  36. Hackers leak data from trucking firm Cardinal Logistics
  37. Adrozek Malware Delivers Fake Ads to 30K Devices a Day
  38. New Malware Arsenal Abusing Cloud Platforms in Middle East Espionage Campaign
  39. Springfield Public Schools servers back to normal after October cyberattack that put abrupt pause to remote learning
  40. Ransomware gangs are now cold-calling victims if they restore from backups without paying
  41. Middle East facing 'cyber pandemic' as Covid exposes security vulnerabilities, cyber chief says
  42. Vancouver Metro Disrupted by Egregor Ransomware
  43. 113,000 Alaskan voter IDs exposed in data breach
  44. Data of 243 million Brazilians exposed online via website source code
  45. Cyberattacks Discovered on Vaccine Distribution Operations
  46. Brazilian aerospace firm Embraer hit by cyberattack
  47. Malware may trick biologists into generating dangerous toxins in their labs
  48. Spoofed FBI Internet Domains Pose Cyber and Disinformation Risks
  49. Cyber attacks against vaccine makers rise
  50. MacOS Users Targeted By OceanLotus Backdoor

These headlines paint a picture of rampant criminal activity abusing all manner of digital technology in all regions of the world, across all sectors of human endeavor, including education, research, medicine, healthcare, pharmaceuticals, heavy industry, light industry, commercial shipping, recreational shipping, retail, banking, software, hardware, the media, local government, state government, and national government. 

These headlines also document the main reason that I think the harm caused by such activity in 2021 will be even greater than in 2020: whatever deterrents there are to people continuing to engage in this type of activity, they are clearly not working. And in 2021 there will be more people than ever with both the motive and means to engage in cybercrime, and more opportunities than ever to commit cybercrime.

  • Motive increase: widespread pandemic-related economic hardship
  • Means increase: constantly improving cybercrime skills, increasingly accessible (e.g. crime-as-a-service)
  • Opportunities increase: more devices and data, in more locations, performing increasingly valuable functions

As 2021 rolls on I will continue to document the scale of the cybersecurity challenge as I see it. For now, let me extend a massive THANK YOU to all the dedicated and righteous souls who labored so hard in 2020 to fend off the bad actors.

Is there any room for optimism in 2021? Maybe, if the Biden-Harris administration is allowed to get on with the job of instigating major improvements in globally coordinated cybercrime deterrence. (And to be clear, I do sincerely hope that six months from now reality will show that my current outlook was overly pessimistic.)

In any event, here's to "cyber" becoming way less crimey in 2021. Happy New Year!

Notes

If you found this article interesting and/or helpful, please consider clicking the button below to buy me a coffee and support a good cause, while fueling more independent research and ad-free content like this. Thanks!

Button says Buy Me a Coffee, in case you feel like supporting more writing like this.

* While IC3 is the source of the numbers in the graph, IC3 has not—to my knowledge—published them in a graph; in other words, I built the graph from their numbers. And I know that the IC3 numbers are by no means perfect crime metrics; they are based on data accumulated as a by-product of one avenue of attack against the crimes they represent. However, I have studied each of the annual reports and I am satisfied that collectively they provide solid evidence of a real-world cybercrime impact trend that looks very much like the line shown in the graph. For more on issues with cybercrime measurement, see my article in the Journal of National Security Law & Policy: Advancing Accurate and Objective Cybercrime Metrics.

Thursday, November 05, 2020

Universal Recipe for Disaster: Works in Cyberspace as well as Meatspace (a plea to heed experts)

Image says Recipe for disaster that works in both cyberspace and meatspace: rapid embrace of global connectivity and complex interdependence, at scale and absent universally agreed enforceable norms of behavior.

Getting people to heed your warnings is one of the toughest aspects of being an expert, whether your specialty is epidemiology or criminology, virology or malicious code, biology or botnets. How do you get people to pay attention to a problem that seems very urgent to you, but not urgent enough to others? One approach is to just keep trying. 

One of my recent efforts was to describe "The COVID Effect." Another effort was "The Malware Factor." Today, I give you: Recipe for Disaster.

This Recipe for Disaster works in both cyberspace and meatspace. You simply combine these ingredients: rapid embrace of global connectivity and complex interdependence, at scale, absent universally agreed enforceable norms of behavior.

In other words, you create a situation where everything and everybody is not only connected to every other thing and person, but also heavily dependent upon those things and people and connections. Obviously this creates some level of risk that things could go wrong, but the trick to maximizing the potential for disaster is to do all this without everyone involved first committing to abide by an agreed set of rules as to what is permissible, or figuring out how you can and will censure anyone who breaks the rules. 

What you get from this recipe is a situation in which every kind of human endeavor is at serious risk of failing, badly, and with potentially dire consequences. 

A meatspace example would be a global pandemic caused by a deadly biological virus. A cyberspace example would be a digital infrastructure that enables a crisis like a biological pandemic to be abused for selfish ends by criminals wielding malicious code, potentially hindering efforts to deal with the crisis.

Of course, it is now clear that many experts in many fields were right in many ways. As has happened far too often in human history, we are finding out far too late that, like the song says: "What they've been saying all these years is true."* Had experts been heeded in the past, we could have avoided the deadly mess we're in today. 

I can already hear some people saying "Okay, so we should have listened back then, but is there anything you can tell us now that will help us get out of this mess?" Well, as it happens, there is. For a start, I can tell you that increasing the number of people who recognize the mess for what it is will be critical for getting out of it. 

And that's why I will keep trying to improve the effectiveness of my efforts to get people to pay attention.

Please feel free to share the recipe card at the top of the page, or make your own version.

Thanks.

Notes: 

*The song being quoted is Bonnie Dobson's 1962 classic "Morning Dew," popularised in the late sixties by the late Tim Rose, whose version is used to great effect by Japanese director Mori Masaki in this anti-war video, which some readers might find upsetting.

Saturday, October 31, 2020

Thanks for reading and heeding. Please #BeCyberSmart! (Cybersecurity Awareness Month, Day 31)

Vote for those committed to doing a lot more about cybersecurity than has been done so far
This is blog post 31 of the 31 posts that I pledged to write in October, 2020, for Cybersecurity Awareness Month, an international effort to help people improve the security of their devices and protect the privacy of their data.

There is a lot more that I wanted to say, and I will get round to saying it in the coming weeks. However, for the moment, there is just time for some final cybersecurity awareness thoughts. 

We should all heed the advice that has been dished up during the month, from locking down our logins and limiting access to all of our connected digital devices, to being careful how and where we reveal sensitive personal information. 

But the world now faces unprecedented levels of criminal behavior in cyberspace, and in my opinion a lot more of the heavy lifting in cybersecurity must be done by governments. Firstly, by taking seriously the need to achieve global consensus that abuse of digital technology is wrong, morally reprehensible, and will be prosecuted. Secondly, by funding efforts to enforce that consensus at levels many times greater than the paltry sums that have been allocated so far. 

So I will close the month by repeating something that I said back on Day 22:

Whenever we vote to elect representatives, we can vote for those most likely to take all this as seriously as it needs to be taken.

Take care, stay safe, and #BeCyberSmart




Friday, October 30, 2020

Cybersecurity needs more women, now and in the future (Cybersecurity Awareness Month, Day 30)

A woman with a laptop next to a server, making the point that IT needs more women. Cybersecurity needs more women. (Image: Christina @ wocintechchat.com on Unsplash.)

Hopefully, you have seen many images like the one above during Cybersecurity Awareness Month 2020, which is now drawing to a close. This messaging emphasizes our individual and collective responsibility for taking whatever steps we can to protect digital devices and data from being abused for selfish purposes. To me, this particular image is a reminder that cybersecurity is not only a shared obligation, but also a field of endeavor that offers a lot of job opportunities for women. And that is the subject of today's blog post. 

If you have been reading along on this blog this month you will know that there is a post for each day of the month. I hope you have found these helpful and, if so, that you will share them with friends and colleagues through the coming months and into next year. You don't need to read many of these posts to realize that, while I fully support raising awareness of cybersecurity, I also think a lot more than awareness needs to be raised if humans are ever going to get ahead of the cybersecurity problem. One of the things that needs raising is the percentage of women working in technology.

Today we look at the need for more women in technology generally, and in cybersecurity specifically. But before I go any further with this, I need to give a shoutout to Christina at wocintechchat.com for the great photo that makes up the right half of the image at the top of this article. Women of Color in Tech are creators of the WOCinTech stock photo collection, full of great images that are easy to find on UnSplash.

More women in cybersecurity

As I outlined in the article for October 28, there is a huge cybersecurity skills gap, despite the fact that the pay for some cybersecurity roles can be very good.* We're talking half a million open positions in North America this year, and most countries are faced with large shortfalls in qualified applicants for cybersecurity roles. 

Note that these are funded jobs, waiting for the right applicants; and there is no reason that all those applicants need to be men. Indeed, I would argue that the cybersecurity workforce would benefit from becoming far more gender diverse, and just more diverse in general. When a field of endeavor embraces greater diversity that means a larger pool of talent from which to recruit, plus the potential to benefit from a wider range of perspectives.

Clearly, there are multiple ways in which it makes sense to encourage women to consider a job in cybersecurity, starting with the number of openings and the levels of pay available. Industry organizations—like CompTIA, (ISC)2, and ISSA—recognize this and have done a lot to encourage recruitment of women and minorities into tech in general, and cybersecurity specifically. Here's just a sample of web pages and articles that have more information about this: 

Of course, getting into the field may require some knowledge and training that you don't have yet, but these can be acquired, often through self-paced learning, on the job or in your own time, combined with security certifications. There are also community college courses and apprenticeship programs. In other words, getting into a career in cybersecurity and progressing to the point where you're earning a six-figure salary does not require a university degree (there are still some employers who don't believe this, but they are wrong, and there are a lot of people, like me, working at convincing them of this).

Cybersecurity can be a great fit for women returning to the workforce, or entering it "late" (as defined by social convention). In my experience, women can acquire the necessary knowledge and training for cybersecurity work just as fast as men, if not faster. In yesterday's article I looked at reasons why some people might be more aware of technology risks than others, and I believe that a lot of those more aware people are female.

Here are a couple of examples that show women being particularly adept in one particular aspect of cybersecurity: raising awareness of how easily our digital devices and data can be compromised. To be clear, both women are making a good living advising organizations on how to avoid becoming victims of the kind of "vishing" attacks that they so effectively demonstrate.  

This second example offers more detail, some colorful language, and live video of a fairly serious theft of information, plus airline points. It also works as a great cybersecurity awareness video. Use it when you need to show someone how all that online authentication stuff we talked about on days 19, 20, and 21 can be bypassed if you shift communications to the phone and the target is not vishing-aware. 

Of course, the cybersecurity realm is much, much wider than this, and women are making valuable contributions across the board, from the very human side, seen in these videos, to the most cerebral, like Artificial Intelligence, a topic I will get back to in tomorrow's blog post. 

One thing I find particularly encouraging about the state of play for women entering cybersecurity today is the amount of encouragement that is on offer, not just upon entering the field, but throughout career development. One of my favorite encouragers is Keirsten Brager. Consider the approach she took when investigating the recurring career question of "what should I be paid?" (When I heard Keirsten speaking at The Diana Initiative a few years ago, I learned several career strategies that were new to me, and cybersecurity has been my career for more than three decades.)

Women on cybersecurity

Getting more women to enter the field of cybersecurity is only part of what needs to happen. I would like to see, and the world would benefit from, more non-male influencers in the field. For example, several of my cybersecurity awareness blog posts this month recommended websites and newsletters that are good for keeping up with the latest security news, incidents, breaches, vulnerabilities, research findings, etc. 

You might have noticed that these cybersecurity resources tend to be helmed by men, guys who have developed a reputation for providing useful and un-gated information about, and analysis of, cybersecurity trends and issues. I wanted to include more non-male sources in my posts, but I encountered a very interesting phenomenon: women charging for their take on cybersecurity. This makes sense given the way that the field has evolved; guys who rose to prominence in the field early on have developed followings that can be monetized with ads, paid speaking engagements, and so on. 

But what if you have achieved expertise and a perspective worth sharing, but no prominence (circumstances with which many women may be familiar)? Why not build the following your work merits while also monetizing it: pay as you grow, as it were. That is what some women in cybersecurity are now doing, charging for their cybersecurity content on a pay-as-you-go basis. Here are two of the paid sources that I have signed up for: Infosec Sherpa and Cybersecurity Roundup.

If you know of others, please ping me on Twitter and I will check them out. In the meantime, here is a very helpful list of top cybersecurity blogs and websites to follow, curated by a woman. And here is an impressive list of 50 Women In Cybersecurity Associations And Groups To Follow. Also check out Lisa Forte's Rebooting channel on YouTube.

#BeCyberSmart

* When I say there is a huge cybersecurity skills gap "despite the fact that the pay for some cybersecurity roles can be very good" I mean yes, you can earn good money, but not all the jobs pay well. Furthermore, very sadly and all too predictably, the sector currently pays women 21% less than men according to a recent study. Clearly, this is wrong and needs to change. 

Thursday, October 29, 2020

Cybersecurity awareness: Why some people get it, more than others (Cybersecurity Awareness Month, Day 29)


In 2017, I wrote: “the digital technologies that enable much of what we think of as modern life have introduced new risks into the world and amplified some old ones. Attitudes towards risks arising from our use of both digital and non-digital technologies vary considerably, creating challenges for people who seek to manage risk.” 

This is still true today, the 29th day of Cybersecurity Awareness Month, 2020; and, as the month draws to a close, I think it is helpful to reflect on how we feel about "cyber" risks, those created by our use of connected devices and the rest of the digital infrastructure that supports so many facets of life in the 21st century. You may have found that not everyone seems to be as concerned about some risks as you are.

Conversely, you might not be as worried about some things as some of your friends are. For some reason this makes me think of a Chief Information Security Officer cycling to work: she's more aware of, and concerned about, the risks posed by a new operating system vulnerability than most people, but she's less concerned than her friends and family about the risks of cycling to work. 

The reasons for differences in risk perception are many and complex, and there's not enough room in this article to address them all in a fully-documented fashion. What I do have room for is a short account of my considered opinions on this, followed by some sources at the end. The underlying theme of what I have to say is this: the failure of some people to heed expert advice, particularly experts who are warning that something is a problem and poses risks that need to be taken more seriously. 

The Way I/We/You/They See Certain Risks

Consider a survey question that offers the following choices for your answer: Low risk, Between low and moderate risk, Moderate risk, Between moderate and high risk, High risk. Suppose the question is this: How much risk do you believe global warming poses to human health, safety, or prosperity? What is your answer?

Over the last decade or so, numerous surveys have asked that question and the most frequent response is High risk. Almost all climate scientists agree that High risk is the "correct" answer, based on the science. But not everyone agrees, and that is clearly hampering efforts to slow down global warming. 

So guess what happens when you analyze the survey responses by gender: you find that men are more likely to rate this risk Low. In fact, whenever you ask people to rate risks pertaining to a bunch of different technologies, you tend to find men see less risk than women. Furthermore, white males tend to see less risk than white females, non-white males, and non-white females.
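To make the group-by-gender aggregation concrete, here is a minimal Python sketch of how mean perceived-risk scores might be computed from Likert-style survey responses. The numeric scale, group labels, and response data are all invented for illustration; a real analysis would also weight the sample and test for statistical significance.

```python
# Map the five Likert-style answer choices to a 1-5 numeric scale (an
# assumption for this sketch; surveys may code responses differently).
LIKERT = {
    "Low risk": 1,
    "Between low and moderate risk": 2,
    "Moderate risk": 3,
    "Between moderate and high risk": 4,
    "High risk": 5,
}

def mean_rating(responses):
    """Average numeric score for a list of Likert answer labels."""
    scores = [LIKERT[r] for r in responses]
    return sum(scores) / len(scores)

# Fabricated toy responses, grouped by self-reported demographic.
survey = {
    "white male": ["Low risk", "Moderate risk", "Low risk", "High risk"],
    "white female": ["High risk", "Moderate risk", "High risk",
                     "Between moderate and high risk"],
}

for group, answers in survey.items():
    print(f"{group}: mean perceived risk = {mean_rating(answers):.2f}")
```

Comparing the per-group means (and, in practice, their confidence intervals) is the basic move behind findings like the White Male Effect discussed below.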

And this is not a new phenomenon. There is a long history of failure to heed the warnings sounded by experts on a wide range of issues. Consider the 1994 survey results graphed on the right. The grey line with the round data points represents white males, who saw less risk than everyone else in nuclear waste, chemical pollution, motor vehicle accidents, outdoor air quality, nuclear power plants, and medical X-rays.

To be a bit more precise, the implication is that, on aggregate, white men in America tend to under-estimate technology risks, relative to the mean. And if you think, like I do, that the technologies we have been talking about so far present serious risks to human health, safety, or prosperity, then those men are wrong. What is more, their opinions act as a brake on efforts to address the risks that others are concerned about. And not only are they wrong, history has shown it is hard to persuade them of this, and of the need to raise awareness of these risks. All of which could have serious implications for cybersecurity awareness if it turns out that this pattern of findings extends to cyber risks.

Guess what? The pattern does extend to the digital realm, as you can see from this chart based on research I did a few years ago, working with my good friend Lysa Myers who was on my research team at ESET at the time, with some assistance from Dan Kahan of the Cultural Cognition Project at Yale Law School.


See that White Male line undercutting the others across a wide range of risk categories? The yellow highlighting picks out the “digital risks,” and it shows that white males tend to see less risk from digital technology than the other groups, although the gap is smaller than with some other risks (and there is one notable exception: government data monitoring seems to trouble non-white males even less than white males—there could be several explanations for this, but that is a subject for a different blog post).

"Not All White Men"

Of course, the story here is not as simple as it appears from these graphs. If you watched the TEDx talk on Day 8 of this month's cybersecurity awareness blog posts you will know that, the first time I got excited about this White Male Effect in technology risk perception, my wife point out that I am a white male; and I don't—in her professional opinion—under-estimate risk. And in fact, research shows that significantly less than half of white males are what I would call the "problem" here: refusing to accept expert opinion as to how serious the risks of technology are to human health, safety, and prosperity.

One of the pioneers in risk perception research, Dan Kahan, collaborated in a 2007 study that found a certain type of white male was "so extremely skeptical of risks involving, say, the environment ... that they create the appearance of a sample-wide 'white male' effect." 

As Kahan puts it, "that effect 'disappears' once the extreme skepticism of these individuals (less than 1/6 of the white [male] population) is taken into account." (see Kahan's discussion here). This makes a lot of sense when you look at cybersecurity. I think we can safely assume that most cybersecurity professionals perceive the risks from digital technology abuse to be high rather than low. And we know that for decades the cybersecurity profession has been dominated by white males. So what distinguishes them from the "certain type of white male" to which Kahan refers?

The answer lies in something called the Cultural Theory of Risk, and in the language of that theory, the white men in question, the guys drastically underestimating technology risks, are white hierarchical and individualistic males. According to this theory, "structures of social organization endow individuals with perceptions that reinforce those structures in competition against alternative ones" (Wikipedia). A hierarchical individualist is inclined to agree with statements like: it's not the government's business to try to protect people from themselves, and this whole "everyone is equal" thing has gone too far. 

This blog post does not have room for a discussion of the Cultural Theory, but the diagram on the right helps put the terms hierarchical and individualistic into context. To grossly over-simplify, the folks who see as much risk as I do in technology tend to be in the lower right: egalitarian and community-minded (we're all equal and in this together). A lot of women tend to be in that quadrant. 

For much more on this, you can read about the ground-breaking research that Lysa and I did on this theory in the context of digital risks in this two-part report: Adventures in cybersecurity research: Risk, Cultural Theory, and the White Male Effect: part one and part two. (Kudos to ESET for supporting this work.) There is also a summary here on Medium.

Lysa and I gave a talk about this work at the (ISC)2 Security Congress in 2017, describing how the failure to listen to experts, rooted in these differences in risk perception, impacts cybersecurity. The main points are as follows:

  • The security of digital systems (cybersecurity) is undermined by vulnerabilities in products and systems.
  • Failure to heed experts is a major source of vulnerability.
  • Failure to heed experts is a known problem in technology.
  • The Cultural Theory of risk perception helps explain this problem.
  • Cultural Theory exposes the tendency of some males to underestimate risk (White Male Effect or WME).
  • Our research assessed the public’s perceptions of a range of technology risks (digital and non-digital).
  • The findings provide the first ever assessment of WME in the digital or cyber-realm.
  • Additional findings indicate that cyber-related risks are now firmly embedded in public consciousness.
  • Practical benefits from the research include pointers to improved risk communication strategies and a novel take on the need for greater diversity in technology leadership roles.

We suggested several ways in which our findings, and those of other experts researching risk perception, might help improve risk communication. Here is the relevant slide from the talk.  


If you want to explore this line of thinking further, I recommend reading about "identity protective cognition," a form of motivated reasoning that, according to Kahan, describes the tendency of people to fit their perceptions of risk (and related facts) to ones that reflect and reinforce their connection to important affinity groups, membership in which confers psychic, emotional, and material benefits. 

#BeCyberSmart