Thursday, April 29, 2021

From cyber-crime metrics to cyber-harm stories: shifting cybersecurity perspectives and cybercrime strategies

Is measuring the amount of cybercrime important? I have argued that it is, for several different reasons which I have presented in many places; for example, in "Advancing Accurate and Objective Cybercrime Metrics," an article in the Journal of National Security Law & Policy.

For me, the most pressing reason to pursue accurate and objective cybercrime metrics is the potential of those numbers to persuade governments and world leaders to do more to counter cybercrime (as in: detect, deter, disrupt, prosecute and sanction perpetrators). The persuasion goes like this: 
  1. Here's how big the cybercrime problem is.
  2. Here's how fast it is growing despite current efforts to solve/reduce it.
  3. Can you see how bad things will get if you don't do more to solve/reduce it?
A similar persuasion strategy has long existed in the cybersecurity industry as part of its efforts to make technology safer (while selling more security products and services—a reality that has undermined the value of industry metrics in policy debates). 

The efficacy of this strategy—"look at these numbers, that's how bad the cyberbadness is, it's time you did more to protect us/you"—has been disappointing to say the least, given the rate at which the cybercrime problem keeps growing. 

Back in 2014, I decided to research this lack of efficacy, exploring risk perception as it relates to crime and technology. I delved into cultural theory of risk, cultural cognition, white male effect, identity protective cognition, and the science of science communication. One thing I learned was that some people are unmoved by statistics and data. 

Relying on stats+facts to convince everyone that there is an urgent problem, one which merits attention and action, is a mistake. For whatever reason, some folk are relatively immune to stats+facts; however, they may be moved by stories.

Ironically, this was a phenomenon that I had already experienced in my early days of promoting security solutions. For some audiences there was nothing more effective than a case study, a story of how some person or organization had become a victim, or how someone had avoided becoming a victim. Even before then, when I was writing my first computer security book, I had made sure that I included stories from which people could learn the value of security policies and practices (The Stephen Cobb Handbook of PC and LAN Security, 1991). 

The problem you run into when you try to use victim stories to pitch security is that, historically, very few people have been willing to share their stories. This may be due to embarrassment or, ironically, for operational reasons. (As a CISSP, I would advise organizations not to share the helpful story of "how Acme firewall is keeping us safe," or the helpful tale of "how our network was penetrated despite Acme firewall.")

All of which leads to some helpful coincidences. If you investigate the amount of harm caused by cybercrime, rather than just count the number of cybercrimes committed, you get more than just persuasive data, you get moving stories. 

Furthermore, you get a fresh perspective on the problem of cybercrime and the challenge of getting more people to take it more seriously, at four different levels:
  1. Personal: understand how I, or my organization, could be victimized and steps I can take to minimize the risk of that happening.
  2. Political: grasp the level of pain and suffering caused by digitally enabled or enhanced crimes, and calculate their impact on society, down to the medical and social care burdens that victimization generated.
  3. Strategic: use this perspective to argue that funding for medical and social care should include cyber-harm reduction initiatives, because fewer people scammed = smaller care burden.
  4. Professional: pursue both qualitative and quantitative research into the harms caused by rampant cyberbadness, from criminal successes to cybersecurity fails.
Moving forward, I want to explore all four levels and share what I find. The process took a step forward this week when I talked myself into delivering a training session about scam avoidance to a community support group. I've done this in the past, but in America. This session will be delivered to a UK audience, specifically people who support carers. 

The Care Factor

Since we moved back to the UK in 2019, we have found that the importance of social care and the valuable role of the carer are widely recognized; and not just for people who are employed as carers, or who volunteer as carers, but also for those who have become part-time or full-time unpaid carers due to personal and family circumstances. 

For example, I am formally registered as the designated carer for both Chey and my mother. If I get hit by a bus and first responders check my wallet, they will find a card that says I care for these two people and a number to call; but that is just the beginning. That number is for the care group to which I belong, and its members will step in to provide care to my carees if I am incapacitated. They have a comprehensive file on what my carees need, their circumstances, and so on. Furthermore, if the bus misses me, but I feel like I could really use a break from caring, the carers' support group will cover for me.

I'm sure you can imagine what a huge weight this care group—which is funded by both the government and donations of time and money—has lifted from my shoulders, and how much peace of mind it has already delivered to my carees, even though I have not yet had occasion to call upon the group for any help. 

All of which adds to my ability to pursue fresh lines of inquiry into the reduction of cybercrime and technology abuse. Indeed, I can see this care group, and the many others like it around the country, becoming a valuable resource in the quest to reduce the harms caused by scammers and fraudsters.

If you check back here in the latter part of May there should be a link to the training session content. (Like all of my content these days, it is free and suitable for sharing.) In the meantime, here are some links that might be of interest:
  • A detailed look at the impact of fraud in general, 24-page PDF of a chapter from the book Cyber Frauds, Scams and Their Victims by Cassandra Cross and Mark Button, 2017.
  • The Fight Cybercrime website, which has a lot of helpful info for victims of online fraud, in 12 languages!
  • The source for the statistic that "older [scam] victims are 2.4 times more likely to die or go into a care home than those who are not scammed" — PDF of Age UK report, 2016.
  • The website of Carers Trust in the UK: "a major charity for, with and about carers".

Note: If you found this page interesting or helpful or both, please consider clicking the button below to buy me a coffee and support a good cause while fueling more independent research and ad-free content like this. Thanks!




Thursday, March 18, 2021

As predicted, Internet crime surged in 2020, losses up 20% based on FBI and IC3 reports: analysis and opinion

Losses to individual and business victims of internet crime in 2020 exceeded $4 billion, according to the recently published 2020 Internet Crime Report from the FBI and IC3; this represents a 20% increase over losses reported in 2019. The number of complaints also rose dramatically, up nearly 70%.

IC3/FBI internet crime data graphed by S. Cobb
Throughout 2020, criminologists and cybersecurity experts had expressed growing fears that 2020 would be a big year for internet crime, particularly as it became clear that many criminals were prepared to ruthlessly exploit the COVID-19 pandemic for their own selfish ends.

When the 2019 Internet Crime Report was published in February of 2020, it documented "$3.5 billion in losses to individual and business victims."

What I said back then, about the loss number that I expected to see in the 2020 report, was this: "I certainly wouldn't bet against it blowing through $4 billion."

(Here's a link to the article where I said that). 

Quite frankly, I'm not the least bit happy that I was right. Just as I take no pleasure in having been right for each of the last 20 years, when my annual response to "what does the year ahead look like for cybersecurity?" has been to say, with depressingly consistent accuracy: it's going to get worse before it gets better. As I see it, a 20% annual increase in losses to internet crime, despite record levels of spending on cybersecurity, is a clear indicator that current strategies for securing our digital world against criminal activity are not working.

A shred of hope?

However, like many cybersecurity professionals, I have always had an optimistic streak, a vein of hope compressed deep beneath the bedrock of my experience. (Periodically, we have to mine this hope to counter the urge to throw up our hands and declare: "We're screwed! Let's just go make music.")

So let me offer a small shred of hope. 

I am honor bound to point out that cybercrime's impact last year may not have been as bad as I had come to expect. Yes, at the start of 2020 I predicted that cybercrime would maintain its steep upward trajectory. I said the IC3/FBI loss number for 2020 would pass $4 billion and it did. But then "the Covid effect" kicked in, generating scores of headlines about criminal exploitation of the pandemic in both cyberspace and meatspace. And behind each of those headlines were thousands of victims experiencing a range of distressing psychological impacts and economic loss.

By the end of 2020 I was predicting that the IC3/FBI number could be as high as $4.7 billion (see my December, 2020, article: Cybersecurity had a rough 2020). In that context, the reported 2020 number of $4.2 billion was "better than expected." Indeed, the year-on-year increase from 2019 to 2020 of 20% was not as bad as the 2018-2019 increase of 29%. 
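For readers who like to check the arithmetic, here is a minimal sketch of the year-on-year calculations behind these percentages. The loss totals are the ones cited in this post; the 2019 complaint count of 467,361 is my addition, taken from the IC3's 2019 report, not a figure quoted above.

```python
# Year-on-year changes in IC3/FBI-reported internet crime figures.
# Loss totals are in billions of USD, as cited in this post;
# the 2019 complaint count is from the IC3 2019 annual report.
losses_busd = {2019: 3.5, 2020: 4.2}
complaints = {2019: 467_361, 2020: 791_790}

def pct_change(series, y0, y1):
    """Percentage change in a yearly series from year y0 to year y1."""
    return (series[y1] - series[y0]) / series[y0] * 100

print(f"Loss growth 2019->2020: {pct_change(losses_busd, 2019, 2020):.0f}%")    # ~20%
print(f"Complaint growth 2019->2020: {pct_change(complaints, 2019, 2020):.0f}%")  # ~69%
```

The same two-line function reproduces the "20% increase in losses" and "nearly 70% increase in complaints" figures from the reported totals.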

However, when I look at the graph at the top of this article I'm not yet ready to say things are improving. And I'm very aware that every one of the 791,790 complaints of suspected internet crime that the IC3 catalogued in 2020—an increase of more than 300,000 from 2019—signifies a distressing incident that negatively impacted the victim, and often their family and friends as well.

In 2020, the pandemic proved to be a very criminogenic phenomenon. I'm pretty sure it also generated greater public awareness of statistical terms like growth curves, rolling averages, trend lines, dips, and plateaus. Right now I see no reason to think cybercrime will dip or even plateau in 2021. But let's hope I'm wrong and in the months and years to come there is a turnaround in the struggle to reduce the abuse of digital technologies, hopefully before my vein of optimism is all mined out.

Disclaimer: I acknowledge that there are issues with using the IC3 numbers as crime metrics. For a start, they are not collected as an exercise in crime metrics, but rather as part of one avenue of attack against the crimes they represent, an issue I addressed in this law journal article. However, I have studied each IC3 annual report and am satisfied that collectively they reflect real-world trends in cybercrime's impact on victims, as measured by direct monetary loss (the psychological impact of internet crime creates other costs, to victims and society, but so far we have done a woefully poor job of measuring those).

As soon as I get a chance I will dig deeper into the 2020 IC3/FBI report and report back; I'm particularly interested in trends impacting the "60 and over" demographic which @Chey_Cobb and I highlighted in the IEEE piece we wrote about age tech after last year's report.


Friday, March 05, 2021

Secu-ring video doorbells and other 'smart' security cameras: some helpful links

Photo of a doorbell by Yan Ots. Available freely on @unsplash.

Are you thinking of installing a video doorbell or smart security camera? Are you concerned about the security of the one you have already installed? These links should help: 

How to secure your Ring camera and account
https://www.theverge.com/2019/12/19/21030147/how-to-secure-ring-camera-account-amazon-set-up-2fa-password-strength-hack

Ring security camera settings
https://www.wired.co.uk/article/ring-security-camera-settings

Video doorbell security: How to stop your smart doorbell from being hacked
https://www.which.co.uk/reviews/smart-video-doorbells/article/video-doorbell-security-how-to-stop-your-smart-doorbell-from-being-hacked-aCklb4Y4rZnw

How the WYZE camera can be hacked
https://learncctv.com/can-the-wyze-camera-be-hacked/

How to secure your WYZE security camera account
https://www.cnet.com/how-to/wyze-camera-data-leak-how-to-secure-your-account-right-now/

How to protect 'smart' security cameras and baby monitors from cyber attack
https://www.ncsc.gov.uk/guidance/smart-security-cameras-using-them-safely-in-your-home

Yes, your security camera could be hacked: Here's how to stop spying eyes
https://www.cnet.com/how-to/yes-your-security-camera-could-be-hacked-heres-how-to-stop-spying-eyes/

On a related topic, and as a way to understand how hackers look for vulnerabilities in digital devices, check out this article at Hackaday: https://hackaday.com/2019/03/28/reverse-engineering-a-modern-ip-camera/. It links to a cool, four-part reverse engineering exercise by Alex Oporto: https://dalpix.com/reverse-engineering-ip-camera-part-1


Thursday, January 28, 2021

Data Privacy Day 2021: Selected data privacy reading and viewing, past and present

For this Data Privacy Day—January 28, 2021—I have put together an assortment of items, suggested resources, and observations that might prove helpful. 

The first item is time-sensitive: a live-streamed virtual privacy day event, Data Privacy in an Era of Global Change. The event begins at noon, New York time (5 PM London time), and features a wide range of excellent speakers. This is the latest iteration of an annual event, organized by the National Cyber Security Alliance, that goes back at least seven years, each one live streamed.

The 2014 event included me on a panel at Pew Research in D.C., along with Omer Tene of the International Association of Privacy Professionals (IAPP), plus John Gevertz, Global Chief Privacy Officer of ADP, and Erin Egan, CPO of Facebook (which arranged the live streaming). 

In 2015, I was on another Data Privacy Day panel, this one focused on medical data and health privacy. It featured Peter Swire, who was heavily involved in the creation of HIPAA. By request, I told the story of Frankie and Jamie, "A Tale of Medical Fraud," which involved identity theft with serious data privacy implications.

Also on the panel were Anne Adams, Chief Compliance & Privacy Officer for Emory Healthcare; Pam Dixon, Executive Director of the World Privacy Forum; and Hilary M. Wandall, CPO of Merck—the person to whom I was listening very carefully in this still from the recorded video on Vimeo (which is still online, although I could not get it to play):

The second item is The Circle, both the 2013 novel by Dave Eggers—my fairly lengthy review of which appears here—and the 2017 movie starring Emily Watson and Tom Hanks, the trailer for which should be playable below.


While many critics didn't like the film (Metascore is only 43), the content was close enough to the book for me to enjoy it (bearing in mind that I'm someone who's "into" data privacy). Also, the film managed to convey some of the privacy nuances central to Eggers' prescient novel. Consider the affirmation often used by the social media company at the heart of the story: "Sharing is caring." This is used to guilt trip users into sharing more and more of their lives with more and more people, because some of those people derive emotional and psychological support from that sharing. 

Depending on where in the world you live, you may be able to catch The Circle on either Amazon Prime or Netflix (although the latter has—ironically, and possibly intentionally—a reality TV series of the same name, the premise of which is about as depressing as it gets: "'Big Brother' meets 'Catfish' on this reality series on which not everything is as it seems").

Note: if you are working in any sort of "need to raise awareness and/or spark discussions of privacy issues" role, then films can be very helpful. Back around 2005 or so, Chey organized a week-long "Privacy Film Festival" at Microsoft's headquarters. Four movies were screened at lunchtime on consecutive weekdays and then a Friday panel session brought in some privacy and security heavyweights (including, as I recall, both Donn Parker and Ari Schwartz—movies included Enemy of the State and Minority Report). The overall feedback on the whole endeavor was very positive.

Item number three: the privacy meter. This also relates to the "need to raise awareness and/or spark discussions of privacy issues." I started using it in 2002 when talking to companies about what at that time was, for many of them, an emerging issue/concern/requirement.
 
The idea was to provide a starting point for reflection and conversation. The goal was to help everyone from management to employees to see that there were many different attitudes to personal privacy within the organization. What I did not convey back then—at least not as much as I probably should have—was the extent to which privilege and economic status can influence these attitudes. See the next item for more on that.

Item number four is a privacy reading list, shamelessly headed by my 2016 white paper on data privacy law. While the paper does not cover developments in data privacy law in the last few years, several people have told me that the historical background it provides is very helpful, particularly when it comes to understanding why Data Privacy Day in America is Data Protection Day in many other countries. And it does contain about 80 references, including links to all major US privacy legislation up to 2016.

Moving from privacy laws to privacy realities, like the intersection of privacy, poverty, and privilege, here are a number of thought-provoking articles you might want to read: 

Finally, getting back to a point raised earlier in this post, one that comes up every Data Privacy Day, here is my 2018 article "Data Privacy vs. Data Protection: Reflecting on Privacy Day and GDPR."

P.S. If you're on Twitter you might enjoy what I've been tweeting about #DataPrivacyDay


Tuesday, January 05, 2021

AI's most troubling problem? It's made of chips and code

If we define "AI problem" as an obstacle to maximizing the benefits of Artificial Intelligence, it is clear that there are a number of these, ranging from the technical and practical to the ethical and cultural. As we say goodbye to 2020, I think that we may look back on it, in a few years' time, as the year in which some of the most serious AI problems emerged into the mainstream of public discourse. However, there is one very troubling gap in this growing awareness of AI problems, a seldom discussed problem that I present below.

Image of computer servers, visually distorted

Growing Doubts About AI?

As one data science publication put it, 2020 was: "marked by ethical issues of AI going mainstream, including, but not limited to, gender/race bias, police and military use, face recognition, surveillance, and deep fakes." — The State of AI in 2020.

One of the most widely discussed indicators of problems in AI in 2020 was the “Timnit Gebru incident” (More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru). This seems to be a debacle of Google’s own making, but it surfaced issues of AI bias, AI accountability, erosion of privacy, and environmental impact. 

As we enter 2021, bias seems to be the AI problem that is “enjoying” the widest awareness. A quick Google search for ai bias produces 139 million results, of which more than 300,000 appear as News. However, 2020 also brought growing concerns about attacks on the way AI systems work, and the ways in which AI can be used to commit harm, notably in the report "Malicious Uses and Abuses of Artificial Intelligence," produced by Trend Micro Research in conjunction with the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol’s European Cybercrime Centre (EC3). 

Thankfully, awareness of AI problems was much in evidence at "The Global AI Summit," an online "think-in" that I attended last month. The event was organized by Tortoise Media, and some frank discussion of AI problems occurred after the presentation of highlights from the heavily researched and data-rich Global AI Index. Unfortunately, the AI problem that troubles me the most was not on the agenda (it was also absent from the Trend/UN report). 

AI's Chip and Code Problem

The stark reality, obscured by the hype around AI, is this: all implementations of AI are vulnerable to attacks on the hardware and software that run them. At the heart of every AI beats one or more CPUs running an operating system and applications. As someone who has spent decades studying and dealing with vulnerabilities in, and abuse of, chips and code, this is the AI problem that worries me the most:

AI RUNS ON CHIPS AND CODE, BOTH OF WHICH ARE VULNERABLE TO ABUSE

In the last 10 years we have seen successful attacks on the hardware and software at the heart of mission-critical information systems in hundreds of prestigious entities, both commercial and governmental. The roll call of organizations and technologies that have proven vulnerable to abuse includes the CIA, NSA, DHS, NASA, Intel, Cisco, Microsoft, FireEye, Linux, SS7, and AWS. 

Yet despite a constant litany of new chip and code vulnerabilities, and wave after wave of cybercrime and systemic intrusions by nation states—some of which go undetected for months, even years—a constantly growing chorus of AI pundits persists in heralding imminent human reliance on AI systems as though it was an unequivocally good thing. 

Such "AI boosterism" keeps building, seemingly regardless of the large body of compelling evidence that supports this statement: no builder or operator of any computer system, including those that run AI, can guarantee that it will not be abused, misused, impaired, corrupted, or commandeered through unauthorized access or changes to its chips and code.

And this AI problem is even more serious when you consider that it is the one about which meaningful awareness seems to be lowest. Frankly, I've been amazed at how infrequently this underlying vulnerability of AI is publicly mentioned, noted, or addressed, where publicly means: "discoverable by me using Google and asking around in AI circles."

Of course, AI enthusiasts are not alone in assuming that, by the time their favorite technology is fully deployed, it will be magically immune to the chip-and-code vulnerabilities inherent in computing systems. Fans of space exploration are prone to similar assumptions. (Here's a suggestion for any journalists reading this: the next time you interview Elon Musk, ask him what kind of malware protection will be in place when he rides the SpaceX Starship to Mars.)

Boosters of every new technology—pun intended—seem destined to assume that the near future holds easy fixes for whatever downsides skeptics of that technology point out. Mankind has a habit of saying "we can fix that" but not actually fixing it, from the air-poisoning pollution of fossil fuels to ocean-clogging plastic waste. (I bet Mr. Musk sees no insurmountable problems with adding thousands of satellites to the Earth's growing shroud of space clutter.) 

I'm not sure if I'm the first person to say that the path to progress is paved with assumptions, but I'm pretty sure it's true. I would also observe that many new technologies arrive wearing a veil of assumptions. This is evident when people present AI as so virtuous and beneficent that it would be downright churlish and immodest of anyone to question the vulnerability of their enabling technology.

The Ethics of AI Boosterism

One question I kept coming back to in 2020 was this: how does one avert the giddy rush to deploy AI systems for critical missions before they can be adequately protected from abuse? While I am prepared to engage in more detailed discussions about the validity of my concerns, I do worry that these will get bogged down in technicalities of which there is limited understanding among the general public.

However, as 2020 progressed and "the ethics of AI" began to enjoy long-overdue public attention, another way of breaking through the veil of assumptions obscuring AI's inherent technical vulnerability occurred to me. Why not question the ethics of "AI boosterism"? I mean, surely we can all agree that advocating development and adoption of AI without adequately disclosing its limitations raises ethical questions.

Consider this statement: as AI improves, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases. How ethical is it to say that, given what we know about how vulnerable AI systems will be if the hardware and software on which they run is not significantly more secure than what we have available today?

To be ethical, any pitches for AI backing and adoption should come with a qualifier, something like "provided that the current limitations of the enabling technology can be overcome." For example, I would argue that the earlier statement about medical use of AI would not be ethical unless it was worded something like this: as AI improves, and if the current limitations of the enabling technology can be overcome, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases.

Unlikely? Far-fetched? Never going to happen? I am optimistic that the correct answer is no. But I invite doubters to imagine for just a moment how much better things might have gone, how much better we might feel about digital technology today, if previous innovations had come with a clear up-front warning about their potential for abuse.

40 digital technologies open to abuse
A few months ago, to help us all think about this, I wrote "A Brief History of Digital Technology Abuse." The article title refers to "40 chapters," but these are only chapter headings that match the 40 items in this word cloud. I invite you to check it out.

In a few weeks I will have some statistics to share about the general public's awareness of AI problems. I will be sure to provide a link here.

In the meantime, I would love to hear from anyone about their work, or anyone else's, on the problem of defending systems that run AI against abuse. (Use the Comments or the contact form at the top of the page, or DM @zobb on Twitter.) 
