Monday, April 01, 2024

Internet crime keeps on growing, as do efforts to understand the harm it causes

Internet crime losses 2014-2023, as reported to IC3/FBI, and compiled by S. Cobb
Losses from Internet crimes reported to the FBI's Internet Crime Complaint Center in 2023 rose 22% above the record losses in 2022.

This means that 2023 set a new annual record, just north of $12.5 billion, according to the press release announcing the latest IC3 annual report (PDF).

About the only good thing you can say about this news is that the annual Internet crime loss figure rose by only 22% in 2023. That is less than half the 49% increase in 2022, which was well below the 64% surge in 2021. However, before anyone gets too optimistic, take another look at the chart at the top of the page.

While there have been several years this century in which the rate of increase in losses to Internet crime has slowed, I see the general direction over the last decade as fairly relentlessly upward. And this is despite record levels of spending on cybersecurity and cybercrime deterrence.

This time last year I discussed the implications of these trends in an article over on LinkedIn. That was written in the hope that more people would pay attention to the increasingly dire state of Internet crime prevention and deterrence, and how that impacts ordinary people. At the start of this year, I wrote about the implications of digitally-enabled fraud reaching record levels, framing this as a public health crisis.

During 2023, I delivered and recorded a well-received talk on cybercrime as a public health crisis. Here is the video, hosted on YouTube.

The talk was originally delivered at the Technical Summit and Researchers Sync-Up 2023 in Ireland. The event was organized by the European arm of APWG, the global Anti-Phishing Working Group. (Talks at that event were not recorded, so I made this recording myself; sadly, it lacks the usual gesticulation and audience interaction of my live delivery, but on the plus side you can speed up the playback on YouTube.)

Also sad is the fact that, due to carer/caregiver commitments, I had to cancel delivery of the next stage of my research at APWG's Symposium on Electronic Crime Research 2023 (eCrime 2023).

On the bright side, I did manage to write up my ideas in an article on Medium: Do Online Access Imperatives Violate Duty of Care? There I started building my case that exposure to crime online causes harm even to those who are not directly victimized by it, much in the same way that living in a high crime neighbourhood has been proven—by criminologists and epidemiologists—to be bad for human health. Basically, the article made four assertions:

  1. going online exposes us to a lot of crime, 
  2. high crime environments are unhealthy, 
  3. governments and companies that make us go online may be breaching their duty of care, 
  4. there is an urgent need to reduce cybercrime and increase support for cybercrime victims.

To explain these assertions I introduced my "Five levels of crime impact in meatspace and cyberspace" which are captured in this table:

Screenshot of Cobb's Five levels of crime impact in meatspace and cyberspace
I also introduced my take on a concept used by environmental exposure scientists and epidemiologists: the exposome. A key role of the exposome is to help us acknowledge and account for everything to which we are exposed in our daily lives that may affect our health. 

My article proposed using online exposome as a term for everything that individuals are exposed to when they go online. This builds on thinking by Guillermo Lopez-Campos et al. (2017) that there is a "digital component of the exposome derived from the interactions of individuals with the digital world."

In summary, as we look over the latest tabulation of reported financial losses due to Internet crimes I think we need to bear in mind that these are only a fraction of the total number of such crimes, and monetary loss is only a fraction of the harm these crimes cause. The stress and anxiety of victims have to be taken into account, as does the deleterious effect of having to spend time online where we are constantly exposed to, and reminded of, the many different ways in which digital technologies and their users are being abused.

Postscript: Not all the news about online crime is bad. The last 12 months have seen some very impressive anti-cybercrime law enforcement efforts all around the world, including the recent disruption of "the world’s most harmful cyber crime group." I applaud those efforts and encourage governments to fund more of them. Here's to a drop in Internet crime losses in 2024!

Wednesday, November 29, 2023

QR code abuse 2012-2023

QR Code Scam with Three QR Codes
QR code abuse is in the news again—see the list of headlines below—which reminds me that I first wrote about this in 2012 (eleven years ago). Back then I made a short video to demonstrate one potential type of abuse, tricking people into visiting a malicious website:


As you can see from this video, there is plenty of potential for hijacking and misdirection via both QR and NFC technology, and that potential has existed for over a decade. In fact, this is a great example of how a known technology vulnerability can linger untapped for over a decade, before all the factors leading to active criminal exploitation align. 

In other words, just because a vulnerability has not yet been turned into a common crime, does not mean it never will be. For example, the potential for ransomware attacks was there for many years before criminals turned it into a profitable business. Back in 2016, I suggested that combining ransomware with the increasing automation of vehicles would eventually lead to a form of criminal exploitation that I dubbed jackware. As of now, jackware is not a thing, but by 2026 it might well be.

Here are some recent QR code scam headlines:

Saturday, November 04, 2023

Artificial Intelligence is really just another vulnerable, hackable, information system

Recent hype around Artificial Intelligence (AI) and the amazingly good and bad things that it can and may do has prompted me to remind the world that: 
Every AI is an information system and every information system has fundamental vulnerabilities that make it susceptible to attack and abuse.
The fundamental information system vulnerabilities exist regardless of what the system is designed to do, whether that is processing payments, piloting a plane, or generating artificial intelligence.

Fundamental information system vulnerabilities put AI systems at risk of exploitation and abuse for selfish ends when the ‘right’ conditions arise. As a visual aid, I put together a checklist that shows the current status of the five essential ingredients of an AI:

Checklist that shows the current status of the five essential ingredients of an AI
Please let me know if you think I'm wrong about any of those checks and crosses (ticks and Xs if you prefer). 


Criminology and Computing and AI

According to routine activity theory in criminology, the right conditions for exploitation of an information system, such as an AI, are as follows: 
  • a motivated offender, 
  • a suitable target, and 
  • the absence of a capable guardian. 
A motivated offender can be anyone who wants to enrich themselves at the expense of others. In terms of computer crime this could be a shoplifter who turned to online scamming (an example personally related to me by a senior law enforcement official in Scotland). 

In the world of computing, a suitable target can be any exploitable information system, such as the payment processing system at a retail store. (Ironically the Target retail chain was the target of one of the most widely reported computer crimes of the last ten years.) 

In the context of information systems, the absence of a capable guardian can be the lack of properly installed and managed anti-malware software, or an organization's failure to grasp the level of risk inherent in the use of digital technologies.

When it comes to information systems that perform artificial intelligence work, both the good and bad uses of AI will motivate targeting by offenders. The information systems at Target were hit because they contained credit card details that could be sold to people who specialize in fraudulent card transactions. An AI trained on corporate financial data could be targeted to steal or exploit that data. An AI that enables unmanned vehicles could be targeted for extortion, just as hospital and local government IT systems are targeted.

Do AI fans even know this?

One has to wonder how many of the CEOs who are currently pushing their organizations to adopt AI understand all of this. Do they understand that all five ingredients of AI are vulnerable? 

Perhaps companies and governments should initiate executive level AI vulnerability awareness programs. If you need to talk to your execs, it will help if you can give them vulnerability examples. Here's a starter list:
  1. Chips – Meltdown, Spectre, Rowhammer, Downfall
  2. Code – Firmware, OS, apps, viruses, worms, Trojans, logic bombs
  3. Data – Poisoning, micro and macro (e.g. LLMs and SEO poisoning)
  4. Connections – Remote access compromise, AITM attacks
  5. Electricity – Backhoe attack, malware e.g. BlackEnergy, Industroyer
Whether or not vulnerabilities in one or more of these five ingredients are maliciously exploited depends on complex risk/reward calculations. However, execs need to know that many motivated offenders are adept at such calculations. 

Execs also need to understand that there is an entire infrastructure already in place to monetize vulnerability exploitation. These are sophisticated markets in which to sell stolen data, stolen access, and stolen credentials, and to buy or rent the tools to do the stealing, ransoming, and so on (see darkweb, malware as a service, botnets, ransomware, cryptocurrency, etc.).

As I see it, unless there is a sudden, global outbreak of moral rectitude, vulnerabilities in AI systems will—if they are not capably guarded—be exploited by motivated offenders. 

Internet crime losses reported to IC3/FBI
For a sense of how capable guardianship is faring in the digital realm, take a look at the rate at which losses due to Internet crime have risen in the last 10 years, despite record levels of spending on cybersecurity.

Attacks will target AI systems used for both "good" and "bad" purposes. Some offenders will try to make money attacking AI systems relied upon by hospitals, schools, companies, governments, military, etc. Other offenders will try to stop AI systems that are doing things of which they don’t approve: driving cars, taking jobs, firing weapons, educating children, making movies, exterminating humans.

Therein lies one piece of good news: we can take some comfort in the likelihood that, based on what has happened to every new digital technology in the last 40 years, AI systems will prove vulnerable to exploitation and abuse, thus reducing the chances that AI will be able to wipe us all out. Of course, it also means AI is not likely to make human life dramatically better.

Note: This is a revised version of an article that first appeared in November of 2023.

Wednesday, April 12, 2023

What is ChatGPT and how can AI get things wrong: an annotated example using jackware

An example of ChatGPT giving a wrong answer
You can't trust what ChatGPT says
ChatGPT is, as you probably know, a computer system that uses artificial intelligence (AI) to answer questions. Sometimes the answers it gives are wrong, and that's the short version of this article. The long version explains more about what ChatGPT is, with a detailed look at an example of how wrong it can be. 

Here's how Mashable describes ChatGPT in Everything you need to know about ChatGPT: "in essence, a simple online artificial intelligence chatbot created by OpenAI in December 2022." Technically speaking, ChatGPT describes itself as "a language model developed by OpenAI, based on the GPT (Generative Pre-trained Transformer) architecture...designed to understand and respond to natural language queries and generate human-like text in response." 

If you check out ChatGPT online it can come across as a chatty version of a search engine, but it can do things that search engines don't, such as put together plausible explanations of phenomena, as opposed to simply listing search results relating to them. For example, suppose you encounter the word jackware and wonder what it means. You could put the word into a search box and get a bunch of responses, like this:
If you are lucky, one or more of the search results will give you an idea of what the search term means. In this case we can see that "The goal of a jackware attack is to lock up the hijacked device until you pay" (which happens to be correct). 

But what if you're writing an article or essay about jackware and want to know who came up with the term and when it was coined? I couldn't find that information in the search results for jackware, and looking for it took time. In my experience, search engines often fail to answer questions like this, as you can see from the following example. When asked who coined the term jackware, Google provides one link to a story about jackware, but all the rest of the results—many thousands of them—assume that jackware is just a typo for Jaguar, as in the car, not the animal:
Screenshot of Google assume jackware is a typo for jaguar
Personally, I find the jackware/jaguar assumption very amusing—for reasons I will explain later—but first, let's see how ChatGPT gets on when asked the same question: Who coined the term jackware? Here's the answer ChatGPT provides:

The term "jackware" was coined by two researchers, Stefan Savage and Tadayoshi Kohno, in a 2016 paper titled "Experimental Security Analysis of a Modern Automobile". In the paper, Savage and Kohno described a proof-of-concept attack in which they were able to take control of a car's electronic systems using a laptop computer. They used the term "jackware" to describe a new class of malware that could be used to take control of vehicles in a similar way to how ransomware is used to take control of computer systems.

Frankly, at first glance this looks like a very impressive answer: it names names and cites an academic source, while providing a helpful description of jackware, all articulated in grammatically correct English. It's way more helpful than the output of a plain old search. You can see why so many people get so excited about artificial intelligence.

But sadly, several key facts in ChatGPT's answer are just plain wrong, so wrong that you could be in serious trouble with your teacher-editor-readers if you pasted that paragraph of ChatGPT output into a piece of work. For a start, anyone familiar with the subject matter would know that you hadn't read that paper.

The academic paper to which ChatGPT refers was published in 2010, not 2016. You can see that from this listing of the paper. This is not just a pedantic quibble; the named paper is legendary in the world of automotive cybersecurity, partly because it was published way back in 2010. It documents groundbreaking work done by Savage et al. in the 2000s, way before the flashy Jeep hack of 2015 by Miller and Valasek.

More blatantly erroneous is the identification of this 2010 paper and its authors as the source of the term jackware. Simply put, the paper does not contain the word jackware. In fact, the person who coined the term jackware to describe malicious code used to take over vehicles, was me, Stephen Cobb, and I did that in May of 2016, on this blog, in a post titled: Jackware: coming soon to a car or truck near you? 

In July of 2016, I penned Jackware: When connected cars meet ransomware for We Live Security, the award-winning global cybersecurity blog. As further evidence, I present exhibit A, which shows how you can use iterative time-constrained searches to identify when something first appears. Constraining the search to the years 1998 to 2015, we see that no relevant mention of jackware was found prior to 2016. Apparently, jackware had been used as a collective noun for leather mugs, but there are no software-related search results before 2016. Next you can see that, when the search is expanded to include 2016, the We Live Security article tops the results:
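For anyone who wants to replicate this technique, the iterative time-constrained search can be sketched in a few lines of Python. Note that `search_hits` is a hypothetical stand-in for a search engine queried with a custom date-range filter, and the toy hit counts are illustrative, not real results:

```python
# Sketch of an iterative time-constrained search: extend the date range
# one year at a time until the term first produces relevant results.
def first_appearance(term, search_hits, start=1998, end=2023):
    """Return the earliest year in [start, end] with relevant hits, else None."""
    for year in range(start, end + 1):
        if search_hits(term, year) > 0:
            return year
    return None

# Toy stand-in: pretend relevant results only exist from 2016 onward.
hits = lambda term, year: 3 if year >= 2016 else 0
print(first_appearance("jackware", hits))  # → 2016
```

In practice you would bind `search_hits` to whichever search engine's date-range filter you are using, and judge "relevant" by inspecting the results rather than counting them.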

So how did ChatGPT get things so wrong? The simple answer is that ChatGPT doesn't know what it's talking about. What it does know is how to string relevant words and numbers together in a plausible way. Stefan Savage is definitely relevant to car hacking. The year 2016 is relevant because that's when jackware was coined. And the research paper that ChatGPT referenced does contain numerous instances of the word jack. Why? Because the researchers wisely tested their automotive computer hacks on cars that were on jack stands.

To be clear, ChatGPT is not programmed to use a range of tools to make sure it is giving you the right answer. For example, it didn't perform an iterative time-constrained online search like the one I did in order to find the first use of a new term. 

Hopefully, this example will help people see what I think is a massive gap between the bold claims made for artificial intelligence and the plain fact that AI is not yet intelligent in a way that equates to human intelligence. That means you cannot rely on ChatGPT to give you the right answer to your questions. 

So what happens if we do get to a point where people rely—wisely or not—on AI? That's when AI will be maliciously targeted and abused by criminals, just like every other computer system, something I have written about here.

Ironically, the vulnerability of AI to abuse can be both a comfort to those who fear AI will exterminate humans, and a nightmare for those who dream of a blissful future powered by AI. In my opinion, the outlook for AI, at least for the next few decades, is likely to be a continuation of the enthusiasm-disillusionment cycle, with more AI winters to come.

--------------^-------------
 

Note 1: For more on those AI dreams and fears, I should first point out that they are based on expectations that the capabilities of AI will evolve from their current level to a far more powerful technology referred to as Artificial General Intelligence or AGI. For perspective on this, I recommend listening to "Eugenics and the Promise of Utopia through Artificial General Intelligence" by two of my Twitter friends, @timnitGebru and @xriskology. This is a good introduction to the relationship between AI development and a bundle of beliefs/ideals/ideas known as TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism.

Note 2: When I first saw Google assume jackware was a typo for Jaguar I laughed out loud because I was born and raised in Coventry, England, the birthplace of Jaguar cars. In 2019, when my mum, who lives in Coventry, turned 90, Chey and I moved here, and that's where I am writing this. Jaguars are a common sight in our neighbourhood, not because it's a posh part of the city, but because a lot of folks around here work at Jaguar and have company cars.


Tuesday, March 14, 2023

Internet crime surged in 2022: possibly causing as much as $160 billion in non-financial losses

Chart of annual Internet crime losses reported to IC3/FBI 2012-22, as compiled by S. Cobb
Financial losses reported to the FBI's Internet Crime Complaint Center in 2022 rose almost 50% over the prior year, reaching $10.3 billion according to the recently released annual report (available here). 

This increase, which comes on top of a 64% surge from 2020 to 2021, has serious implications for companies and consumers who use the Internet, as well as for law enforcement and government.

Those implications are discussed in an article that I wrote over on LinkedIn in the hope that more people will pay attention to the increasingly dire state of Internet crime prevention and deterrence, and how that impacts people. In that article I also discuss the growing awareness that Internet crime creates even more harm than is reflected in the financial losses suffered by victims. There is mounting evidence—some of which I cite in the article—that the health and wellbeing of individuals hit by online fraud suffers considerably, even in cases of attempted fraud where no financial loss occurs. 

One UK study estimated the value of this damage at the equivalent of more than $4,000 per victim. Consider what happens if we round down the number of cases reported in the IC3/FBI annual summary for 2022 to 800,000, then assume that number reflects a fifth of the actual number of cases in which financial loss occurred. That's 4 million cases. Now assume those cases were one tenth of the attempted online crimes and multiply that 40 million by the $4,000 average hit to health and wellbeing estimated by researchers. The result is $160 billion, and that's just for one year: a huge amount of harm to individuals and society.
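The arithmetic behind that $160 billion figure can be laid out explicitly. Bear in mind that the multipliers are working assumptions for the sake of the estimate, not established statistics:

```python
# Back-of-envelope estimate of annual non-financial harm from online fraud.
reported_cases = 800_000                     # IC3 complaints, rounded down
actual_loss_cases = reported_cases * 5       # assume only 1 in 5 losses is reported
attempted_crimes = actual_loss_cases * 10    # assume 1 in 10 attempts causes a loss
cost_per_victim = 4_000                      # US$ wellbeing cost per victim (UK study)

total_harm = attempted_crimes * cost_per_victim
print(f"${total_harm / 1e9:.0f} billion")    # → $160 billion
```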


Saturday, December 17, 2022

Digital Baitballs and Shrinkage: a cybersecurity lesson from 2022

A school of fish forming a baitball to minimize predation
A school of baitfish forming a ball to reduce predation (Shutterstock) 

If 2022 has taught us anything about cybersecurity, it is this: our combined efforts to protect the world's digital systems and the vital data that they process are capable of thwarting very high levels of sustained criminal activity, where "thwart" means preventing the complete collapse of trust in digital technology and limiting casualties to levels that appear to be survivable, if not acceptable.  

In other words, despite all the efforts of bad actors, from local scammers to nation states, abusing all manner of digital technologies, to commit everything from petty crimes to war crimes, humans are surviving, and we are continuing to expand our reliance on said technologies.

Of course, this lesson would appear to offer little comfort to the victims of digital crime in 2022, the countless companies, consumers, non-profit organizations, and government entities that lost money and peace of mind to the hordes of ethically challenged and maliciously motivated perpetrators of cyber-badness.*

Is survival enough?

Swordfish checking out a baitball
Baitball and a swordfish (Shutterstock)
You could argue that humans are in deep trouble if the best we can say about the struggle between cybersecurity and cybercrime at the end of 2022 is: "most of us survived." However, other species on our planet have endured for millions of years by embracing "most of us survive" as the goal of their defensive strategy. 

For example, small fish that spend most of their lives in the open ocean form a tight group when predators approach; then they swirl around in a ball to make it harder for predators to select targets. I wrote about this phenomenon—the baitball—in a recent article on LinkedIn.

So, the good news for 2022 is that we can head into 2023 knowing that the world can survive a large amount of ongoing cyberbadness. We have seen that levels of criminal abuse of digital technology can rise quite high without resulting in the breakdown of society. 

(You could even argue that cybercrime is falling in relation to the growing number of criminal opportunities created by the ongoing deployment of new digital technologies and devices, but that's for a different article.)

The bad news is that surviving is not as enjoyable and fulfilling as thriving. Living just this side of the breakdown of society means the other side is a looming presence, a constant stress factor, as is the knowledge that any one of us could be the next cybercrime victim.

Shrinkage

So what will it take to get from surviving to thriving, to a state in which cybercrime is either eliminated or reduced to a manageable level? Unfortunately, the short answer is: it will take a lot. The countries of the world need to agree to, and enforce, norms of ethical behaviour in the digital realm. If that sounds almost impossible given the current state of the world, then you have a measure of how much effort it is going to take to eliminate cybercrime or reduce it to a manageable level. However, it should be noted that the idea of reducing crime to a manageable level is not unprecedented. 

Shopkeepers learned long ago that it is almost impossible to stop their stock from shrinking. Some employees will swipe stock from the stockroom. Some customers will shoplift. Furthermore, some vendors will over-charge and under-deliver. Taken together, these money-losing phenomena are known as shrinkage. 

Despite efforts to reduce shrinkage, including the use of technology, it still cuts into retail revenue in America to the tune of 1.5% per year on average, equating to losses in the order of $100 billion in 2021. Nevertheless, despite shrinkage, the retail sector keeps going. Retailers don't expect to eliminate shrinkage, but they will spend time and money on measures to keep it to a relatively low percentage.
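As a quick sanity check on those two figures: a $100 billion loss at a 1.5% shrinkage rate implies annual US retail revenue of roughly $6.7 trillion, which is consistent with the scale of the sector:

```python
# Implied US retail revenue from the shrinkage figures cited above.
shrinkage_loss = 100e9     # US$, estimated 2021 shrinkage losses
shrinkage_rate = 0.015     # ~1.5% of revenue on average

implied_revenue = shrinkage_loss / shrinkage_rate
print(f"${implied_revenue / 1e12:.1f} trillion")  # → $6.7 trillion
```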

So what are the prospects for reducing the impact of cybercrime to a very low level, perhaps a very small percentage of GDP? I honestly don't know. We are still a long way from getting a full picture of cybercrime's impact; this is particularly true of the psychological and health impacts. There are hidden social and economic costs as well, given the not insignificant percentage of people who don't go online due to fear of cybercrime.

Some would argue that the term cybercrime is becoming problematic in discussions like this, given that most predatory crime today has "cyber" aspects. Fortunately, there is plenty of evidence that people who commit predatory crime can stop, and many do so as they get older, start families, and get a "proper" job. In criminology this is known as desistance, and it may actually be easier for people with digital skills to desist.

In the broad scheme of things, the most intractable obstacle to reducing cyberbadness may not be predatory criminals clinging to a crooked lifestyle; it could well be humans who are prepared to use digital technologies like social media to spread disinformation, undermine truth, and foster hatred in furtherance of selfish agendas.


Note: 
To the best of my knowledge, the term cyber-badness was coined by Cameron Camp, my friend and colleague at ESET.

Friday, July 22, 2022

Cobb's Guide to PC and LAN Security: the 30th anniversary of the first version

The Stephen Cobb Complete Book of PC and LAN Security first appeared in print in 1992, an amazing 30 years ago. In celebration of this anniversary, I'm reminding people that a PDF copy of the last version of the book is freely downloadable under a Creative Commons license. 

While a lot of the book's technical content is now dated—a polite way of saying it is stuck in the late 1990s and thus mainly of historical interest—much of the theory and strategy still rings true.

The large file size of this 700-page tome led me to publish it in three easily digestible parts: Part One; Part Two; and Part Three. (You can also scroll down the column on the right of this page for download links.)

Despite the original title, which was imposed by the publisher, the volume that appeared 30 years ago was by no means a "complete book" on the subject; nor is it now a contemporary guide. However, you can still find it on Amazon, even though Amazon.com did not exist when the first version was published. The images on the left of this article are the current Amazon listings of the three versions (which I will explain shortly).

If you are inclined to take this particular trip down computer security's memory lane, I suggest you download the free electronic version rather than purchase on Amazon. On that trip you will find a few items of note, such as this observation:
"The goal of personal computer security is to protect and foster the increased creativity and productivity made possible by a technology that has so far flourished with a minimum of controls, but which finds itself increasingly threatened by the very openness that led to its early success. To achieve this goal, you must step from an age of trusting innocence into a new era of realism and responsibility, without lurching into paranoia and repression."
I'd say that's a decent piece of prognostication for 1992. It's one of the reasons I have kept the book available all these years, a mix of nostalgia, history, and first principles. Along with a number of friends and fellow security professionals—like Winn Schwartau, Bruce Schneier, and Jeff Moss—I am inclined to think that the parlous state of cybersecurity in 2022, relative to the level of cybercriminal activity, could have been avoided if only more people had taken our advice more seriously in the 1990s.

Three Versions and a Free Version

I made a lot of changes when I turned that 1992 volume into The NCSA Guide to PC and LAN Security—a 700 page paperback that was published in 1995—but that edition is also very outdated these days. Around 12 years ago I obtained the copyright to these works and, through an arrangement with the Authors Guild, got it reprinted as Cobb's Guide to PC and LAN Security. This was done largely for sentimental reasons and the copies are only printed on demand. 

However, in that process I obtained a high resolution scan of the entire book. I then converted this to text using Adobe OCR software. The result is what I have put online. (Warning: you may encounter OCR errors and artifacts; no claims are made as to accuracy of the information in this document; use at your own risk and discretion, etc.).
LEGAL STUFF: THIS FREE ELECTRONIC EDITION IS LICENSED BY THE AUTHOR FOR USE UNDER CREATIVE COMMONS, ATTRIBUTION, NONCOMMERCIAL, NO DERIVATIVES.

Computer Security Prognosis and Predictions 

I plan to post more thoughts on computer security "then and now" but for now I leave you with another quote from the 1992 Stephen Cobb Complete Book of PC and LAN Security:
"The most cost-effective long-term approach to personal computer security is the promotion of mature and responsible attitudes among users. Lasting security will not be achieved by technology, nor by constraints on those who use it. True security can only be achieved through the willing compliance of users with universally accepted principles of behavior. Such compliance will increase as society as a whole becomes increasingly computer literate, and users understand the personal value of the technology they use."

Monday, March 28, 2022

Big jump in losses due to Internet crimes in 2021, up 64% according to latest IC3/FBI report

IC3/FBI internet crime data graphed by S. Cobb
In 2021, the world came to rely on digital technologies even more than it had in 2020. Sadly, but quite predictably, at least from my perspective, 2021 also saw a lot more sleazy digital scams and dastardly data breaches than 2020. 

How much more were the estimated losses suffered by individuals and businesses who reported internet crimes to IC3 in 2021? They were up 64% over 2020 according to the recently published 2021 Internet Crimes Report from the FBI and IC3, the Internet Crime Complaint Center.

The annual figure for this Internet crime metric rose from US$4.2 billion in 2020 to US$6.9 billion in 2021. That's almost a doubling in two years, from the 2019 figure of US$3.5 billion. The rise in losses from 2020 to 2021 was the second steepest annual increase in the last decade (2017-2018 saw a 91% jump).
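Those percentages follow directly from the reported loss figures (in US$ billions):

```python
# IC3/FBI reported Internet crime losses cited in this post, US$ billions.
losses = {2019: 3.5, 2020: 4.2, 2021: 6.9}

rise_2021 = (losses[2021] - losses[2020]) / losses[2020] * 100
print(f"2020 to 2021: +{rise_2021:.0f}%")   # → +64%

two_year = losses[2021] / losses[2019]
print(f"2019 to 2021: x{two_year:.2f}")     # → x1.97, almost a doubling
```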
 
While there are some issues with using the IC3 numbers as crime metrics—they were not originally collected as an exercise in crime metrics, but rather as an avenue of attack against the crimes they represent—I have studied each IC3 annual report and am satisfied that they reflect real world trends in cybercrime's impact on victims, as measured by direct monetary loss. (You can find out more about this in my article, Advancing Accurate and Objective Cybercrime Metrics in the Journal of National Security Law & Policy.)

When you put a 64% rise in annual internet crime losses in the context of record levels of spending on cybersecurity in recent years, it says to me that current strategies for securing our digital world against criminal activity are not working as well as they should. For more on cybercrime metrics relative to cybersecurity efforts, see this blog post from last year.

For more on the work that IC3 and the FBI do, please download the 2021 report, and any of the previous reports. If you're a criminology or risk and security geek like me, they make for interesting reading. The report lets you see which types of crime were on the increase in 2021—e.g. there is a growing overlap between romance scams and cryptocurrency fraud—and what steps IC3 has been taking to mitigate scams. The report's chart of losses by age group in 2021 was frankly depressing: older members of society are being hit hard by digital scammers.

What's next for cybercrime and its victims?

Firstly, I think we have to be honest with ourselves and acknowledge that, as human activities go, the abuse of digital systems for selfish ends has been a runaway success. Secondly, we need to realize that we are all victims of this success, regardless of whether or not we have lost any money as a direct result of such abuse. 

As I have said elsewhere, the psychological impact of internet crime creates significant costs, to victims and to society. People lose self-esteem, confidence, and trust. They may need counselling. Their productivity may suffer. Unfortunately, we have not done a good job of measuring harms from criminal abuse of digital systems that are not easily summed up as "how much did you lose?" 

One recent step in the right direction was research in the UK prompted by the consumer group Which? and reported here by the BBC. As the article states, the annual cost of the impact of scams on wellbeing was calculated to be £9.3 billion (roughly US$13 billion). The research suggested that "scam victims faced a drop in life satisfaction, significantly higher levels of anxiety, and lower levels of happiness." In addition, some victims reported "worse general health." Those findings echo this 2016 finding from the non-profit senior support organization Age UK: "older [scam] victims are 2.4 times more likely to die or go into a care home than those who are not scammed." 

These non-financial harms can be translated into the costs they produce: "The average drop in wellbeing for victims of fraud has been valued at £2,509 per year. For online fraud, this estimate is even higher at £3,684" (Which?). 

Now, if we assume that this UK estimate holds true in the US and turn £3,684 into US dollars, we get roughly $5,000 per victim. I know this is guesswork, but I'd really love to see some entity replicate the Which? research in the US. Because, if that $5,000 proves to be a valid assumption, and we multiply it by the number of people reporting crimes to IC3 (847,376 in 2021), we get a figure that represents: "the personal and social cost of Internet crimes reported to IC3 in 2021 in addition to the reported financial losses." 

And that number is a whopping US$4.2 billion (which is a bit uncanny because that same figure was the IC3 financial loss total for 2020). Then, if you put that US$4.2 billion together with the IC3 loss number for 2021 (US$6.9 billion) you're looking at an attention-grabbing annual impact for reported Internet crime of more than US$11 billion; hopefully, enough attention to get more public resources channeled into Internet crime prevention and victim support.
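The back-of-the-envelope estimate above can be written out as a short sketch. The £-to-$ conversion rate is my assumption; the other figures come from the Which? research and the IC3 report quoted in this post:

```python
# Rough estimate of the wellbeing cost of reported internet crime, 2021
GBP_TO_USD = 1.36                 # assumed exchange rate (circa 2021)
wellbeing_cost_gbp = 3_684        # Which? estimate per online fraud victim, per year
ic3_complaints_2021 = 847_376     # complaints reported to IC3 in 2021
ic3_losses_2021_usd = 6.9e9       # direct financial losses reported for 2021

per_victim_usd = wellbeing_cost_gbp * GBP_TO_USD        # ~US$5,000
wellbeing_total = per_victim_usd * ic3_complaints_2021  # ~US$4.2 billion
combined_impact = wellbeing_total + ic3_losses_2021_usd # ~US$11+ billion

print(f"Per victim: ${per_victim_usd:,.0f}")
print(f"Wellbeing cost: ${wellbeing_total / 1e9:.1f} billion")
print(f"Combined annual impact: ${combined_impact / 1e9:.1f} billion")
```

A different exchange-rate assumption moves the total by a few hundred million dollars either way, but the headline conclusion (an annual impact north of US$11 billion) is not sensitive to it.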

Notes:
  • A detailed look at the impact of fraud in general, 24-page PDF of a chapter from the book Cyber Frauds, Scams and Their Victims by Cassandra Cross and Mark Button, 2017.
  • The Fight Cybercrime website which has a lot of helpful info for victims of online fraud, in 12 languages!
  • The source for the statistic that "older [scam] victims are 2.4 times more likely to die or go into a care home than those who are not scammed" — PDF of Age UK report, 2016.

Thursday, April 29, 2021

From cyber-crime metrics to cyber-harm stories: shifting cybersecurity perspectives and cybercrime strategies

Is measuring the amount of cybercrime important? I have argued that it is, and for several different reasons which I have presented in many places; for example, in this article: Advancing Accurate and Objective Cybercrime Metrics in the Journal of National Security Law & Policy.

For me, the most pressing reason to pursue accurate and objective cybercrime metrics is the potential of those numbers to persuade governments and world leaders to do more to counter cybercrime (as in: detect, deter, disrupt, prosecute and sanction perpetrators). The persuasion goes like this: 
  1. Here's how big the cybercrime problem is.
  2. Here's how fast it is growing despite current efforts to solve/reduce it.
  3. Can you see how bad things will get if you don't do more to solve/reduce it?
A similar persuasion strategy has long existed in the cybersecurity industry as part of its efforts to make technology safer (while selling more security products and services—a reality that has undermined the value of industry metrics in policy debates). 

The efficacy of this strategy—"look at these numbers, that's how bad the cyberbadness is, it's time you did more to protect us/you"—has been disappointing to say the least, given the rate at which the cybercrime problem keeps growing. 

Back in 2014, I decided to research this lack of efficacy, exploring risk perception as it relates to crime and technology. I delved into cultural theory of risk, cultural cognition, white male effect, identity protective cognition, and the science of science communication. One thing I learned was that some people are unmoved by statistics and data. 

Relying on stats+facts to convince everyone that there is an urgent problem, one which merits attention and action, is a mistake. For whatever reason, some folk are relatively immune to stats+facts; however, they may be moved by stories.

Ironically, this was a phenomenon that I had already experienced in my early days of promoting security solutions. For some audiences there was nothing more effective than a case study, a story of how some person or organization had become a victim, or how someone had avoided becoming a victim. Even before then, when I was writing my first computer security book, I had made sure that I included stories from which people could learn the value of security policies and practices (The Stephen Cobb Handbook of PC and LAN Security, 1991). 

The problem you run into when you try to use victim stories to pitch security is that, historically, very few people have been willing to share their stories. This may be due to embarrassment or, ironically, to operational reasons. (As a CISSP, I would advise organizations not to share the helpful story of "how Acme firewall is keeping us safe," or the cautionary tale of "how our network was penetrated despite Acme firewall.")

All of which leads to some helpful coincidences. If you investigate the amount of harm caused by cybercrime, rather than just count the number of cybercrimes committed, you get more than just persuasive data, you get moving stories. 

Furthermore, you get a fresh perspective on the problem of cybercrime and the challenge of getting more people to take it more seriously, at four different levels:
  1. Personal: understand how I, or my organization, could be victimized and steps I can take to minimize the risk of that happening.
  2. Political: grasp the level of pain and suffering caused by digitally enabled or enhanced crimes, and calculate their impact on society, down to the medical and social care burdens that victimization generated.
  3. Strategic: use this perspective to argue that funding for medical and social care should include cyber-harm reduction initiatives because fewer people scammed = smaller care burden.
  4. Professional: pursue both qualitative and quantitative research into the harms caused by rampant cyberbadness, from criminal successes to cybersecurity fails.
Moving forward, I want to explore all four levels and share what I find. The process took a step forward this week when I talked myself into delivering a training session about scam avoidance to a community support group. I've done this in the past, but in America. This session will be delivered to a UK audience, specifically people who support carers. 

The Carer Factor

Since we moved back to the UK in 2019, we have found that the importance of social care and the work of unpaid carers is widely recognized. These carers—who tend to be known as caregivers in America—are people who have become part-time or full-time unpaid carers for relatives and friends. (As you can imagine, part of that care work may include technical support, and that may include several aspects of cybersecurity.)

Local governments and charities in the UK make a concerted effort to support unpaid carers, both practically and emotionally. Let me give you an example: thanks to a charity called Carers Trust, I am formally registered as the designated carer for my partner Chey, and for my mother. That means, among other things, that if I get hit by a bus and first responders check my wallet, they will find a card that says I care for these two people, plus a number to call if I am incapacitated. 

That call triggers several services. Carers Trust will step in to provide care to my carees if I cannot be there for them. The organization already has a comprehensive file on the needs of my carees, their circumstances, and so on. Furthermore, if the bus misses me, but I feel like I could really use a break from caring, the carers' support group can cover for me.

I'm sure you can imagine what a huge weight this care group has lifted from my shoulders, and how much peace of mind it has provided to my carees, now they know that there is backup help available. On a less dramatic, but still very important level, the care group provides me a place to meet with other carers and I find this helpful, both psychologically and practically.

My involvement with the care community has led me to consider fresh lines of inquiry into the reduction of cybercrime and technology abuse. Indeed, I can see this care group, and the many others like it around the country, becoming a valuable resource in the quest to reduce the harms caused by scammers and fraudsters.

If you check back here in the latter part of May there should be a link to the training session content. (Like all of my content these days, it is free and suitable for sharing.) In the meantime, here are some links that might be of interest:
  • A detailed look at the impact of fraud in general, 24-page PDF of a chapter from the book Cyber Frauds, Scams and Their Victims by Cassandra Cross and Mark Button, 2017.
  • The Fight Cybercrime website which has a lot of helpful info for victims of online fraud, in 12 languages!
  • The source for the statistic that "older [scam] victims are 2.4 times more likely to die or go into a care home than those who are not scammed" — PDF of Age UK report, 2016.
  • The website of Carers Trust in the UK: "a major charity for, with and about carers".

Note: If you found this page interesting or helpful or both, please consider clicking the button below to buy me a coffee and support a good cause while fueling more independent research and ad-free content like this. Thanks!

Button says Buy Me a Coffee, in case you feel like supporting more writing like this.



Thursday, March 18, 2021

As predicted, Internet crime surged in 2020, losses up 20% based on FBI and IC3 reports: analysis and opinion

Losses to individual and business victims of internet crime in 2020 exceeded $4 billion according to the recently published 2020 Internet Crimes Report from the FBI and IC3; this represents a 20% increase over losses reported in 2019. The number of complaints also rose dramatically, up nearly 70%.

IC3/FBI internet crime data graphed by S. Cobb
Throughout 2020, criminologists and cybersecurity experts had expressed growing fears that 2020 would be a big year for internet crime, particularly as it became clear that many criminals were prepared to ruthlessly exploit the COVID-19 pandemic for their own selfish ends.

When the 2019 Internet Crimes Report was published in February of 2020 it documented "$3.5 billion in losses to individual and business victims."

What I said back then, about the loss number that I expected to see in the 2020 report, was this: "I certainly wouldn't bet against it blowing through $4 billion." (Here's a link to the article where I said that.)

Quite frankly, I'm not the least bit happy that I was right. Just as I take no pleasure in having been right for each of the last 20 years, when my annual response to "what does the year ahead look like for cybersecurity?" has been to say, with depressingly consistent accuracy: it's going to get worse before it gets better. As I see it, a 20% annual increase in losses to internet crime, despite record levels of spending on cybersecurity, is a clear indicator that current strategies for securing our digital world against criminal activity are not working.

A shred of hope?

However, like many cybersecurity professionals, I have always had an optimistic streak, a vein of hope compressed deep beneath the bedrock of my experience. (Periodically, we have to mine this hope to counter the urge to throw up our hands and declare: "We're screwed! Let's just go make music.")

So let me offer a small shred of hope. 

I am honor bound to point out that cybercrime's impact last year may not have been as bad as I had come to expect. Yes, at the start of 2020 I predicted that cybercrime would maintain its steep upward trajectory. I said the IC3/FBI loss number for 2020 would pass $4 billion and it did. But then "the Covid effect" kicked in, generating scores of headlines about criminal exploitation of the pandemic in both cyberspace and meatspace. And behind each of those headlines were thousands of victims experiencing a range of distressing psychological impacts and economic loss.

By the end of 2020 I was predicting that the IC3/FBI number could be as high as $4.7 billion (see my December, 2020, article: Cybersecurity had a rough 2020). In that context, the reported 2020 number of $4.2 billion was "better than expected." Indeed, the year-on-year increase from 2019 to 2020 of 20% was not as bad as the 2018-2019 increase of 29%. 

However, when I look at the graph at the top of this article I'm not yet ready to say things are improving. And I'm very aware that every one of the 791,790 complaints of suspected internet crime that the IC3 catalogued in 2020—an increase of more than 300,000 from 2019—signifies a distressing incident that negatively impacted the victim, and often their family and friends as well.

In 2020, the pandemic proved to be a very criminogenic phenomenon. I'm pretty sure it also generated greater public awareness of statistical terms like growth curves, rolling averages, trend lines, dips, and plateaus. Right now I see no reason to think cybercrime will dip or even plateau in 2021. But let's hope I'm wrong and in the months and years to come there is a turnaround in the struggle to reduce the abuse of digital technologies, hopefully before my vein of optimism is all mined out.

Disclaimer: I acknowledge that there are issues with using the IC3 numbers as crime metrics. For a start, they are not collected as an exercise in crime metrics, but rather as part of one avenue of attack against the crimes they represent, an issue I addressed in this law journal article. However, I have studied each IC3 annual report and am satisfied that collectively they reflect real world trends in cybercrime's impact on victims, as measured by direct monetary loss (the psychological impact of internet crime creates other costs, to victims and society, but so far we have done a woefully poor job of measuring those).

As soon as I get a chance I will dig deeper into the 2020 IC3/FBI report and report back; I'm particularly interested in trends impacting the "60 and over" demographic, which @Chey_Cobb and I highlighted in the IEEE piece we wrote about age tech after last year's report.


Friday, March 05, 2021

Secu-ring video doorbells and other 'smart' security cameras: some helpful links

Photo of a doorbell by Yan Ots. Available freely on @unsplash.

Are you thinking of installing a video doorbell or smart security camera? Are you concerned about the security of the one you have already installed? These links should help: 

How to secure your Ring camera and account
https://www.theverge.com/2019/12/19/21030147/how-to-secure-ring-camera-account-amazon-set-up-2fa-password-strength-hack

Ring security camera settings
https://www.wired.co.uk/article/ring-security-camera-settings

Video doorbell security: How to stop your smart doorbell from being hacked
https://www.which.co.uk/reviews/smart-video-doorbells/article/video-doorbell-security-how-to-stop-your-smart-doorbell-from-being-hacked-aCklb4Y4rZnw

How the WYZE camera can be hacked
https://learncctv.com/can-the-wyze-camera-be-hacked/

How to secure your WYZE security camera account
https://www.cnet.com/how-to/wyze-camera-data-leak-how-to-secure-your-account-right-now/

How to protect 'smart' security cameras and baby monitors from cyber attack
https://www.ncsc.gov.uk/guidance/smart-security-cameras-using-them-safely-in-your-home

Yes, your security camera could be hacked: Here's how to stop spying eyes
https://www.cnet.com/how-to/yes-your-security-camera-could-be-hacked-heres-how-to-stop-spying-eyes/

On a related topic, and as a way to understand how hackers look for vulnerabilities in digital devices, check out this article at Hackaday: https://hackaday.com/2019/03/28/reverse-engineering-a-modern-ip-camera/. It links to a cool, four-part reverse engineering exercise by Alex Oporto: https://dalpix.com/reverse-engineering-ip-camera-part-1


Thursday, January 28, 2021

Data Privacy Day 2021: Selected data privacy reading and viewing, past and present

For this Data Privacy Day—January 28, 2021—I have put together an assortment of items, suggested resources, and observations that might prove helpful. 

The first item is time-sensitive: a live-streamed virtual privacy day event, Data Privacy in an Era of Global Change. The event begins at noon, New York time, 5PM London time, and features a wide range of excellent speakers. This is the latest iteration of an annual event organized by the National Cyber Security Alliance that goes back at least seven years, each one live streamed.

The 2014 event included me on a panel at Pew Research in D.C., along with Omer Tene of the International Association of Privacy Professionals (IAPP), plus John Gevertz, Global Chief Privacy Officer of ADP, and Erin Egan, CPO of Facebook (which arranged the live streaming). 

In 2015, I was on another Data Privacy Day panel, this one focused on medical data and health privacy. It featured Peter Swire, who was heavily involved in the creation of HIPAA. By request, I told the story of Frankie and Jamie, "A Tale of Medical Fraud," which involved identity theft with serious data privacy implications.

Also on the panel were: Anne Adams, Chief Compliance & Privacy Officer for Emory Healthcare; Pam Dixon, Executive Director of the World Privacy Forum; and Hilary M. Wandall, CPO of Merck—the person to whom I was listening very carefully in a still from the recorded video on Vimeo (which is still online, though I could not get it to play).

The second item is The Circle, both the 2013 novel by Dave Eggers—my fairly lengthy review of which appears here—and the 2017 movie starring Emily Watson and Tom Hanks, the trailer for which should be playable below.


While many critics didn't like the film (Metascore is only 43), the content was close enough to the book for me to enjoy it (bearing in mind that I'm someone who's "into" data privacy). Also, the film managed to convey some of the privacy nuances central to Eggers' prescient novel. Consider the affirmation often used by the social media company at the heart of the story: "Sharing is caring." This is used to guilt trip users into sharing more and more of their lives with more and more people, because some of those people derive emotional and psychological support from that sharing. 

Depending on where in the world you live, you may be able to catch The Circle on either Amazon Prime or Netflix (although the latter has—ironically, and possibly intentionally so—a reality TV series of the same name, the premise of which is about as depressing as it gets: "'Big Brother' meets 'Catfish' on this reality series on which not everything is as it seems").

Note, if you are working in any sort of "need to raise awareness and/or spark discussions of privacy issues" role then films can be very helpful. Back around 2005 or so, Chey organized a week-long "Privacy Film Festival" at Microsoft's headquarters. Four movies were screened at lunchtime on consecutive weekdays and then a Friday panel session brought in some privacy and security heavyweights (including both Donn Parker and Ari Schwartz as I recall—movies included Enemy of the State and Minority Report). The overall feedback on the whole endeavor was very positive.

Item number three: the privacy meter. This also relates to the "need to raise awareness and/or spark discussions of privacy issues." I started using it in 2002 when talking to companies about what at that time was, for many of them, an emerging issue/concern/requirement.
 
The idea was to provide a starting point for reflection and conversation. The goal was to help everyone from management to employees to see that there were many different attitudes to personal privacy within the organization. What I did not convey back then—at least not as much as I probably should have—was the extent to which privilege and economic status can influence these attitudes. See the next item for more on that.

Item number four is a privacy reading list, shamelessly headed by my 2016 white paper on data privacy law. While the paper does not cover developments in data privacy law in the last few years, several people have told me that the historical background it provides is very helpful, particularly when it comes to understanding why Data Privacy Day in America is Data Protection Day in many other countries. And it does contain about 80 references, including links to all major US privacy legislation up to 2016.

Moving from privacy laws to privacy realities, like the intersection of privacy, poverty, and privilege, here are a number of thought-provoking articles you might want to read: 

Finally, getting back to a point raised earlier in this post, one that comes up every Data Privacy Day, here is my 2018 article "Data Privacy vs. Data Protection: Reflecting on Privacy Day and GDPR."

P.S. If you're on Twitter you might enjoy what I've been tweeting about #DataPrivacyDay


Tuesday, January 05, 2021

AI's most troubling problem? It's made of chips and code

If we define "AI problem" as an obstacle to maximizing the benefits of Artificial Intelligence, it is clear that there are a number of these, ranging from the technical and practical to the ethical and cultural. As we say goodbye to 2020, I think we may look back on it, in a few years' time, as the year in which some of the most serious AI problems emerged into the mainstream of public discourse. However, there is one very troubling gap in this growing awareness of AI problems, a seldom discussed problem that I present below.

Image of computer servers, visually distorted

Growing Doubts About AI?

As one data science publication put it, 2020 was "marked by ethical issues of AI going mainstream, including, but not limited to, gender/race bias, police and military use, face recognition, surveillance, and deep fakes." — The State of AI in 2020.

One of the most widely discussed indicators of problems in AI in 2020 was the “Timnit Gebru incident” (More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru). This seems to be a debacle of Google’s own making, but it surfaced issues of AI bias, AI accountability, erosion of privacy, and environmental impact. 

As we enter 2021, bias seems to be the AI problem that is "enjoying" the widest awareness. A quick Google search for ai bias produces 139 million results, of which more than 300,000 appear as News. However, 2020 also brought growing concerns about attacks on the way AI systems work, and the ways in which AI can be used to commit harm, notably in "Malicious Uses and Abuses of Artificial Intelligence," a report produced by Trend Micro Research in conjunction with the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol's European Cybercrime Centre (EC3). 

Thankfully, awareness of AI problems was much in evidence at "The Global AI Summit," an online "think-in" that I attended last month. The event was organized by Tortoise Media, and some frank discussion of AI problems occurred after the presentation of highlights from the heavily researched and data-rich Global AI Index. Unfortunately, the AI problem that troubles me the most was not on the agenda (it was also absent from the Trend/UN report). 

AI's Chip and Code Problem

The stark reality, obscured by the hype around AI, is this: all implementations of AI are vulnerable to attacks on the hardware and software that run them. At the heart of every AI beats one or more CPUs running an operating system and applications. As someone who has spent decades studying and dealing with vulnerabilities in, and abuse of, chips and code, this is the AI problem that worries me the most:

AI RUNS ON CHIPS AND CODE, BOTH OF WHICH ARE VULNERABLE TO ABUSE

In the last 10 years we have seen successful attacks on the hardware and software at the heart of mission critical information systems in hundreds of prestigious entities, both commercial and governmental. The roll call of organizations and technologies that have proven vulnerable to abuse includes the CIA, NSA, DHS, NASA, Intel, Cisco, Microsoft, FireEye, Linux, SS7, and AWS. 

Yet despite a constant litany of new chip and code vulnerabilities, and wave after wave of cybercrime and systemic intrusions by nation states—some of which go undetected for months, even years—a constantly growing chorus of AI pundits persists in heralding imminent human reliance on AI systems as though it was an unequivocally good thing. 

Such "AI boosterism" keeps building, seemingly regardless of the large body of compelling evidence that supports this statement: no builder or operator of any computer system, including those that run AI, can guarantee that it will not be abused, misused, impaired, corrupted, or commandeered through unauthorized access or changes to its chips and code.

And this AI problem is even more serious when you consider that it is the one about which meaningful awareness seems to be lowest. Frankly, I've been amazed at how infrequently this underlying vulnerability of AI is publicly mentioned, noted, or addressed, where publicly means: "discoverable by me using Google and asking around in AI circles."

Of course, AI enthusiasts are not alone in assuming that, by the time their favorite technology is fully deployed, it will be magically immune to the chip-and-code vulnerabilities inherent in computing systems. Fans of space exploration are prone to similar assumptions. (Here's a suggestion for any journalists reading this: the next time you interview Elon Musk, ask him what kind of malware protection will be in place when he rides the SpaceX Starship to Mars.)

Boosters of every new technology—pun intended—seem destined to assume that the near future holds easy fixes for whatever downsides skeptics of that technology point out. Mankind has a habit of saying "we can fix that" but not actually fixing it, from the air-poisoning pollution of fossil fuels to ocean-clogging plastic waste. (I bet Mr. Musk sees no insurmountable problems with adding thousands of satellites to the Earth's growing shroud of space clutter.) 

I'm not sure if I'm the first person to say that the path to progress is paved with assumptions, but I'm pretty sure it's true. I would also observe that many new technologies arrive wearing a veil of assumptions. This is evident when people present AI as so virtuous and beneficent that it would be downright churlish and immodest of anyone to question the vulnerability of its enabling technology.

The Ethics of AI Boosterism

One question I kept coming back to in 2020 was this: how does one avert the giddy rush to deploy AI systems for critical missions before they can be adequately protected from abuse? While I am prepared to engage in more detailed discussions about the validity of my concerns, I do worry that these will get bogged down in technicalities of which there is limited understanding among the general public.

However, as 2020 progressed and "the ethics of AI" began to enjoy long-overdue public attention, another way of breaking through the veil of assumptions obscuring AI's inherent technical vulnerability occurred to me. Why not question the ethics of "AI boosterism"? I mean, surely we can all agree that advocating development and adoption of AI without adequately disclosing its limitations raises ethical questions.

Consider this statement: as AI improves, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases. How ethical is it to say that, given what we know about how vulnerable AI systems will be if the hardware and software on which they run is not significantly more secure than what we have available today?

To be ethical, any pitches for AI backing and adoption should come with a qualifier, something like "provided that the current limitations of the enabling technology can be overcome." For example, I would argue that the earlier statement about medical use of AI would not be ethical unless it was worded something like this: as AI improves, and if the current limitations of the enabling technology can be overcome, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases.

Unlikely? Far-fetched? Never going to happen? I am optimistic that the correct answer is no. But I invite doubters to imagine for just a moment how much better things might have gone, how much better we might feel about digital technology today, if previous innovations had come with a clear up-front warning about their potential for abuse.

40 digital technologies open to abuse
A few months ago, to help us all think about this, I wrote "A Brief History of Digital Technology Abuse." The article title refers to "40 chapters" but these are only chapter headings that match the 40 items in this word cloud. I invite you to check it out.

In a few weeks I will have some statistics to share about the general public's awareness of AI problems. I will be sure to provide a link here. (See: AI problem awareness grew in 2020, but 46% still “not aware at all” of problems with artificial intelligence.)

In the meantime, I would love to hear from anyone about their work, or anyone else's, on the problem of defending systems that run AI against abuse. (Use the Comments or the contact form at the top of the page, or check out my socials on Linktree.) 
