Stephen and Chey Cobb: Independent Researchers
Public-interest technology, information security, data privacy, risk and gender issues in tech
Wednesday, November 29, 2023
QR code abuse 2012-2023
Saturday, November 04, 2023
Artificial Intelligence is really just another vulnerable, hackable, information system
- a motivated offender,
- a suitable target, and
- the absence of a capable guardian.
- Chips – Meltdown, Spectre, Rowhammer, Downfall
- Code – Firmware, OS, apps, viruses, worms, Trojans, logic bombs
- Data – Poisoning, micro and macro (e.g. LLMs and SEO poisoning)
- Connections – Remote access compromise, AITM attacks
- Electricity – Backhoe attack, malware e.g. BlackEnergy, Industroyer
Wednesday, April 12, 2023
What is ChatGPT and how can AI get things wrong: an annotated example using jackware
[Image: You can't trust what ChatGPT says]
The term "jackware" was coined by two researchers, Stefan Savage and Tadayoshi Kohno, in a 2016 paper titled "Experimental Security Analysis of a Modern Automobile". In the paper, Savage and Kohno described a proof-of-concept attack in which they were able to take control of a car's electronic systems using a laptop computer. They used the term "jackware" to describe a new class of malware that could be used to take control of vehicles in a similar way to how ransomware is used to take control of computer systems.
Frankly, at first glance this looks like a very impressive answer: it names names and cites an academic source, while providing a helpful description of jackware, all articulated in grammatically correct English. It's way more helpful than the output of a plain old search. You can see why so many people get so excited about artificial intelligence.
But sadly, several key facts in ChatGPT's answer are just plain wrong, so wrong that you could be in serious trouble with your teacher-editor-readers if you pasted that paragraph of ChatGPT output into a piece of work. For a start, anyone familiar with the subject matter would know that you hadn't read that paper.
The academic paper to which ChatGPT refers was published in 2010, not 2016. You can see that from this listing of the paper. This is not just a pedantic quibble; the named paper is legendary in the world of automotive cybersecurity, partly because it was published way back in 2010. It documents groundbreaking work done by Savage et al. in the 2000s, way before the flashy Jeep hack of 2015 by Miller and Valasek.
More blatantly erroneous is the identification of this 2010 paper and its authors as the source of the term jackware. Simply put, the paper does not contain the word jackware. In fact, the person who coined the term jackware to describe malicious code used to take over vehicles, was me, Stephen Cobb, and I did that in May of 2016, on this blog, in a post titled: Jackware: coming soon to a car or truck near you?
In July of 2016, I penned Jackware: When connected cars meet ransomware for We Live Security, the award-winning global cybersecurity blog. As further evidence, I present exhibit A, which shows how you can use iterative time-constrained searches to identify when something first appears. Constraining the search to the years 1998 to 2015, we see that no relevant mention of jackware was found prior to 2016. Apparently, jackware had been used as a collective noun for leather mugs, but there are no software-related search results before 2016. Next you can see that, when the search is expanded to include 2016, the We Live Security article tops the results.
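If you want to try this kind of first-use hunt yourself, Google's date filters can constrain results by time. A sketch of the two queries described above, using the before:/after: operators (operator support and syntax can change as Google updates search, so treat this as illustrative):

```text
jackware after:1998-01-01 before:2016-01-01    (1998-2015: no software-related results)
jackware after:1998-01-01 before:2017-01-01    (adding 2016: We Live Security article tops results)
```

The same effect can be had from the custom date range option under the Tools menu on the search results page.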
So how did ChatGPT get things so wrong? The simple answer is that ChatGPT doesn't know what it's talking about. What it does know is how to string relevant words and numbers together in a plausible way. Stefan Savage is definitely relevant to car hacking. The year 2016 is relevant because that's when jackware was coined. And the research paper that ChatGPT referenced does contain numerous instances of the word jack. Why? Because the researchers wisely tested their automotive computer hacks on cars that were on jack stands.
To be clear, ChatGPT is not programmed to use a range of tools to make sure it is giving you the right answer. For example, it didn't perform an iterative time-constrained online search like the one I did in order to find the first use of a new term.
Hopefully, this example will help people see what I think is a massive gap between the bold claims made for artificial intelligence and the plain fact that AI is not yet intelligent in a way that equates to human intelligence. That means you cannot rely on ChatGPT to give you the right answer to your questions.
So what happens if we do get to a point where people rely—wisely or not—on AI? That's when AI will be maliciously targeted and abused by criminals, just like every other computer system, something I have written about here.
Ironically, the vulnerability of AI to abuse can be both a comfort to those who fear AI will exterminate humans, and a nightmare for those who dream of a blissful future powered by AI. In my opinion, the outlook for AI, at least for the next few decades, is likely to be a continuation of the enthusiasm-disillusionment cycle, with more AI winters to come.
Note 1: For more on those AI dreams and fears, I should first point out that they are based on expectations that the capabilities of AI will evolve from their current level to a far more powerful technology referred to as Artificial General Intelligence or AGI. For perspective on this, I recommend listening to "Eugenics and the Promise of Utopia through Artificial General Intelligence" by two of my Twitter friends, @timnitGebru and @xriskology. This is a good introduction to the relationship between AI development and a bundle of beliefs/ideals/ideas known as TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism.
Note 2: When I first saw Google assume jackware was a typo for Jaguar I laughed out loud because I was born and raised in Coventry, England, the birthplace of Jaguar cars. In 2019, when my mum, who lives in Coventry, turned 90, Chey and I moved here, and that's where I am writing this. Jaguars are a common sight in our neighbourhood, not because it's a posh part of the city, but because a lot of folks around here work at Jaguar and have company cars.
Tuesday, March 14, 2023
Internet crime surged in 2022: possibly causing as much as $160 billion in non-financial losses
This increase, which comes on top of a 64% surge from 2020 to 2021, has serious implications for companies and consumers who use the Internet, as well as for law enforcement and government.
Those implications are discussed in an article that I wrote over on LinkedIn in the hope that more people will pay attention to the increasingly dire state of Internet crime prevention and deterrence, and how that impacts people. In that article I also discuss the growing awareness that Internet crime creates even more harm than is reflected in the financial losses suffered by victims. There is mounting evidence—some of which I cite in the article—that the health and wellbeing of individuals hit by online fraud suffers considerably, even in cases of attempted fraud where no financial loss occurs.
One UK study estimated the value of this damage at the equivalent of more than $4,000 per victim. Consider what happens if we round down the number of cases reported in the IC3/FBI annual summary for 2022 to 800,000, then assume that number reflects a fifth of the actual number of cases in which financial loss occurred. That's 4 million cases. Now assume those cases were one tenth of the attempted online crimes and multiply that 40 million by the $4,000 average hit to health and wellbeing estimated by researchers. The result is $160 billion, and that's just for one year; a huge amount of harm to individuals and society.
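For transparency, here is that chain of assumptions expressed as a quick calculation. To be clear, the under-reporting and attempted-versus-completed multipliers are this post's working assumptions, not measured values:

```python
# Rough estimate of annual non-financial harm from internet crime,
# following the chain of assumptions in the paragraph above.
reported_cases = 800_000          # IC3/FBI complaint count, rounded
loss_cases = reported_cases * 5   # assume reports are 1 in 5 of cases with financial loss
attempts = loss_cases * 10        # assume loss cases are 1 in 10 of attempted crimes
harm_per_victim = 4_000           # UK estimate of wellbeing damage per victim, in USD
total_harm = attempts * harm_per_victim
print(f"${total_harm / 1e9:.0f} billion")  # → $160 billion
```

Change any one multiplier and the total moves proportionally, which is why I present this as an order-of-magnitude estimate rather than a precise figure.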
Saturday, December 17, 2022
Digital Baitballs and Shrinkage: a cybersecurity lesson from 2022
[Image: A school of baitfish forming a ball to reduce predation (Shutterstock)]
If 2022 has taught us anything about cybersecurity, it is this: our combined efforts to protect the world's digital systems and the vital data that they process are capable of thwarting very high levels of sustained criminal activity, where "thwart" means preventing the complete collapse of trust in digital technology and limiting casualties to levels that appear to be survivable, if not acceptable.
In other words, despite all the efforts of bad actors, from local scammers to nation states, abusing all manner of digital technologies, to commit everything from petty crimes to war crimes, humans are surviving, and we are continuing to expand our reliance on said technologies.
Of course, this lesson would appear to offer little comfort to the victims of digital crime in 2022, the countless companies, consumers, non-profit organizations, and government entities that lost money and peace of mind to the hordes of ethically challenged and maliciously motivated perpetrators of cyber-badness.*
Is survival enough?
[Image: Baitball and a swordfish (Shutterstock)]
For example, small fish that spend most of their lives in the open ocean form a tight group when predators approach; then they swirl around in a ball to make it harder for predators to select targets. I wrote about this phenomenon—the baitball—in a recent article on LinkedIn.
So, the good news for 2022 is that we can head into 2023 knowing that the world can survive a large amount of ongoing cyberbadness. We have seen that levels of criminal abuse of digital technology can rise quite high without resulting in the breakdown of society.
(You could even argue that cybercrime is falling in relation to the growing number of criminal opportunities created by the ongoing deployment of new digital technologies and devices, but that's for a different article.)
The bad news is that surviving is not as enjoyable and fulfilling as thriving. Living just this side of the breakdown of society means the other side is a looming presence, a constant stress factor, as is the knowledge that any one of us could be the next cybercrime victim.
Shrinkage
So what will it take to get from surviving to thriving, to a state in which cybercrime is either eliminated or reduced to a manageable level? Unfortunately, the short answer is: it will take a lot. The countries of the world need to agree to, and enforce, norms of ethical behaviour in the digital realm. If that sounds almost impossible given the current state of the world, then you have a measure of how much effort it is going to take to eliminate cybercrime or reduce it to a manageable level. However, it should be noted that the idea of reducing crime to a manageable level is not unprecedented.
Shopkeepers learned long ago that it is almost impossible to stop their stock from shrinking. Some employees will swipe stock from the stockroom. Some customers will shoplift. Furthermore, some vendors will over-charge and under-deliver. Taken together, these money-losing phenomena are known as shrinkage.
Despite efforts to reduce shrinkage, including the use of technology, it still cuts into retail revenue in America to the tune of 1.5% per year on average, equating to losses in the order of $100 billion in 2021. Nevertheless, despite shrinkage, the retail sector keeps going. Retailers don't expect to eliminate shrinkage, but they will spend time and money on measures to keep it to a relatively low percentage.
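For scale, the two shrinkage figures quoted above are mutually consistent; a quick back-of-envelope check using only the numbers cited:

```python
# Back-of-envelope check on the shrinkage figures quoted above:
# ~1.5% average shrinkage producing ~$100 billion in 2021 losses
# implies a US retail revenue base of roughly $6.7 trillion.
shrinkage_rate = 0.015       # average shrinkage as a share of retail revenue
shrinkage_losses = 100e9     # approximate 2021 losses, in USD
implied_revenue = shrinkage_losses / shrinkage_rate
print(f"${implied_revenue / 1e12:.1f} trillion")  # → $6.7 trillion
```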
So what are the prospects for reducing the impact of cybercrime to a very low level, perhaps a very small percentage of GDP? I honestly don't know. We are still a long way from getting a full picture of cybercrime's impact; this is particularly true of the psychological and health impacts. There are hidden social and economic costs as well, given the not insignificant percentage of people who don't go online due to fear of cybercrime.
Some would argue that the term cybercrime is becoming problematic in discussions like this, given that most predatory crime today has "cyber" aspects. Fortunately, there is plenty of evidence that people who commit predatory crime can stop, and many do so as they get older, start families, or get a "proper" job. In criminology this is known as desistance, and it may actually be easier for people with digital skills to desist.
In the broad scheme of things, the most intractable obstacle to reducing cyberbadness may not be predatory criminals clinging to a crooked lifestyle; it could well be humans who are prepared to use digital technologies like social media to spread disinformation, undermine truth, and foster hatred in furtherance of selfish agendas.
Note: To the best of my knowledge, the term cyber-badness was coined by Cameron Camp, my friend and colleague at ESET.
Friday, July 22, 2022
Cobb's Guide to PC and LAN Security: the 30th anniversary of the first version
The Stephen Cobb Complete Book of PC and LAN Security first appeared in print in 1992, an amazing 30 years ago. In celebration of this anniversary, I'm reminding people that a PDF copy of the last version of the book is freely downloadable under a Creative Commons license.
While a lot of the book's technical content is now dated—a polite way of saying it is stuck in the late 1990s and thus mainly of historical interest—much of the theory and strategy still rings true.
The large file size of this 700-page tome led me to publish it in three easily digestible parts: Part One; Part Two; and Part Three. (You can also scroll down the column on the right of this page for download links.)
Despite the original title, which was imposed by the publisher, the volume that appeared 30 years ago was by no means a "complete book" on the subject; nor is it now a contemporary guide. However, you can still find it on Amazon, even though Amazon.com did not exist when the first version was published. The images on the left of this article are the current Amazon listings of the three versions (which I will explain shortly). If you are inclined to take this particular trip down computer security's memory lane, I suggest you download the free electronic version rather than purchase on Amazon. On that trip you will find a few items of note, such as this observation:
"The goal of personal computer security is to protect and foster the increased creativity and productivity made possible by a technology that has so far flourished with a minimum of controls, but which finds itself increasingly threatened by the very openness that led to its early success. To achieve this goal, you must step from an age of trusting innocence into a new era of realism and responsibility, without lurching into paranoia and repression."I'd say that's a decent piece of prognostication for 1992. It's one of the reasons I have kept the book available all these years, a mix of nostalgia, history, and first principles. Along with a number of friends and fellow security professionals—like Winn Schwartau, Bruce Schneier, and Jeff Moss—I am inclined to think that the parlous state of cybersecurity in 2022, relative to the level of cybercriminal activity, could have been avoided is only more people had taken our advice more seriiously in the 1990s.
Three Versions and a Free Version
I made a lot of changes when I turned that 1992 volume into The NCSA Guide to PC and LAN Security—a 700-page paperback that was published in 1995—but that edition is also very outdated these days. Around 12 years ago I obtained the copyright to these works and, through an arrangement with the Authors Guild, got it reprinted as Cobb's Guide to PC and LAN Security. This was done largely for sentimental reasons and the copies are only printed on demand.

LEGAL STUFF: THIS FREE ELECTRONIC EDITION IS LICENSED BY THE AUTHOR FOR USE UNDER CREATIVE COMMONS, ATTRIBUTION, NONCOMMERCIAL, NO DERIVATIVES.
Computer Security Prognosis and Predictions
I plan to post more thoughts on computer security "then and now" but for now I leave you with another quote from the 1992 Stephen Cobb Complete Book of PC and LAN Security:

"The most cost-effective long-term approach to personal computer security is the promotion of mature and responsible attitudes among users. Lasting security will not be achieved by technology, nor by constraints on those who use it. True security can only be achieved through the willing compliance of users with universally accepted principles of behavior. Such compliance will increase as society as a whole becomes increasingly computer literate, and users understand the personal value of the technology they use."
Monday, March 28, 2022
Big jump in losses due to Internet crimes in 2021, up 64% according to latest IC3/FBI report
[Image: IC3/FBI internet crime data graphed by S. Cobb]
What's next for cybercrime and its victims?
- A detailed look at the impact of fraud in general, 24-page PDF of a chapter from the book Cyber Frauds, Scams and Their Victims by Cassandra Cross and Mark Button, 2017.
- The Fight Cybercrime website which has a lot of helpful info for victims of online fraud, in 12 languages!
- The source for the statistic that "older [scam] victims are 2.4 times more likely to die or go into a care home than those who are not scammed" — PDF of Age UK report, 2016.
Thursday, April 29, 2021
From cyber-crime metrics to cyber-harm stories: shifting cybersecurity perspectives and cybercrime strategies
- Here's how big the cybercrime problem is.
- Here's how fast it is growing despite current efforts to solve/reduce it.
- Can you see how bad things will get if you don't do more to solve/reduce it?
- Personal: understand how I, or my organization, could be victimized and steps I can take to minimize the risk of that happening.
- Political: grasp the level of pain and suffering caused by digitally enabled or enhanced crimes, and calculate their impact on society, down to the medical and social care burdens that victimization generated.
- Strategic: use this perspective to argue that funding for medical and social care should include cyber-harm reduction initiatives because fewer people scammed = smaller care burden.
- Professional: pursue both qualitative and quantitative research into the harms caused by rampant cyberbadness, from criminal successes to cybersecurity fails.
The Carer Factor
- A detailed look at the impact of fraud in general, 24-page PDF of a chapter from the book Cyber Frauds, Scams and Their Victims by Cassandra Cross and Mark Button, 2017.
- The Fight Cybercrime website which has a lot of helpful info for victims of online fraud, in 12 languages!
- The source for the statistic that "older [scam] victims are 2.4 times more likely to die or go into a care home than those who are not scammed" — PDF of Age UK report, 2016.
- The website of Carers Trust in the UK: "a major charity for, with and about carers".
Thursday, March 18, 2021
As predicted, Internet crime surged in 2020, losses up 20% based on FBI and IC3 reports: analysis and opinion
Losses to individual and business victims of internet crime in 2020 exceeded $4 billion according to the recently published 2020 Internet Crimes Report from the FBI and IC3; this represents a 20% increase over losses reported in 2019. The number of complaints also rose dramatically, up nearly 70%.
[Image: IC3/FBI internet crime data graphed by S. Cobb]
When the 2019 Internet Crimes Report was published in February of 2020 it documented "$3.5 billion in losses to individual and business victims."
What I said back then, about the loss number that I expected to see in the 2020 report, was this: "I certainly wouldn't bet against it blowing through $4 billion" (here's a link to the article where I said that).
Quite frankly, I'm not the least bit happy that I was right. Just as I take no pleasure in having been right for each of the last 20 years, when my annual response to "what does the year ahead look like for cybersecurity?" has been to say, with depressingly consistent accuracy: it's going to get worse before it gets better. As I see it, a 20% annual increase in losses to internet crime, despite record levels of spending on cybersecurity, is a clear indicator that current strategies for securing our digital world against criminal activity are not working.
A shred of hope?
However, like many cybersecurity professionals, I have always had an optimistic streak, a vein of hope compressed deep beneath the bedrock of my experience. (Periodically, we have to mine this hope to counter the urge to throw up our hands and declare: "We're screwed! Let's just go make music.")
So let me offer a small shred of hope.
I am honor bound to point out that cybercrime's impact last year may not have been as bad as I had come to expect. Yes, at the start of 2020 I predicted that cybercrime would maintain its steep upward trajectory. I said the IC3/FBI loss number for 2020 would pass $4 billion and it did. But then "the Covid effect" kicked in, generating scores of headlines about criminal exploitation of the pandemic in both cyberspace and meatspace. And behind each of those headlines were thousands of victims experiencing a range of distressing psychological impacts and economic loss.
By the end of 2020 I was predicting that the IC3/FBI number could be as high as $4.7 billion (see my December, 2020, article: Cybersecurity had a rough 2020). In that context, the reported 2020 number of $4.2 billion was "better than expected." Indeed, the year-on-year increase from 2019 to 2020 of 20% was not as bad as the 2018-2019 increase of 29%.
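The year-on-year figures follow directly from the reported totals; for example, the 2019-to-2020 increase (loss totals in USD billions, as cited in this post):

```python
# Year-on-year growth in IC3/FBI reported internet crime losses,
# using the totals cited in this post (USD billions).
losses_2019 = 3.5
losses_2020 = 4.2
growth_pct = (losses_2020 - losses_2019) / losses_2019 * 100
print(f"{growth_pct:.0f}%")  # → 20%
```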
However, when I look at the graph at the top of this article I'm not yet ready to say things are improving. And I'm very aware that every one of the 791,790 complaints of suspected internet crime that the IC3 catalogued in 2020—an increase of more than 300,000 from 2019—signifies a distressing incident that negatively impacted the victim, and often their family and friends as well.
In 2020, the pandemic proved to be a very criminogenic phenomenon. I'm pretty sure it also generated greater public awareness of statistical terms like growth curves, rolling averages, trend lines, dips, and plateaus. Right now I see no reason to think cybercrime will dip or even plateau in 2021. But let's hope I'm wrong and in the months and years to come there is a turnaround in the struggle to reduce the abuse of digital technologies, hopefully before my vein of optimism is all mined out.
Disclaimer: I acknowledge that there are issues with using the IC3 numbers as crime metrics. For a start, they are not collected as an exercise in crime metrics, but rather as part of one avenue of attack against the crimes they represent, an issue I addressed in this law journal article. However, I have studied each IC3 annual report and am satisfied that collectively they reflect real world trends in cybercrime's impact on victims, as measured by direct monetary loss (the psychological impact of internet crime creates other costs, to victims and society, but so far we have done a woefully poor job of measuring those).

Note:
If you found this page interesting or helpful or both, please consider clicking the button below to buy me a coffee and support a good cause, while fueling more independent research and ad-free content like this. Thanks!
Friday, March 05, 2021
Secu-ring video doorbells and other 'smart' security cameras: some helpful links
Are you thinking of installing a video doorbell or smart security camera? Are you concerned about the security of the one you have already installed? These links should help:
How to secure your Ring camera and account
https://www.theverge.com/2019/12/19/21030147/how-to-secure-ring-camera-account-amazon-set-up-2fa-password-strength-hack
Ring security camera settings
https://www.wired.co.uk/article/ring-security-camera-settings
Video doorbell security: How to stop your smart doorbell from being hacked
https://www.which.co.uk/reviews/smart-video-doorbells/article/video-doorbell-security-how-to-stop-your-smart-doorbell-from-being-hacked-aCklb4Y4rZnw
How the WYZE camera can be hacked
https://learncctv.com/can-the-wyze-camera-be-hacked/
How to secure your WYZE security camera account
https://www.cnet.com/how-to/wyze-camera-data-leak-how-to-secure-your-account-right-now/
How to protect 'smart' security cameras and baby monitors from cyber attack
https://www.ncsc.gov.uk/guidance/smart-security-cameras-using-them-safely-in-your-home
Yes, your security camera could be hacked: Here's how to stop spying eyes
https://www.cnet.com/how-to/yes-your-security-camera-could-be-hacked-heres-how-to-stop-spying-eyes/
On a related topic, and as a way to understand how hackers look for vulnerabilities in digital devices, check out this article at Hackaday: https://hackaday.com/2019/03/28/reverse-engineering-a-modern-ip-camera/. It links to a cool, four-part reverse engineering exercise by Alex Oporto: https://dalpix.com/reverse-engineering-ip-camera-part-1
Thursday, January 28, 2021
Data Privacy Day 2021: Selected data privacy reading and viewing, past and present
For this Data Privacy Day—January 28, 2021—I have put together an assortment of items, suggested resources and observations that might prove helpful.
The first item is time-sensitive: a live streamed virtual privacy day event: Data Privacy in an Era of Global Change. The event begins at Noon, New York time, 5PM London time, and features a wide range of excellent speakers. This is the latest iteration of an annual event organized by the National Cyber Security Alliance that goes back at least seven years, each one live streamed.
The 2014 event included me on a panel at Pew Research in D.C., along with Omer Tene of the International Association of Privacy Professionals (IAPP), plus John Gevertz, Global Chief Privacy Officer of ADP, and Erin Egan, CPO of Facebook (which arranged the live streaming).
In 2015, I was on another Data Privacy Day panel, this one focused on medical data and health privacy. It featured Peter Swire, who was heavily involved in the creation of HIPAA. By request, I told the story of Frankie and Jamie, "A Tale of Medical Fraud" that involved identity theft with serious data privacy implications. Also on the panel were: Anne Adams, Chief Compliance & Privacy Officer for Emory Healthcare; Pam Dixon, Executive Director of the World Privacy Forum; and Hilary M. Wandall, CPO of Merck—the person to whom I was listening very carefully in a still from the recorded video on Vimeo (which is still online but I could not get it to play).
The second item is The Circle, both the 2013 novel by Dave Eggers—my fairly lengthy review of which appears here—and the 2017 movie starring Emily Watson and Tom Hanks, the trailer for which should be playable below.
Moving from privacy laws to privacy realities, like the intersection of privacy, poverty, and privilege, here are a number of thought-provoking articles you might want to read:
- Check your privacy privilege, by Heather Burns, 2020
- Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks, 2018 ("systematically investigates the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America").
- Privacy for Whom? Sam Adler Bell, the New Inquiry, 2018
- Why Some Women Don't Actually Have Privacy Rights, Tanvi Misra, Bloomberg, 2017
- The Poverty of Privacy Rights, Khiara M. Bridges, 2016
- A Poor Mother's Right to Privacy: A Review, Danielle K. Citron, 2018
Finally, getting back to a point raised earlier in this post, one that comes up every Data Privacy Day, here is my 2018 article "Data Privacy vs. Data Protection: Reflecting on Privacy Day and GDPR."
P.S. If you're on Twitter you might enjoy what I've been tweeting about #DataPrivacyDay.
Tuesday, January 05, 2021
AI's most troubling problem? It's made of chips and code
If we define "AI problem" as an obstacle to maximizing the benefits of Artificial Intelligence, it is clear that there are a number of these, ranging from the technical and practical to the ethical and cultural. As we say goodbye to 2020, I think that we may look back on it, in a few years' time, as the year in which some of the most serious AI problems emerged into the mainstream of public discourse. However, there is one very troubling gap in this growing awareness of AI problems, a seldom discussed problem that I present below.
Growing Doubts About AI?
As one data science publication put it, 2020 was: "marked by ethical issues of AI going mainstream, including, but not limited to, gender/race bias, police and military use, face recognition, surveillance, and deep fakes." — The State of AI in 2020.
One of the most widely discussed indicators of problems in AI in 2020 was the “Timnit Gebru incident” (More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru). This seems to be a debacle of Google’s own making, but it surfaced issues of AI bias, AI accountability, erosion of privacy, and environmental impact.
As we enter 2021, bias seems to be the AI problem that is “enjoying” the widest awareness. A quick Google search for ai bias produces 139 million results, of which more than 300,000 appear as News. However, 2020 also brought growing concerns about attacks on the way AI systems work, and the ways in which AI can be used to commit harm, notably the "Malicious Uses and Abuses of Artificial Intelligence," produced by Trend Micro Research in conjunction with the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol’s European Cybercrime Centre (EC3).
Thankfully, awareness of AI problems was much in evidence at the "The Global AI Summit," an online "think-in" that I attended last month. The event was organized by Tortoise Media and some frank discussion of AI problems occurred after the presentation of highlights from the heavily researched and data rich Global AI Index. Unfortunately, the AI problem that troubles me the most was not on the agenda (it was also absent from the Trend/UN report).
AI's Chip and Code Problem
The stark reality, obscured by the hype around AI, is this: all implementations of AI are vulnerable to attacks on the hardware and software that run them. At the heart of every AI beats one or more CPUs running an operating system and applications. As someone who has spent decades studying and dealing with vulnerabilities in, and abuse of, chips and code, this is the AI problem that worries me the most:
AI RUNS ON CHIPS AND CODE, BOTH OF WHICH ARE VULNERABLE TO ABUSE
In the last 10 years we have seen successful attacks on the hardware and software at the heart of mission critical information systems in hundreds of prestigious entities both commercial and governmental. The roll call of organizations and technologies that have proven vulnerable to abuse includes the CIA, NSA, DHS, NASA, Intel, Cisco, Microsoft, FireEye, Linux, SS7, and AWS.
Yet despite a constant litany of new chip and code vulnerabilities, and wave after wave of cybercrime and systemic intrusions by nation states—some of which go undetected for months, even years—a constantly growing chorus of AI pundits persists in heralding imminent human reliance on AI systems as though it was an unequivocally good thing.
Such "AI boosterism" keeps building, seemingly regardless of the large body of compelling evidence that supports this statement: no builder or operator of any computer system, including those that run AI, can guarantee that it will not be abused, misused, impaired, corrupted, or commandeered through unauthorized access or changes to its chips and code.
And this AI problem is even more serious when you consider that it is the one about which meaningful awareness seems to be lowest. Frankly, I've been amazed at how infrequently this underlying vulnerability of AI is publicly mentioned, noted, or addressed, where publicly means: "discoverable by me using Google and asking around in AI circles."
Of course, AI enthusiasts are not alone in assuming that, by the time their favorite technology is fully deployed, it will be magically immune to the chip-and-code vulnerabilities inherent in computing systems. Fans of space exploration are prone to similar assumptions. (Here's a suggestion for any journalists reading this: the next time you interview Elon Musk, ask him what kind of malware protection will be in place when he rides the SpaceX Starship to Mars.)
Boosters of every new technology (pun intended) seem destined to assume that the near future holds easy fixes for whatever downsides skeptics of that technology point out. Mankind has a habit of saying "we can fix that" but not actually fixing it, from the air-poisoning pollution of fossil fuels to ocean-clogging plastic waste. (I bet Mr. Musk sees no insurmountable problems with adding thousands of satellites to the Earth's growing shroud of space clutter.)
I'm not sure if I'm the first person to say that the path to progress is paved with assumptions, but I'm pretty sure it's true. I would also observe that many new technologies arrive wearing a veil of assumptions. This is evident when people present AI as so virtuous and beneficent that it would be downright churlish and immodest of anyone to question the vulnerability of their enabling technology.
The Ethics of AI Boosterism
One question I kept coming back to in 2020 was this: how does one avert the giddy rush to deploy AI systems for critical missions before they can be adequately protected from abuse? While I am prepared to engage in more detailed discussions about the validity of my concerns, I do worry that these will get bogged down in technicalities of which there is limited understanding among the general public.
However, as 2020 progressed and "the ethics of AI" began to enjoy long-overdue public attention, another way of breaking through the veil of assumptions obscuring AI's inherent technical vulnerability occurred to me. Why not question the ethics of "AI boosterism"? I mean, surely we can all agree that advocating development and adoption of AI without adequately disclosing its limitations raises ethical questions.
Consider this statement: as AI improves, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases. How ethical is it to say that, given what we know about how vulnerable AI systems will be if the hardware and software on which they run is not significantly more secure than what we have available today?
To be ethical, any pitches for AI backing and adoption should come with a qualifier, something like "provided that the current limitations of the enabling technology can be overcome." For example, I would argue that the earlier statement about medical use of AI would not be ethical unless it was worded something like this: as AI improves, and if the current limitations of the enabling technology can be overcome, doctors will be able to rely upon AI systems for faster diagnosis of more and more diseases.
Unlikely? Far-fetched? Never going to happen? I am optimistic that the correct answer is no. But I invite doubters to imagine for just a moment how much better things might have gone, how much better we might feel about digital technology today, if previous innovations had come with a clear up-front warning about their potential for abuse.
[Image: 40 digital technologies open to abuse]
In a few weeks I will have some statistics to share about the general public's awareness of AI problems. I will be sure to provide a link here. (See: AI problem awareness grew in 2020, but 46% still “not aware at all” of problems with artificial intelligence.)
In the meantime, I would love to hear from anyone about their work, or anyone else's, on the problem of defending systems that run AI against abuse. (Use the Comments or the contact form at the top of the page, or DM @zobb on Twitter.)
Notes:
If you found this article interesting or helpful or both, please consider clicking the button below to buy me a coffee and support a good cause, while fueling more independent research and ad-free content like this. Thanks!
Thursday, December 31, 2020
Cybersecurity had a rough 2020, but 50 recent headlines suggest the outlook for 2021 could be even worse
Sadly, my annual outlook for cybersecurity has, for the past 20 years, been this: "things will get worse before they get better."
In this context, "the outlook for cybersecurity" is the expected performance of efforts to defend information systems from abuse, as measured by the amount of system abuse that occurs despite those efforts.
If you boil the cybersecurity outlook down to a single question, it is this: will criminal acts targeting digital systems and the data they process cause more harm next year than they did this year?
On the right you can see just one measure of such harm: a dollar figure for internet crime losses reported to IC3 and the FBI. The losses recorded in this metric hit $3.5B in 2019.*
I predict that for 2020, the IC3/FBI report will show around $4.7B in losses, barring significant changes to the report's methodology. I further predict that the number will reach $6B in 2021.
Of course, I could be wrong, and I sincerely hope that the losses turn out to be lower than my predictions. What I can promise is that I will post the 2020 number as soon as it is published (about 45 days from now, if the Biden-Harris administration sticks to the traditional schedule).
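The post doesn't say how the $4.7B and $6B figures were arrived at, but a simple compound-growth extrapolation from the reported 2019 figure lands close to them. The ~33% annual growth rate below is my own illustrative assumption, not a number taken from the IC3 reports:

```python
# Illustrative sketch only: project IC3-reported internet crime losses
# forward at a constant year-over-year growth rate. The 2019 base figure
# is the one cited above; the growth rate itself is an assumption.

def extrapolate(base: float, rate: float, years: int) -> list[float]:
    """Return `years` successive projections from `base` at compound `rate`."""
    projections = []
    value = base
    for _ in range(years):
        value *= rate
        projections.append(value)
    return projections

losses_2019 = 3.5  # billions of USD in losses reported to IC3/FBI in 2019
# Assuming roughly 33% annual growth yields about $4.7B for 2020 and
# about $6.2B for 2021, in the ballpark of the predictions above.
for year, loss in zip((2020, 2021), extrapolate(losses_2019, 1.33, 2)):
    print(f"{year}: ${loss:.2f}B")
```

Whether losses actually compound at a constant rate is, of course, exactly the kind of assumption the real-world numbers will test.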
One way of looking at the problem
Regardless of the IC3/FBI numbers for 2020, I think that criminal acts targeting digital systems and the data they process will cause more harm in 2021 than they did this year. And I say that despite 2020 being quite an unusual year, what with all the cybercrime that leveraged the pandemic, the presidential election in the US, and the massive Russian SolarWinds breaches.
The rest of this blog post is just one way of documenting why my outlook is bleak (I am working on a longer article about the history of my "will get worse before it gets better" perspective). What you have here are 50 cybersecurity headlines that I noticed during the last 30 days of 2020. These are not ALL the cybercrime headlines from December 2020; they are just a sample, plucked from one of the best cybersecurity "feeds" that I have found: InfoSecSherpa's Newsletter (subscription strongly recommended).
This daily email newsletter is produced by @InfoSecSherpa who pledges to provide: "a daily summary of 10 Information Security news items that aren't necessarily getting a lot of attention." So, here are 50 items I picked out to reflect the range of cyber-criminal activity currently taking place. I'm not saying that you should read them all. I think a quick scan will make my point:
- Fresh Card Skimmer Attacks Multiple E-Commerce Platforms
- Massive Cyber Attack Takes Down Major German Newsgroup
- Kawasaki Heavy Industries reports data breach as attackers found with year-long network access
- Cruise Ships Forced to Cancel Sailings Due to Possible Cyberattack
- Vietnam targeted in complex supply chain attack
- 'Serious attack on our democracy': Cyber strike hits Finnish MPs
- REvil hackers to leak photos of plastic surgery patients after massive hack
- VOIP hardware and software maker Sangoma struck by ransomware attack
- Hackers Tapped Microsoft Resellers To Gain Access
- Rakuten exposes 1.48 million sets of data to access from outside
- Pension Plan Personal Data Breached, Third-Party Blamed
- Russian crypto-exchange Livecoin hacked after it lost control of its servers
- Major Swedish firms suffer prolonged malware attack
- Emotet Returns to Hit 100K Mailboxes Per Day
- U.S. Cyber Agency: SolarWinds Attack Hitting Local Governments
- Credential phishing attack impersonating USPS targets consumers over the holidays
- Japanese Companies Fall Victim To Unprecedented Wave of Cyber Attacks
- Louisville PVA office temporarily closes due to a cyber threat
- Treasury Dept. email accounts were compromised in hack blamed on Russia
- Iranian hackers hit Israel aerospace industries
- iPhones vulnerable to hacking tool for months, researchers say
- Two Rubygems Infected With Crypto-Stealing Feature Malware
- Ransomware Attackers Using SystemBC Malware With Tor Proxy
- Cybercrime: Fake call centre duping foreign nationals busted in Delhi, 54 arrested
- House purchases in Hackney fall through following cyber attack against council
- Print security is the remote working cyber risk very few saw coming
- Poland, Lithuania are targets of cyber disinformation attack
- Norwegian cruise liner Hurtigruten sustains cyber attack
- Port of Kennewick crippled by cyberattack
- Two Indian banks affected by Windows ransomware attacks
- Iran suspected after massive cyberattack on Israeli firms revealed
- Files expose mass infiltration of UK firms by Chinese Communist Party
- Subway customers receive 'malware' emails
- KC suburb spent millions on cyber security protections but still got hit by ransomware
- Ransomware Attacks Hitting Vulnerable MySQL Servers
- Hackers leak data from trucking firm Cardinal Logistics
- Adrozek Malware Delivers Fake Ads to 30K Devices a Day
- New Malware Arsenal Abusing Cloud Platforms in Middle East Espionage Campaign
- Springfield Public Schools servers back to normal after October cyberattack that put abrupt pause to remote learning
- Ransomware gangs are now cold-calling victims if they restore from backups without paying
- Middle East facing 'cyber pandemic' as Covid exposes security vulnerabilities, cyber chief says
- Vancouver Metro Disrupted by Egregor Ransomware
- 113,000 Alaskan voter IDs exposed in data breach
- Data of 243 million Brazilians exposed online via website source code
- Cyberattacks Discovered on Vaccine Distribution Operations
- Brazilian aerospace firm Embraer hit by cyberattack
- Malware may trick biologists into generating dangerous toxins in their labs
- Spoofed FBI Internet Domains Pose Cyber and Disinformation Risks
- Cyber attacks against vaccine makers rise
- MacOS Users Targeted By OceanLotus Backdoor
These headlines paint a picture of rampant criminal activity abusing all manner of digital technology in all regions of the world, across all sectors of human endeavor, including education, research, medicine, healthcare, pharmaceuticals, heavy industry, light industry, commercial shipping, recreational shipping, retail, banking, software, hardware, the media, local government, state government, and national government.
These headlines also document the main reason that I think the harm caused by such activity in 2021 will be even greater than in 2020: whatever deterrents there are to people continuing to engage in this type of activity, they are clearly not working. And in 2021 there will be more people than ever with both the motive and means to engage in cybercrime, and more opportunities than ever to commit cybercrime.
- Motive increase: widespread pandemic-related economic hardship
- Means increase: constantly improving cybercrime skills and tools, increasingly accessible (e.g. crime-as-a-service)
- Opportunities increase: more devices and data, in more locations, performing increasingly valuable functions
As 2021 rolls on I will continue to document the scale of the cybersecurity challenge as I see it. For now, let me extend a massive THANK YOU to all the dedicated and righteous souls who labored so hard in 2020 to fend off the bad actors.
Is there any room for optimism in 2021? Maybe, if the Biden-Harris administration is allowed to get on with the job of instigating major improvements in globally coordinated cybercrime deterrence. (And to be clear, I do sincerely hope that six months from now reality will show that my current outlook was overly pessimistic.)
In any event, here's to "cyber" becoming way less crimey in 2021. Happy New Year!
Notes
* While IC3 is the source of the numbers in the graph, IC3 has not—to my knowledge—published them in a graph; in other words, I built the graph from their numbers. And I know that the IC3 numbers are by no means perfect crime metrics; they are based on data accumulated as a by-product of one avenue of response to the crimes they represent. However, I have studied each of the annual reports and I am satisfied that collectively they provide solid evidence of a real-world cybercrime impact trend that looks very much like the line shown in the graph. For more on issues with cybercrime measurement, see my article in the Journal of National Security Law & Policy: Advancing Accurate and Objective Cybercrime Metrics.