Wednesday, November 29, 2023

QR code abuse 2012-2023

[Image: QR code scam with three QR codes]
QR code abuse is in the news again—see the list of headlines below—which reminds me that I first wrote about this in 2012, eleven years ago. Back then I made a short video demonstrating one potential type of abuse: tricking people into visiting a malicious website:


As you can see from the video, both QR and NFC technology offer plenty of potential for hijacking and misdirection, and that potential has existed for over a decade. In fact, this is a great example of how a known technology vulnerability can linger untapped for years before all the factors leading to active criminal exploitation align.
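The core of the misdirection risk is that a scanned QR code yields an arbitrary URL, and lookalike hostnames (such as "paypa1.com" or "paypal.com.evil.example") are easy to miss on a phone screen. As a minimal sketch of the kind of check a scanning app could make before opening a decoded URL, here is a hypothetical helper (the function name and the example domains are mine, not from any real scanner):

```python
from urllib.parse import urlparse

def is_expected_domain(url: str, expected: str) -> bool:
    """Return True only if the URL's host is the expected domain
    or a true subdomain of it. Lookalikes such as 'paypa1.com' or
    'paypal.com.evil.example' fail the check."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    expected = expected.lower()
    return host == expected or host.endswith("." + expected)

print(is_expected_domain("https://www.paypal.com/login", "paypal.com"))            # prints True
print(is_expected_domain("https://paypal.com.evil.example/login", "paypal.com"))   # prints False
```

Note that the suffix check requires the leading dot, so "paypal.com.evil.example" is rejected even though it contains the expected string; a naive substring match would wave it through.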

In other words, just because a vulnerability has not yet been turned into a common crime does not mean it never will be. The potential for ransomware attacks, for example, existed for many years before criminals turned it into a profitable business. Back in 2016, I suggested that combining ransomware with the increasing automation of vehicles would eventually lead to a form of criminal exploitation that I dubbed jackware. As of now, jackware is not a thing, but by 2026 it may well be.

Here are some recent QR code scam headlines:

Saturday, November 04, 2023

Artificial Intelligence is really just another vulnerable, hackable information system

Recent hype around Artificial Intelligence (AI) and the amazingly good and bad things it can and will do has prompted me to remind everyone that every AI is an information system, and every information system has vulnerabilities.

These vulnerabilities put AI systems at risk of exploitation and abuse for selfish ends when the ‘right’ conditions arise. As a visual aid, I put together a checklist that shows the current status of the five essential ingredients of an AI:

[Image: Checklist showing the current status of the five essential ingredients of an AI]
Please let me know if you think I'm wrong about any of those checks and crosses (ticks and Xs if you prefer). 

According to classic criminology theory, the right conditions for exploitation of an information system, such as an AI, are as follows: 
  • a motivated offender, 
  • a suitable target, and 
  • the absence of a capable guardian. 
Both good and bad uses of AI will motivate targeting by offenders. Do CEOs, many of whom are pushing their organizations to adopt AI, realise that? Do they understand that all five ingredients of AI are vulnerable? If you need to give them examples, here's a starter list:
  1. Chips – Meltdown, Spectre, Rowhammer, Downfall
  2. Code – Firmware, OS, apps, viruses, worms, Trojans, logic bombs
  3. Data – Poisoning, micro and macro (e.g. LLMs and SEO poisoning)
  4. Connections – Remote access compromise, AITM attacks
  5. Electricity – Backhoe attack, malware e.g. BlackEnergy, Industroyer
Whether or not vulnerabilities in one or more of these five ingredients are maliciously exploited depends on the risk/reward calculations with which many offenders are very familiar. If not capably guarded, vulnerabilities in AI implementations will be exploited by motivated offenders, for both "bad" and "good" ends. (For a sense of how capable guardianship in the digital realm is going, look at the rate at which losses due to Internet crime keep climbing in spite of record levels of spending on cybersecurity.)

Some offenders will try to make money attacking AI systems relied upon by hospitals, schools, companies, governments, militaries, etc. Unfortunately, the criminal infrastructure needed to monetize the exploitation of vulnerabilities in information systems already exists (see: dark web, malware-as-a-service, botnets, ransomware, cryptocurrency, etc.).

Other offenders will try to stop an AI doing things of which they don’t approve: driving cars, taking jobs, firing weapons, educating children, making movies, exterminating humans.

How, and at what level, AI should be regulated are tough questions to answer. But we can take some comfort in the likelihood that, based on what has happened to every new digital technology over the last 40 years, AI will prove vulnerable to exploitation and abuse; in other words, it is less likely to deliver either happiness or hell on earth than its techbro fans or foes expect.