
Saturday, November 04, 2023

Artificial Intelligence is really just another vulnerable, hackable information system

Recent hype around Artificial Intelligence (AI), and the amazing good and bad things it can and may do, has prompted me to remind the world that: 
Every AI is an information system and every information system has fundamental vulnerabilities that make it susceptible to attack and abuse.
The fundamental information system vulnerabilities exist regardless of what the system is designed to do, whether that is processing payments, piloting a plane, or generating artificial intelligence.

Fundamental information system vulnerabilities put AI systems at risk of exploitation and abuse for selfish ends when the ‘right’ conditions arise. As a visual aid, I put together a checklist that shows the current status of the five essential ingredients of an AI:

[Image: checklist showing the current status of the five essential ingredients of an AI]
Please let me know if you think I'm wrong about any of those checks and crosses (ticks and Xs if you prefer). 


Criminology, Computing, and AI

According to routine activity theory in criminology, the right conditions for exploitation of an information system, such as an AI, are as follows: 
  • a motivated offender, 
  • a suitable target, and 
  • the absence of a capable guardian. 
A motivated offender can be anyone who wants to enrich themselves at the expense of others. In terms of computer crime, this could be a shoplifter who has turned to online scamming (an example personally related to me by a senior law enforcement official in Scotland). 

In the world of computing, a suitable target can be any exploitable information system, such as the payment processing system at a retail store. (Ironically the Target retail chain was the target of one of the most widely reported computer crimes of the last ten years.) 

In the context of information systems, the absence of a capable guardian can be the lack of properly installed and managed anti-malware software, or an organization's failure to grasp the level of risk inherent in the use of digital technologies.
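Routine activity theory reduces neatly to a conjunction: exploitation becomes likely only when all three conditions hold at once. Here is a minimal sketch in Python (a toy model of my own, with invented names, not anything taken from the criminology literature):

    from dataclasses import dataclass

    @dataclass
    class Situation:
        """One information system, seen through routine activity theory."""
        motivated_offender: bool  # someone stands to profit at others' expense
        suitable_target: bool     # the system is exploitable and worth exploiting
        capable_guardian: bool    # e.g. managed anti-malware, monitoring, patching

    def exploitation_likely(s: Situation) -> bool:
        # Crime converges where a motivated offender meets a suitable
        # target in the absence of a capable guardian.
        return s.motivated_offender and s.suitable_target and not s.capable_guardian

    # An AI system holding valuable training data, poorly guarded:
    print(exploitation_likely(Situation(True, True, False)))  # True

Remove any one of the three conditions and the predicate goes false, which is why capable guardianship is the one lever defenders actually control.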

When it comes to information systems that perform artificial intelligence work, both the good and bad uses of AI will motivate targeting by offenders. The information systems at Target were hit because they contained credit card details that could be sold to people who specialize in fraudulent card transactions. An AI trained on corporate financial data could be targeted to steal or exploit that data. An AI that enables unmanned vehicles could be targeted for extortion, just as hospital and local government IT systems are targeted.

Do AI fans even know this?

One has to wonder how many of the CEOs who are currently pushing their organizations to adopt AI understand all of this. Do they understand that all five ingredients of AI are vulnerable? 

Perhaps companies and governments should initiate executive-level AI vulnerability awareness programs. If you need to talk to your execs, it will help if you can give them vulnerability examples. Here's a starter list:
  1. Chips – Meltdown, Spectre, Rowhammer, Downfall
  2. Code – Firmware, OS, apps, viruses, worms, Trojans, logic bombs
  3. Data – Poisoning, micro and macro (e.g. LLM training data and SEO poisoning; a toy sketch follows this list)
  4. Connections – Remote access compromise, AiTM (adversary-in-the-middle) attacks
  5. Electricity – Backhoe attacks; malware such as BlackEnergy and Industroyer
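To make the data poisoning item concrete, here is a toy sketch in pure Python (invented numbers, nothing like a real attack): an attacker who can flip a few training labels shifts what a simple classifier learns, and so changes how it treats new inputs. Poisoning an LLM's training corpus or a search index is far more involved, but the principle is the same.

    # Toy nearest-centroid "spam filter" showing training-data poisoning.

    def centroid(values):
        return sum(values) / len(values)

    def train(samples):  # samples: list of (score, label)
        spam = [x for x, y in samples if y == "spam"]
        ham = [x for x, y in samples if y == "ham"]
        return centroid(spam), centroid(ham)

    def classify(x, spam_c, ham_c):
        return "spam" if abs(x - spam_c) < abs(x - ham_c) else "ham"

    clean = [(0.9, "spam"), (0.8, "spam"), (0.7, "spam"),
             (0.2, "ham"), (0.1, "ham"), (0.3, "ham")]

    # The attacker relabels two spam samples as ham before training.
    poisoned = [(x, "ham") if x in (0.7, 0.8) else (x, y) for x, y in clean]

    for name, data in (("clean", clean), ("poisoned", poisoned)):
        spam_c, ham_c = train(data)
        print(name, "->", classify(0.6, spam_c, ham_c))
    # clean    -> spam
    # poisoned -> ham   (the flipped labels pulled the ham centroid upward)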
Whether or not vulnerabilities in one or more of these five ingredients are maliciously exploited depends on complex risk/reward calculations. However, execs need to know that many motivated offenders are adept at such calculations. 
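As a hedged illustration of that calculus (every figure below is invented for the sake of the example, not drawn from any real case):

    # Toy expected-value model of an offender's risk/reward calculation.
    payout = 500_000    # proceeds if the attack succeeds
    p_success = 0.3     # chance the attack works
    cost = 5_000        # tooling rented from criminal markets
    p_caught = 0.01     # chance of arrest and conviction
    penalty = 250_000   # offender's valuation of the consequences

    expected_value = p_success * payout - cost - p_caught * penalty
    print(f"Expected value to offender: ${expected_value:,.0f}")
    # Expected value to offender: $142,500 -- the attack looks "worth it"

When the guardianship terms (p_caught and penalty) are that small, the numbers favor the offender; raising them is precisely what capable guardianship is for.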

Execs also need to understand that there is an entire infrastructure already in place to monetize vulnerability exploitation. There are sophisticated markets in which to sell stolen data, stolen access, and stolen credentials, and to buy or rent the tools that do the stealing and ransoming (see dark web, malware-as-a-service, botnets, ransomware, cryptocurrency, etc.).

As I see it, unless there is a sudden, global outbreak of moral rectitude, vulnerabilities in AI systems will—if they are not capably guarded—be exploited by motivated offenders. 

[Chart: Internet crime losses reported to IC3/FBI]
For a sense of how capable guardianship is faring in the digital realm, take a look at the rate at which losses due to Internet crime have risen in the last 10 years, despite record levels of spending on cybersecurity.

Attacks will target AI systems used for both "good" and "bad" purposes. Some offenders will try to make money attacking AI systems relied upon by hospitals, schools, companies, governments, military, etc. Other offenders will try to stop AI systems that are doing things of which they don’t approve: driving cars, taking jobs, firing weapons, educating children, making movies, exterminating humans.

Therein lies one piece of good news: we can take some comfort in the likelihood that, based on what has happened to every new digital technology in the last 40 years, AI systems will prove vulnerable to exploitation and abuse, thus reducing the chances that AI will be able to wipe us all out. Of course, it also means AI is not likely to make human life dramatically better.

Note: This is a revised version of an article that first appeared in November of 2023.
