Recent hype around Artificial Intelligence (AI), and the amazingly good and bad things it can and will do, has prompted me to remind everyone that every AI is an information system, and every information system has vulnerabilities.
These vulnerabilities put AI systems at risk of exploitation and abuse for selfish ends when the 'right' conditions arise. As a visual aid, I put together a checklist showing the current status of the five essential ingredients of an AI.
Please let me know if you think I'm wrong about any of those checks and crosses (ticks and Xs if you prefer).
According to classic criminology theory (routine activity theory), the right conditions for exploitation of an information system, such as an AI, are as follows:
- a motivated offender,
- a suitable target, and
- the absence of a capable guardian.
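To make the theory concrete for the more code-minded, here is a minimal sketch of the three conditions as a single predicate. The class and function names are my own invention for illustration, not part of the theory:

```python
from dataclasses import dataclass

# A toy model of routine activity theory: exploitation becomes likely
# only when all three conditions converge. Names are hypothetical.

@dataclass
class Situation:
    motivated_offender: bool
    suitable_target: bool
    capable_guardian: bool

def exploitation_likely(s: Situation) -> bool:
    # An offender with motive, a target worth attacking,
    # and nobody capably guarding it.
    return s.motivated_offender and s.suitable_target and not s.capable_guardian

print(exploitation_likely(Situation(True, True, False)))  # True
print(exploitation_likely(Situation(True, True, True)))   # False: a capable guardian breaks the triad
```

Removing any one of the three conditions flips the result, which is the whole point of the theory, and of capable guardianship.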
Both the good and the bad uses of AI will motivate offenders to target it. Do CEOs, many of whom are pushing their organizations to adopt AI, realise that? Do they understand that all five ingredients of AI are vulnerable? If you need to give them examples, here's a starter list:
- Chips – Meltdown, Spectre, Rowhammer, Downfall
- Code – Firmware, OS, apps, viruses, worms, Trojans, logic bombs
- Data – Poisoning, at both micro and macro scale (e.g. LLM training data and SEO poisoning); see the sketch after this list
- Connections – Remote access compromise, AITM (adversary-in-the-middle) attacks
- Electricity – Backhoe attacks, grid-targeting malware (e.g. BlackEnergy, Industroyer)
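To make the "Data" item concrete, here is a toy sketch of training-data poisoning. The classifier and all the numbers are invented purely for illustration; real poisoning attacks on LLMs and search rankings are far subtler, but the principle is the same:

```python
# Toy illustration of training-data poisoning. A nearest-centroid
# classifier labels a point by whichever class average is closer.
# Injecting a few mislabeled samples shifts a centroid and flips
# the verdict. All data here is made up.

def centroid(points):
    return sum(points) / len(points)

def classify(x, spam_points, ham_points):
    # Single feature: a "spamminess" score. Closer centroid wins.
    if abs(x - centroid(spam_points)) < abs(x - centroid(ham_points)):
        return "spam"
    return "ham"

spam = [8.0, 9.0, 10.0]   # clean training data: high scores are spam
ham = [1.0, 2.0, 3.0]

print(classify(6.0, spam, ham))  # "spam" -- correct on clean data

# Attacker slips a few high-scoring samples into the "ham" class.
poisoned_ham = ham + [9.0, 9.5, 10.0]

print(classify(6.0, spam, poisoned_ham))  # "ham" -- the poisoning worked
```

A handful of bad samples was enough to move the class average and change the model's behaviour, without touching the model's code at all.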
Whether or not vulnerabilities in one or more of these five ingredients are maliciously exploited depends on the risk/reward calculations with which many offenders are very familiar. If not capably guarded, vulnerabilities in AI implementations will be exploited by motivated offenders, for both "bad" and "good" ends. (For a sense of how capable guardianship is going in the digital realm, look at the rate at which losses due to Internet crime keep climbing despite record levels of spending on cybersecurity.)
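At bottom, that risk/reward calculation is an expected-value problem. Here is a toy sketch, with entirely invented probabilities and dollar figures, of how an offender might weigh it, and of how capable guardianship changes the answer:

```python
# Toy expected-value model of an offender's risk/reward calculation.
# All numbers are invented for illustration.

def expected_payoff(p_success: float, reward: float,
                    p_caught: float, penalty: float, cost: float) -> float:
    # The attack is "rational" for the offender when this is positive.
    return p_success * reward - p_caught * penalty - cost

# Weak guardianship: good odds of success, little chance of being caught.
print(expected_payoff(p_success=0.6, reward=100_000,
                      p_caught=0.05, penalty=500_000, cost=5_000))   # 30000.0

# Capable guardianship pushes the same attack into negative territory.
print(expected_payoff(p_success=0.1, reward=100_000,
                      p_caught=0.4, penalty=500_000, cost=5_000))    # -195000.0
```

Guardianship does not have to make an attack impossible, only unprofitable.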
Some offenders will try to make money by attacking AI systems relied upon by hospitals, schools, companies, governments, militaries, and so on. Unfortunately, the criminal infrastructure needed to monetize the exploitation of vulnerabilities in information systems already exists (see: the dark web, malware as a service, botnets, ransomware, cryptocurrency, etc.).
Other offenders will try to stop an AI doing things of which they don’t approve: driving cars, taking jobs, firing weapons, educating children, making movies, exterminating humans.
How and at what level AI should be regulated are tough questions to answer. But we can take some comfort in one likelihood: based on what has happened to every new digital technology in the last 40 years, AI will prove vulnerable to exploitation and abuse, and therefore too compromised and constrained to deliver either the happiness or the hell on earth that its techbro fans and foes expect.