I'm sure there are many profound things that can be said about AI, a technology for which humanity has great hopes but also intense loathing. That makes it easy to miss the point I like to make at the beginning of my classes on AI and cybersecurity: AI is chips and code fed by data, connections, and electricity.
In other words, while AI might seem to be an all-knowing, almost living thing, every instance of AI is a vulnerable, hackable collection of hardware and software, useless without electricity, and prone to errors and deceptions committed by its makers: us humans.
Here is some of the work I have done on AI:
- Article on Medium in 2021 that highlights elevated AI risk awareness in non-white females: AI problem awareness grew in 2020, but 46% still “not aware at all” of problems with artificial intelligence
- Article on LinkedIn in 2024 about AI errors in citations (cf. MAHA report): Is your AI lying or just hallucinating?
- Blog post in 2025 on unsolicited AI content ingestion: AI turned my 6,000 word academic paper into a 5-minute podcast, without asking
- YouTube video highlighting, with humour, the persistence of AI errors: How AI gets things wrong, repeatedly: a personal example
- Blog post exploring AI error propagation as exemplified by Google AI Overview
To be clear, I do see value in numerous software tools currently marketed and referred to as AI or AI-enabled, but I also have serious reservations about AI in general and many of the ways in which it is being developed, hyped, deployed, and used, misused, and abused.
Any further work on AI by Stephen Cobb will be listed here.