Wednesday, April 12, 2023

What ChatGPT is and how AI can get things wrong: an annotated example using jackware

An example of ChatGPT giving a wrong answer
You can't trust what ChatGPT says
ChatGPT is, as you probably know, a computer system that uses artificial intelligence (AI) to answer questions. Sometimes the answers it gives are wrong, and that's the short version of this article. The long version explains more about what ChatGPT is, with a detailed look at an example of how wrong it can be. 

Here's how Mashable describes ChatGPT in Everything you need to know about ChatGPT: "in essence, a simple online artificial intelligence chatbot created by OpenAI in December 2022." Technically speaking, ChatGPT describes itself as "a language model developed by OpenAI, based on the GPT (Generative Pre-trained Transformer) architecture...designed to understand and respond to natural language queries and generate human-like text in response." 

If you check out ChatGPT online it can come across as a chatty version of a search engine, but it can do things that search engines don't, such as put together plausible explanations of phenomena, as opposed to simply listing search results relating to them. For example, suppose you encounter the word jackware and wonder what it means. You could put the word into a search box and get a bunch of responses, like this:
If you are lucky, one or more of the search results will give you an idea of what the search term means. In this case we can see that "The goal of a jackware attack is to lock up the hijacked device until you pay" (which happens to be correct). 

But what if you're writing an article or essay about jackware and want to know who came up with the term and when it was coined? I couldn't find that information in the search results for jackware, and looking for it took time. In my experience, search engines often fail to answer questions like this, as you can see from the following example. When asked who coined the term jackware, Google provides one link to a story about jackware, but all the rest of the results—many thousands of them—assume that jackware is just a typo for Jaguar, as in the car, not the animal:
Screenshot of Google assuming jackware is a typo for Jaguar
Personally, I find the jackware/jaguar assumption very amusing—for reasons I will explain later—but first, let's see how ChatGPT gets on when asked the same question: Who coined the term jackware? Here's the answer ChatGPT provides:

The term "jackware" was coined by two researchers, Stefan Savage and Tadayoshi Kohno, in a 2016 paper titled "Experimental Security Analysis of a Modern Automobile". In the paper, Savage and Kohno described a proof-of-concept attack in which they were able to take control of a car's electronic systems using a laptop computer. They used the term "jackware" to describe a new class of malware that could be used to take control of vehicles in a similar way to how ransomware is used to take control of computer systems.

Frankly, at first glance this looks like a very impressive answer: it names names and cites an academic source, while providing a helpful description of jackware, all articulated in grammatically correct English. It's way more helpful than the output of a plain old search. You can see why so many people get so excited about artificial intelligence. 

But sadly, several key facts in ChatGPT's answer are just plain wrong, so wrong that you could be in serious trouble with your teacher, editor, or readers if you pasted that paragraph of ChatGPT output into a piece of work. For a start, anyone familiar with the subject matter would know that you hadn't read that paper.

The academic paper to which ChatGPT refers was published in 2010, not 2016. You can see that from this listing of the paper. This is not just a pedantic quibble; the named paper is legendary in the world of automotive cybersecurity, partly because it was published way back in 2010. It documents groundbreaking work done by Savage et al. in the 2000s, well before the flashy Jeep hack of 2015 by Miller and Valasek.

More blatantly erroneous is the identification of this 2010 paper and its authors as the source of the term jackware. Simply put, the paper does not contain the word jackware. In fact, the person who coined the term jackware to describe malicious code used to take over vehicles was me, Stephen Cobb, and I did that in May of 2016, on this blog, in a post titled: Jackware: coming soon to a car or truck near you? 

In July of 2016, I penned Jackware: When connected cars meet ransomware for We Live Security, the award-winning global cybersecurity blog. As further evidence, I present exhibit A, which shows how you can use iterative time-constrained searches to identify when something first appears. Constraining the search to the years 1998 to 2015, we see that no relevant mention of jackware was found prior to 2016. Apparently, jackware had been used as a collective noun for leather mugs, but there are no software-related search results before 2016. Next you can see that, when the search is expanded to include 2016, the We Live Security article tops the results.
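If you want to try this kind of first-use check programmatically, here is a minimal sketch in Python. The search_hits function is a hypothetical stand-in, and its results are simulated with canned data that mirrors the jackware timeline above; in real use you would point it at a search engine that supports date-range filtering.

    from typing import Optional

    # A minimal sketch of an iterative, time-constrained search.
    # search_hits() is a hypothetical stand-in: real use would query a
    # search engine with a date-range filter. Here it is simulated with
    # the jackware timeline (no software-related hits before 2016).

    def search_hits(term: str, start_year: int, end_year: int) -> int:
        """Simulated hit count for term restricted to [start_year, end_year]."""
        FIRST_RELEVANT_YEAR = 2016  # assumption baked into the simulation
        return max(0, end_year - max(start_year, FIRST_RELEVANT_YEAR) + 1)

    def earliest_mention(term: str, start_year: int, end_year: int) -> Optional[int]:
        """Widen the search window one year at a time until hits appear."""
        for year in range(start_year, end_year + 1):
            if search_hits(term, start_year, year) > 0:
                return year  # first year in which the term shows up
        return None

    print(earliest_mention("jackware", 1998, 2023))  # prints 2016 in this simulation

A binary search over the year range would need fewer queries against a real search engine, but the linear scan above is easier to follow.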

So how did ChatGPT get things so wrong? The simple answer is that ChatGPT doesn't know what it's talking about. What it does know is how to string relevant words and numbers together in a plausible way. Stefan Savage is definitely relevant to car hacking. The year 2016 is relevant because that's when jackware was coined. And the research paper that ChatGPT referenced does contain numerous instances of the word jack. Why? Because the researchers wisely tested their automotive computer hacks on cars that were on jack stands.
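To make that failure mode concrete, here is a deliberately tiny toy: a bigram (word-pair) model that chains statistically plausible next words together. This is emphatically not how GPT works internally (GPT is a large neural transformer trained on vast amounts of text), and the miniature corpus below is invented for illustration, but it shows how fluent output can emerge with no notion of truth.

    import random
    from collections import defaultdict

    # Toy bigram model: picks the next word because it is statistically
    # plausible, not because it is true. The corpus is invented for
    # illustration and loosely mirrors the facts discussed in this post.
    corpus = (
        "jackware is malware that takes control of vehicles . "
        "ransomware is malware that locks up computer systems . "
        "savage published research on automobile security in 2010 . "
        "the term jackware was coined in 2016 ."
    ).split()

    # Record which words have been seen following which.
    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    def generate(start: str, length: int = 12) -> str:
        """Chain plausible next words together; no fact-checking anywhere."""
        word, out = start, [start]
        for _ in range(length):
            candidates = next_words.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # plausible, not verified
            out.append(word)
        return " ".join(out)

    random.seed(1)
    print(generate("jackware"))
    # Output is fluent but can mash up unrelated facts, for example tying
    # the 2010 research to the 2016 term, much like ChatGPT's misattribution.

Scale that idea up by many orders of magnitude, swap raw counts for a neural network, and you get prose that is confident and plausible rather than verified.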

To be clear, ChatGPT is not programmed to use a range of tools to make sure it is giving you the right answer. For example, it didn't perform an iterative time-constrained online search like the one I did in order to find the first use of a new term. 

Hopefully, this example will help people see what I think is a massive gap between the bold claims made for artificial intelligence and the plain fact that AI is not yet intelligent in a way that equates to human intelligence. That means you cannot rely on ChatGPT to give you the right answer to your questions. 

So what happens if we do get to a point where people rely—wisely or not—on AI? That's when AI will be maliciously targeted and abused by criminals, just like every other computer system, something I have written about here.

Ironically, the vulnerability of AI to abuse can be both a comfort to those who fear AI will exterminate humans, and a nightmare for those who dream of a blissful future powered by AI. In my opinion, the outlook for AI, at least for the next few decades, is likely to be a continuation of the enthusiasm-disillusionment cycle, with more AI winters to come.


Note 1: For more on those AI dreams and fears, I should first point out that they are based on expectations that the capabilities of AI will evolve from their current level to a far more powerful technology referred to as Artificial General Intelligence or AGI. For perspective on this, I recommend listening to "Eugenics and the Promise of Utopia through Artificial General Intelligence" by two of my Twitter friends, @timnitGebru and @xriskology. This is a good introduction to the relationship between AI development and a bundle of beliefs/ideals/ideas known as TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism.

Note 2: When I first saw Google assume jackware was a typo for Jaguar I laughed out loud because I was born and raised in Coventry, England, the birthplace of Jaguar cars. In 2019, when my mum, who lives in Coventry, turned 90, Chey and I moved here, and that's where I am writing this. Jaguars are a common sight in our neighbourhood, not because it's a posh part of the city, but because a lot of folks around here work at Jaguar and have company cars.