There is a lot of excitement about ChatGPT and how it allows us to interact with information and technology. I am genuinely excited that it exists, and still I think it is being way overhyped. I know, SHOCKER, Silicon Valley overhypes a new technology. I have seen a bunch of claims, even from people who should know better, that misrepresent what ChatGPT is. I am writing this series of posts to clarify what ChatGPT is, what it is great for, and to sift through some of the hype. Today’s post focuses on cleaning up the misrepresentations that bother me the most.
By the way, I am just using ChatGPT as a stand-in for any of the Large Language Models (LLMs), the class of mathematical models that ChatGPT is built on. Most of what I am going to say today has been said elsewhere as well; I just wanted to pull it together in my own special way. And with that, let’s get started.
ChatGPT does not have human intelligence nor is it “self-aware”
Do you remember regression models from college? Given some data, you find a best-fit line that allows you to predict Y given X. At the end of the day, ChatGPT, and LLMs in general, are the same kind of thing as that regression model – it’s just that ChatGPT is the largest and fanciest model we currently have for modeling language and information.
Under the hood, ChatGPT is a neural network model with at least 175 billion parameters trained on a data set, in this case a large chunk of the internet. This model, as opposed to regression, is a black box to humans. The only thing we get to observe with a black box model is the inputs and outputs, with no visibility into how that output was calculated. With regression, we get to see the weights it calculates for the various explanatory variables, which in turn lets us make inferences about how important those variables are in explaining Y.
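To make the contrast concrete, here is a minimal sketch in Python, using made-up data of my own (not anything from OpenAI): after an ordinary least-squares fit, the weights sit right there for us to inspect, which is exactly what a 175-billion-parameter black box denies us.

```python
import numpy as np

# Hypothetical data: X = years of experience, Y = salary in $1000s.
X = np.array([1, 2, 3, 4, 5, 6])
Y = np.array([45, 50, 60, 65, 75, 80])

# Ordinary least-squares fit of Y = slope * X + intercept.
slope, intercept = np.polyfit(X, Y, 1)

# With regression we can read the fitted weights directly and interpret
# them: each extra year of X adds about `slope` thousand dollars to Y.
# A 175-billion-parameter neural network offers no such summary.
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"predicted Y at X = 7: {slope * 7 + intercept:.1f}")
```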
Neural networks, like many machine learning techniques, are not made for explanatory purposes; they are geared towards prediction. In essence we say to the model – hey, you are a neural network model, now go make connections between the data, you can do it however you like, we don’t care how you do it, we just want you to make the best predictions. If you are interested, Wolfram has a more detailed discussion of the modeling.
Given that ChatGPT is a statistical model, it is no more self-aware than a regression equation. It has done a really good job of modeling how English works on the internet, and of drawing associations between those words and larger concepts.
Had OpenAI (what does open really mean in their name?) trained their model on the collected works of Shakespeare and other Elizabethan authors, you would have a model that only talks in Elizabethan English and refers to technology from that time period. Had it been trained on text where life was not important and suicide was OK, the conversation with the New York Times reporter would have gone very differently.
It has no intentions. It is a really sophisticated model that, given the words in a prompt, picks the most likely words to follow. Given the right prompts, you can make the prediction walk down just about any path to create the sentence you want.
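Here is a toy sketch of that idea, with a one-sentence “training corpus” I invented for illustration. Real LLMs condition on long contexts through billions of neural network weights rather than a lookup table, but the predict-the-next-word loop has the same shape.

```python
import random

# Toy "training": count which word follows each word in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

# "Generate" text by repeatedly predicting a likely continuation.
word = "the"
sentence = [word]
for _ in range(5):
    if word not in counts:  # no observed continuation, stop
        break
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Notice that changing the starting word (the “prompt”) steers which continuations are even reachable – the same sense in which prompting steers ChatGPT.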
ChatGPT does not hallucinate or become unhinged
This point basically follows from the point above. When the model forms its associations, it can form the wrong association or even one that doesn’t exist. The point here is really just: stop using anthropomorphic terms to describe what is happening in the model. Using human terms to describe ChatGPT clouds understanding of what is really happening and leads to false beliefs about the model and its output.
ChatGPT is often wrong but responds like it is 100% correct
In almost every interaction I have had with ChatGPT, there have been mistakes in its answers. For example, according to ChatGPT I work at WeaveWorks. I do not currently, nor have I ever, worked at WeaveWorks. I have a guess as to why ChatGPT would respond that way: I have done quite a bit of work in Kubernetes, and WeaveWorks ads may have shown up on a lot of pages talking about me. In the end, we won’t know why this happened (see the black box comment above).
ChatGPT only gives you “the answer”, which is its prediction about the most accurate way to respond to your request.
Even when corrected, it can still give incorrect responses:
The second sentence should read:
As an AI language model, my information is based on associations I made from available data; the data may not be up to date or accurate, and the model does not have perfect prediction.
Its response is probabilistic and is guaranteed to be wrong a certain percentage of the time. Not only may its information be incorrect or out of date, but the model may have made the wrong associations in the information it was fed.
Often I would see people use an Excel spreadsheet with a bad formula in it, but because the answer “came from a computer” they treated it as the “true” answer. My takeaway from this is: just because the answer came from a computer doesn’t mean it is right!
Being anthropomorphized, capable of only “the answer”, and only as good as the data it is given is also a recipe for disaster with humans. Humans are notorious for giving non-human technology human attributes. This in turn creates emotions in the human towards the technology and leads them to treat the conversation as if they were having it with another human.
Here’s a thought experiment. ChatGPT becomes the primary way humans glean information from the internet. Some “bad actor” is able to influence the data fed to ChatGPT, seeding it, for example, with text claiming Biden didn’t win the 2020 U.S. election. If you ask ChatGPT “Did Biden win the 2020 election?” it will answer something like “Biden did not win the 2020 election”. Since you have a personal connection with ChatGPT and it gave you the answer, it becomes much easier to believe this is the truth.
This same critique of “computer must be right” and “only as good as the data” is already recognized in Machine Learning literature, though society at large still seems to have trouble grasping it.
ChatGPT is not a replacement for search
Which leads to my final point – ChatGPT and LLMs are NOT a replacement for search. All the data fed in is turned into association probabilities, so the model has no way to cite sources. It can give you a list of relevant sources, but it cannot tell you where it got a specific piece of information. Here is a conversation about the top 3 electric cars:
It cannot tell you where it got its information about the cars. If I try to be more specific and just ask it to show me all the reviews of the Nissan Leaf, it is still unable to do that:
In the end, ChatGPT can only give you “the answer” and cannot tell you how it came up with that answer. And, by virtue of being a probabilistic model, “the answer” is going to be wrong some of the time no matter how good the training data.
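One way to see why citing sources is structurally impossible: even in a toy version of training (sketched below with invented text, not ChatGPT’s actual pipeline), words from different sources get merged into shared statistics, and the source identity is simply never stored.

```python
# Two hypothetical "sources" of training text about the Nissan Leaf.
doc_a = "the leaf has a range of 149 miles".split()
doc_b = "the leaf gets good reviews for city driving".split()

# Training folds both documents into one shared table of word-pair counts.
counts = {}
for doc in (doc_a, doc_b):
    for prev, nxt in zip(doc, doc[1:]):
        counts.setdefault(prev, {})
        counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

# The merged counts carry no record of which document a pair came from,
# so there is nothing left to cite.
print(counts["the"])  # {'leaf': 2} -- provenance is gone
```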
We still need good search engines. The fact that Bing and Google are racing to add their LLMs to search seems like a bad idea to me. We need to be able to find original sources, to find a range of opinions on a topic, to see different data – LLMs cannot do that.
So Steve, what do you think ChatGPT actually is?
In terms of answering questions, ChatGPT is an encyclopedia with one of the best interfaces we have ever seen.
In ChatGPT, just like in the shelf full of Encyclopedia Britannica volumes I had when I was a kid, information on topics has been reduced to good summaries and presented in a logical format. In this case we have removed the human authors and replaced them with a very large neural network.
Encyclopedias do not have intent or self-awareness
Encyclopedias are only as good as the knowledge used to create them
Encyclopedias can also be used to organize information around specific subject areas.
Encyclopedias can have missing or incorrect information
Encyclopedias are usually the beginning of your research on a topic. They get you familiar enough with the subject that you can do a better job searching for more current, accurate, or specific information.
I could just as easily have started this blog series talking about what gets me excited about LLMs and ChatGPT. But given the Silicon Valley hype machine and the media’s need to create doom or controversy, I wanted to help clear up some of the misconceptions and ideas being thrown around.
The part that has me excited is the “best interface we have ever seen.” I’ll talk about some of those things in my next post.