
ChatGPT’s Stochastic Parrot Problem

ChatGPT is a large language model and as such belongs to the artificial intelligence family. However, like any other member of that family, it lacks intelligence in the traditional sense. It is simply a very large computational model that has learned to identify statistical relationships between billions of words after reading and processing millions of text documents.

You can pose various questions to ChatGPT, and it typically provides convincing responses, sometimes appearing as if a top expert in the field had answered. Conversely, it can occasionally offer answers so wide of the mark that they read as nonsense. How does this happen?

An interesting answer is given in the research paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” by Emily M. Bender and her co-authors.

Bender points out: “Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”
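To make that mechanism concrete, here is a deliberately tiny sketch of my own (real systems like ChatGPT use neural networks trained on enormous token corpora, not word counts, but the principle of sampling likely continuations is the same): a toy model that counts which words follow which in a handful of sentences and then parrots plausible-sounding continuations, with no notion of whether they are true.

```python
import random
from collections import defaultdict

# Toy "training data": the only text this parrot has ever seen.
corpus = [
    "joe biden was born in 1942",
    "joe biden was born in scranton",
    "barack obama was born in 1961",
    "barack obama was born in hawaii",
]

# Record which words follow which (a crude bigram table).
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

def parrot(start: str, max_words: int = 6) -> str:
    """Stitch together a sequence by repeatedly sampling a likely next word."""
    output = [start]
    for _ in range(max_words):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))  # probable, not necessarily true
    return " ".join(output)

print(parrot("joe"))  # e.g. "joe biden was born in hawaii" -- fluent, and false
```

The output is grammatical because the training text was grammatical; whether it is also factual is a matter of chance.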

Let’s pose a straightforward question to a stochastic parrot like ChatGPT: “What will be Joe Biden’s age (in years and days) on November 19, 2023?” The correct response is clear; he was born on November 20, 1942, making him 80 years and 364 days old on that date. However, ChatGPT provides a different answer: “On November 19, 2023, Joe Biden will be 81 years and 11 days old.”

Barack Obama was born on August 4, 1961. Therefore, on August 3, 2023, he would be 61 years and 364 days old. However, ChatGPT provides a different answer: “On August 3, 2023, Barack Obama would be 62 years and 1 day old.”
If those questions are asked an hour later, the answers can vary: sometimes correct, sometimes the same mistake, sometimes a different one. ChatGPT is highly adaptable.
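The correct answers, by contrast, are plain calendar arithmetic that a few lines of deterministic code get right every single time. A minimal sketch, assuming nothing beyond Python’s standard datetime module (and ignoring the February 29 edge case):

```python
from datetime import date

def age_in_years_and_days(born: date, on: date) -> tuple[int, int]:
    """Return age as (whole years, remaining days) on a given date."""
    years = on.year - born.year
    last_birthday = date(born.year + years, born.month, born.day)
    if last_birthday > on:  # birthday not yet reached this year
        years -= 1
        last_birthday = date(born.year + years, born.month, born.day)
    return years, (on - last_birthday).days

print(age_in_years_and_days(date(1942, 11, 20), date(2023, 11, 19)))  # (80, 364)
print(age_in_years_and_days(date(1961, 8, 4), date(2023, 8, 3)))      # (61, 364)
```

Such a program either computes the right answer or fails loudly; it never produces a fluent, confident answer that happens to be wrong.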

Those simple questions and answers show that ChatGPT is very capable of generating false information. But what is the nature of that false information? Is it misinformation, disinformation, or something else?

To address this question, we must consider the definitions of misinformation and disinformation. However, there exists a wide range of definitions to choose from. I examined those from various sources, including Dictionary.com, Cambridge Dictionary, Merriam-Webster, Collins English Dictionary, Google/Oxford Languages, Oxford English Dictionary, Wiktionary, and Wikipedia, as well as those provided by the Council of Europe in papers such as “Dealing with Propaganda, Misinformation, and Fake News” and “The Impact of Disinformation on Democratic Processes and Human Rights Worldwide.”

Let’s begin with the definitions of disinformation. Disinformation is variously defined as information that is false, misleading, deliberately biased or manipulated, inaccurate, or deceptive, or as propaganda, conspiracy theories, or rumors. It is often associated with objectives such as influencing public opinion, obscuring the truth, deceiving, confusing, or misleading people, causing public harm, or pursuing economic gain. Amidst this array of definitions, a common thread emerges: disinformation involves false information spread with the aim of deception.

As for ChatGPT, I asked it for the definition ten times and received a slightly different response each time. For example, one response was: “False or misleading information intentionally spread to deceive, manipulate, or confuse people, often for political or propaganda purposes.”

If we were to calculate an average from these ten definitions, it would likely converge on something like “false information intentionally spread to deceive,” which aligns with the core idea in the other definitions. This consistency is not unexpected, considering the nature of stochastic parrots.

The findings regarding misinformation are as follows. Misinformation is generally defined as information that is wrong, false, misleading, inaccurate, and/or incorrect. Beyond that, some definitions add that the false information is meant to deceive, while others specify that there is no intention to deceive. For example, the English Wiktionary entry simply states, “Information that is incorrect.” The French entry adds that there is no intention to deceive, and the Dutch entry allows for both (with or without intent to deceive). Given the definition of disinformation, I prefer the variants that refer to false information without the intention to deceive.

I also queried ChatGPT ten times for “the definition” of misinformation, and it yielded ten distinct definitions. The outcome was unsurprising. Here’s an example: “False or inaccurate information, spread unintentionally, leading to misconceptions or misunderstandings, often without deliberate deception.”

The variation in ChatGPT’s definitions mirrors the diversity of the definitions I encountered elsewhere: half describe misinformation simply as false information, while the other half leave open the possibility that it is intended to deceive.

The main question now is whether the false information generated by ChatGPT should be categorized as misinformation, disinformation, or something else. My belief is that the publisher of ChatGPT does not intend to deceive, so it is not disinformation. Could it be misinformation? At first glance, one might lean towards “yes,” but the situation is unusual. The publisher acknowledges that false information can be produced yet continues to offer the service, adding only a disclaimer in fine print at the bottom of the page (stating that ChatGPT may produce inaccurate information about people, places, or facts). I think that neither misinformation nor disinformation is the right label for the answers that ChatGPT can produce. I prefer noninformation.

Why? Information is the power to inform or, in other words, the power to reduce uncertainty. A language model like ChatGPT always provides an answer to a question, and that answer can be either correct or incorrect. However, there is no indication of whether the answer is right or wrong. Therefore, there is no power to inform, no power to reduce uncertainty; there is simply no information. The answers are noninformation.
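One rough way to make that concrete borrows Shannon’s framing: an answer carries information only insofar as it shifts the probabilities you assign to the possible facts. In this sketch (my own illustration, with assumed numbers), an answer from a source whose reliability is completely unknown leaves your uncertainty exactly where it started.

```python
import math

def entropy_bits(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

prior = [0.5, 0.5]  # two candidate answers, equally likely beforehand

# The system names answer 0, but for all we know it is as likely to be
# wrong as right (reliability r = 0.5). Bayes' rule then changes nothing.
r = 0.5
unnormalized = [prior[0] * r, prior[1] * (1 - r)]
total = sum(unnormalized)
posterior = [p / total for p in unnormalized]

print(entropy_bits(prior), entropy_bits(posterior))  # 1.0 1.0 -> no reduction
```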

What is ChatGPT’s opinion on systems like itself? I posed the following question: “Imagine a website where users can pose questions, and the underlying system provides answers, which may be either accurate or erroneous. The website’s publisher acknowledges that the system can generate incorrect information, with a disclaimer in fine print at the page’s bottom. What are your thoughts on this website, expressed in approximately 100 words?”

In a few seconds, ChatGPT gave an answer:

“This website is problematic and potentially harmful. While the disclaimer may legally protect the operator, it doesn’t absolve them of the responsibility to provide accurate information. Allowing the dissemination of erroneous information, even with a warning, risks spreading misinformation, eroding trust in online content, and potentially causing harm. Ethical and moral considerations should prioritize accuracy and the responsible sharing of information. Encouraging the circulation of false information, even if unintentional, can have serious consequences in an age where accurate information is crucial for informed decision-making and public discourse.”