The Platform


Despite its hype, ChatGPT fails to meet expectations as an AI tool due to its inability to learn dynamically and retain new information.

Despite the enthusiastic acclaim lavished upon ChatGPT as a groundbreaking, AI-based tool of exceptional capability, I find it falls significantly short of expectations.

After conducting several tests using my name and those of others, I find it clear that ChatGPT has a long journey ahead before it can replace traditional search tools, references, and data repositories. A genuine AI tool should learn continuously from interactions, becoming more adept with each user engagement and accumulating knowledge to enhance its responses. Ideally, the more it interacts, the more it should learn and the smarter it should become.

The expectation is that the tool should store information related to user queries and be capable of integrating new data to clarify or expand on details the next time a similar question arises. Yet, my experiences suggest otherwise.

A friend of mine, more versed in AI technology thanks to his recent enrollment in an AI certification course, often jests that I am like Fred Flintstone when it comes to appreciating AI. This is inaccurate. In 2020, I wrote a detailed article titled “The Application of Artificial Intelligence to Asymmetric Warfare: Not a Silver Bullet” for the American Intelligence Journal. In it, I scrutinized the potential of AI and concluded that, like any technology, it is not a magic bullet that can resolve every military problem.

The overhyping of AI applications extends beyond the military to civilian contexts as well. My stance remains that, as with computing in the 1960s, the principle of “garbage in, garbage out” holds true. A robust AI tool should learn dynamically, devoid of human biases, functioning as an expert system without a hidden agenda.

Frankly, my impressions of ChatGPT are tepid, possibly because of the disproportionate hype. It is far from utilizing its potential fully within the framework of AI principles and applications. For instance, when queried about my professional background, ChatGPT responds with a general overview, failing to acknowledge my contributions to discussions on national security in the American Intelligence Journal. Similarly, when asked if I had written for the journal, it inaccurately reported no knowledge of my articles, even after I supplied relevant information.

This inconsistency is not an isolated incident. A friend who explored the tool with specific patent information experienced similar issues. Despite ChatGPT acknowledging his input about several patents initially, it failed to retain this information in subsequent queries. This raises significant concerns about the reliability of the tool and the nature of its updates. It appears that only the creators can meaningfully update ChatGPT, which suggests a potential for selective information management.

When you type in “Who is James Carlini?” ChatGPT’s response is:

“James Carlini is an author, speaker, and consultant known for his expertise in technology, telecommunications, and infrastructure. He focuses on the integration of technology into intelligent infrastructure and the implications for business and society. Carlini has written extensively on these subjects, contributing to discussions on how technological advancements intersect with economic and public policy issues. His notable works include articles on the application of artificial intelligence and infrastructure resilience.”

Here is what ChatGPT answers when asked, “Did James Carlini ever write any articles for the American Intelligence Journal?”:

“As of my last update in January 2022, there is no publicly available information suggesting that James Carlini has written articles for the American Intelligence Journal. James Carlini primarily writes and speaks on various technologies.”

I fed it links to several articles I have written for the American Intelligence Journal, including the AI article mentioned above, and it came back with an acknowledgment that my input was right and verified. But when I questioned it again the next day, it showed a clear case of AI dementia: it had forgotten everything it was “programmed” with the day before.

It did the same with Darrin Mylet’s patents on gaming. Here is a partial list of what he fed it:

1. System and Method for Convenience Gaming: This patent concerns a system that allows users to access various gaming applications via wireless communication devices. The system includes location determination features to prevent unauthorized transactions from certain areas (U.S. Patent 20070066401).

2. System and Method for Wireless Gaming System with User Profiles: This patent describes a gaming system that maintains user profiles, which store information about user preferences, finances, activities, and habits. It allows for enhanced personalization and security in wireless gaming (U.S. Patent 20070060305).

3. System and Method for Providing Wireless Gaming as a Service Application: This system enables users to access gaming applications over a wireless network, incorporating features to ensure secure transactions based on user location (U.S. Patent 20070060306).

It acknowledged it once, with this response:

“Darrin Mylet is an inventor associated with numerous patents, primarily in the field of wireless gaming and financial transaction systems. Here are some notable patents attributed to him.”

The list appeared, but then, on the next query, ChatGPT forgot all of them.

The next day, when you ask ChatGPT, “Does Darrin Mylet hold any patents?” it responds:

“As of my last update and based on available information, there isn’t specific and widely known public information regarding Darrin Mylet’s patent holdings. If you are looking for details on patents held by Darrin Mylet, you would typically need to search patent databases such as the United States Patent and Trademark Office (USPTO), the European Patent Office (EPO), or similar national patent offices where patent records are publicly accessible. These databases allow you to search by inventor name to determine if Darrin Mylet or anyone else has patents registered under their name.”

As of its last update? Evidently, you cannot feed its database with links, real patent info, etc. So, who is in charge of updating ChatGPT, and what will they conveniently leave out in the next update if it does not agree with some narrative they might be pushing?

I know he entered his seventeen patents into it. It acknowledged them, but then it “forgot” everything he fed it. Something is not right with that. Only the “owners” can add new knowledge to it.

A truly advanced AI should verify and integrate new data autonomously, continuously expanding its knowledge base. However, ChatGPT’s current performance reveals a significant gap between its potential and its actual capabilities. It depends on periodic updates from its developers rather than learning independently from user interactions, which casts doubt on its ability to provide accurate, up-to-date information and raises questions about potential bias in the information it presents.
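The “forgetting” described above can be illustrated with a short sketch. This is a simplified, illustrative model (the class and function names here are hypothetical, not OpenAI’s actual architecture), but it captures the widely documented design: the model’s weights are frozen after training, and the only short-term “memory” is the list of messages resent with each conversation. Start a new session, and that list is empty again.

```python
# Illustrative sketch (hypothetical names, not OpenAI's real backend):
# a chat model's weights are frozen after training, so anything a user
# "teaches" it lives only in the current session's message list.

class ChatSession:
    """Conversation state lives in the session, not in the model."""
    def __init__(self):
        self.messages = []  # the entire short-term "memory"

    def ask(self, model, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = model(self.messages)  # the model sees only this list
        self.messages.append({"role": "assistant", "content": reply})
        return reply

def frozen_model(messages):
    # Stand-in for a trained model: it can read the current message
    # list, but nothing here ever changes its underlying weights.
    facts = " ".join(m["content"] for m in messages if m["role"] == "user")
    if "Carlini wrote for the American Intelligence Journal" in facts:
        return "Acknowledged: Carlini wrote for the American Intelligence Journal."
    return "I have no record of that."

day_one = ChatSession()
day_one.ask(frozen_model, "Carlini wrote for the American Intelligence Journal")
print(day_one.ask(frozen_model, "Did he?"))
# Within the same session, the fact is still in the message list: acknowledged.

day_two = ChatSession()  # a new session starts with an empty message list
print(day_two.ask(frozen_model, "Did Carlini write for the journal?"))
# The "taught" fact is gone: "I have no record of that."
```

In other words, what looks like AI dementia is the absence of any mechanism for writing user-supplied facts back into the model itself; only a retraining or update cycle controlled by the developers can do that.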

In essence, ChatGPT, in its present form, is an overhyped tool that does not live up to its promises. It remains constrained by the limitations of infrequent and potentially selective updates. The fundamental issue of “garbage in, garbage out” still plagues AI development, and ChatGPT is no exception. Its reliance on developer-controlled updates instead of adaptive learning undermines its utility as a reliable and dynamic information source.

While writing this, I reflected on my earlier critiques of social media tools, which also apply here: “We need to distinguish the tools from the toys.” At this point, ChatGPT is more of an AI toy, limited by outdated software development principles. It demonstrates that we still have a long way to go to develop AI tools that truly learn, adapt, and evolve from their interactions.

ChatGPT’s current limitations make it more of a curiosity than a robust, dynamic tool. The issue of information staleness due to infrequent updates further questions its viability as a modern AI application. Until it can autonomously verify and retain new information, it remains an overhyped tool with much room for improvement.

James Carlini is a strategist for mission-critical networks, technology, and intelligent infrastructure. Since 1986, he has been president of Carlini and Associates. Besides being an author, keynote speaker, and strategic consultant on large mission-critical networks, including the planning and design for the Chicago 911 center, the Chicago Mercantile Exchange trading floor networks, and the international network for GLOBEX, he has served as an adjunct faculty member at Northwestern University.