ChatGPT and other artificial intelligence (A.I.) programs have become increasingly ubiquitous as companies pour money into them. The promotion has reached the point, however, that advertising A.I. features in a product can be a turn-off for consumers. Many companies have already replaced human writers with ChatGPT, but one person proves that ChatGPT cannot be relied on for the written word: Donald Trump.
While Trump is obviously a controversial figure who supports A.I., a recent study of his speeches shows how large language models (LLMs) like ChatGPT fall short. Researchers from the College of International Studies at the National University of Defense Technology in China chose Trump's Republican nomination acceptance speech, delivered after he survived an assassination attempt, his post-election victory speech, his 2025 inaugural address, and his speech to Congress to test ChatGPT-4's abilities. They chose those speeches because they are filled with emotionally charged language that often uses metaphors to frame political topics. They tasked ChatGPT-4 with analyzing the roughly 28,000 words across the speeches to see whether it could understand the context of each speech, identify potential metaphors, categorize them by theme, and explain their likely emotional or ideological impact.
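As a rough illustration only, a task like the one the researchers posed could be sketched with the OpenAI Python client along these lines. The prompt wording, the `gpt-4` model name, and the helper function names are hypothetical assumptions for the sake of the example, not the study's actual protocol.

```python
# Hypothetical sketch of a metaphor-analysis task like the one described
# above. Prompt text, model name, and function names are assumptions.
def build_metaphor_prompt(speech_text: str) -> str:
    """Assemble the instruction given to the model for one speech."""
    return (
        "Read the following political speech. Identify potential metaphors, "
        "categorize each by theme, and explain its likely emotional or "
        "ideological impact in context.\n\n"
        f"Speech:\n{speech_text}"
    )

def analyze_speech(speech_text: str) -> str:
    """Send the prompt to a chat model. Requires the `openai` package
    and an OPENAI_API_KEY set in the environment."""
    from openai import OpenAI  # imported here so the sketch loads without it
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",  # stand-in for what the article calls "ChatGPT-4"
        messages=[
            {"role": "user", "content": build_metaphor_prompt(speech_text)}
        ],
    )
    return response.choices[0].message.content
```

Given the study's findings, one would expect output from a setup like this to mix genuine metaphors with misfires, such as proper nouns flagged as figurative language.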
The results showed that while ChatGPT is impressive, it still couldn't fully grasp certain aspects of human language such as metaphor. It recognized some commonly used metaphors, but it struggled to properly identify or explain the context and meaning of others. In some instances it classified something as a metaphor that isn't one, such as Israel's Iron Dome missile defense system, which is a proper noun. Forbes has likewise singled out metaphors as something LLMs like ChatGPT cannot grasp: ChatGPT processes information, but humans can tell and understand stories and parables.
In its defense, ChatGPT could become more sophisticated over time and learn how to classify and use metaphors, similes, idioms, rhetorical questions, and persuasive asides effectively. That doesn't appear to be happening any time soon, however, and as language evolves it will be in a constant state of catching up. LLMs can certainly detect patterns in speech, but they lack nuance and the ability to understand context. At this point, ChatGPT is an enhanced version of the predictive text you regularly use on your phone, and it should be treated as a tool rather than a replacement.
Metaphors aren’t the only drawback for LLMs. When it comes to idioms, generative A.I. like Google’s will take made-up ones such as “you can’t lick a badger twice” and “a loose dog won’t surf” and invent meanings behind them rather than flag them as nonsense. College students who use ChatGPT to write papers are getting caught by professors because the A.I.-generated term papers lack engaging and persuasive language.
Human writing isn’t just more effective than ChatGPT’s; it’s also more economically sound. Companies are ending up rehiring human writers to correct mistakes made by A.I. and to create more persuasive marketing as consumers grow savvier about A.I.-generated content. There is also still a lot of legal red tape regarding copyright infringement and who owns what when a company uses A.I. to create something. Since generative A.I. draws on original works whenever it creates something from a prompt, the jury is quite literally still out on whether the prompt writer owes something to the artists the A.I. is drawing from.
Continuing with economics, A.I. isn’t very cost-efficient at this time. Generating work through A.I. requires a significant amount of electricity, both to train the model and to respond to each prompt it is given. Training ChatGPT-3 alone required enough electricity to power 130 American homes for one year, and each day A.I. data centers consume the same amount of water as 4,200 Americans. While there could be experimentation into finding more energy-efficient ways for A.I. to work, the rush to make A.I. profitable for investors is taking priority over making it less burdensome for consumers, the environment, and our power grid.
At this point, ChatGPT and A.I. aren’t living up to the promise their champions proclaim. They could still offer great opportunities if treated with nuance, time, and an understanding human hand. But if A.I. cannot decipher with full accuracy what our leaders say, it probably shouldn’t be used as a replacement for work that humans can and should do.