Sat Oct 19 2024

Definition of "hallucinations" and their risk in artificial intelligence.

Although language models have reached a high level of sophistication, they sometimes produce incorrect statements, known as hallucinations.

Since its launch in 2022, ChatGPT has become synonymous with artificial intelligence, standing out as an exceptional large language model (LLM) for providing well-crafted and convincing responses on a wide variety of topics, much like an advanced search engine. However, even though the latest version, GPT-4o, can generate and understand text, images, and sound, these chatbots can still experience what are known as "hallucinations."

A hallucination in this context refers to the chatbot generating false or misleading information, which can be potentially risky: as users place more trust in tools like ChatGPT, society becomes more susceptible to misinformation. Hallucinations arise because LLMs are trained on information drawn from the internet. Although algorithms are designed to weight sources by their credibility (an academic study carries more weight than a blog post, for example), responses can still mix correct and incorrect information. ChatGPT illustrated this when it gave an incorrect date for the coronation of King Charles III, even though the rest of its answer was accurate and coherent.

Language models can also produce induced hallucinations, especially when presented with questions built on incorrect premises. For instance, when asked a question that falsely presupposed the Titanic had a single survivor, ChatGPT claimed that Charles Joughin was the only person to survive the sinking.

While there is no foolproof way to prevent ChatGPT and similar models from producing inaccurate responses, it is always advisable to verify the information they provide. This practice may detract from the chatbot's appeal, but it is essential in professional contexts. A useful strategy is to ask ChatGPT directly about its sources; if it has hallucinated, it will often recognize its error and offer a more accurate response.

Overall, ChatGPT and artificial intelligence are transforming various fields, including the way we interact with technology in our daily and professional lives.