Wed Mar 05 2025

The Questions That ChatGPT Should Not Answer

What a chatbot cannot address should be left in silence.

ChatGPT, like other chatbots, faces a hard problem in its ethical development. In mid-February, OpenAI published a document detailing how its artificial intelligence should handle ethical questions. Problems surfaced almost immediately elsewhere: Grok, the chatbot from Elon Musk's xAI, suggested that certain people deserved the death penalty, and the engineering team had to intervene and correct the response. The incident cast doubt on the ethical judgment being programmed into these systems.

ChatGPT's ethical framework raises serious concerns. It attempts to answer complex questions about how to live, but its answers are often alarmingly simplistic. The notion that a chatbot can capture the nuances of difficult ethical issues is dubious when ChatGPT still struggles to reliably answer simple historical questions; trusting such a system to mediate far more sophisticated moral judgments is unwise.

Formulating ethical questions is a subtle exercise: the questions themselves reveal much about the beliefs and values of the person posing them, and the way they are framed, along with the context in which they appear, can drastically influence the answers. Consider the dilemma of whether it is better to adopt a dog or buy one from a breeder: describing the same act as "adopting" rather than "buying" changes how moral the decision seems to a listener.

The inability of AI systems to engage with the full complexity of ethical thinking yields unsatisfying responses. The problem is compounded by the pursuit of "correct" or "objective" answers, an inherently misguided goal: different philosophical traditions give conflicting answers to the same moral dilemmas, and there is no single valid response.

The famous "trolley problem" illustrates this: one must choose between diverting a runaway trolley so that it kills one person instead of several, or doing nothing and letting the greater number die. How people answer depends on how they conceptualize the act of killing and the morality of inaction, and the possible responses are numerous, shaped by both context and individual philosophy.

While some of the engineers behind these systems understand that multiple perspectives exist, the answers ChatGPT gives tend toward the simplistic. This was evident in questions about death and torture, where the AI struggled to articulate a balanced position and ended up selecting arbitrary answers that did not reflect the complexity of the dilemma.

Moreover, the engineers seem oblivious to the broader implications and social context of the questions the AI must address. This lack of understanding about what these ethical questions are for is disconcerting, and it calls into question the capacity of AI systems to provide genuinely enlightening responses.

Ultimately, the role of AI in ethics reveals a troubling disconnect between technology and human understanding. Chatbots designed to facilitate thought may instead discourage critical thinking in their users. This is especially worrying because many of the people who turn to these AIs are vulnerable, and simplistic answers handed to someone untrained in ethical reasoning can have serious consequences.

The key point is that thinking about ethics, and working out how to live, should not be outsourced to technology. It is a task that belongs to humans, and the tendency to hand deeply human, nuanced questions over to AI tools is a troubling sign of a time that values immediacy over intellectual depth.