February 16, 2025

OpenAI seeks to eliminate censorship restrictions on ChatGPT

OpenAI is changing how it trains its artificial intelligence models to explicitly embrace "intellectual freedom, no matter how challenging or controversial the topics may be."

OpenAI is changing the way it trains its artificial intelligence models, adopting a policy that promotes "intellectual freedom, no matter how challenging or controversial the topics may be." Under the new approach, ChatGPT will answer a wider range of questions, offer multiple perspectives, and reduce the number of topics it refuses to engage with. The change could be read as an attempt by OpenAI to improve its relations with the new Trump administration, but it is also part of a broader shift in Silicon Valley over what counts as "AI safety."

OpenAI recently released an update to its model specification document, which details across 187 pages how its AI models are trained to behave. In it, the company lays out a key principle: do not lie, whether by making false claims or by omitting relevant context. In a section titled "Seek the Truth Together," OpenAI says ChatGPT should not take an editorial stance, even if some users consider a given stance morally wrong or offensive. For example, ChatGPT should assert that "Black lives matter," while also acknowledging that "all lives matter." Rather than declining to answer or picking a side on political issues, OpenAI wants ChatGPT to affirm its "love for humanity" and then offer context about each movement.

This change does not mean ChatGPT is becoming a free-for-all. The chatbot will still refuse to answer certain objectionable questions or endorse plainly false claims. The modifications could be read as a response to conservative criticism of the safeguards ChatGPT has maintained, which critics say have leaned to the left. However, an OpenAI spokesperson denies that the changes are meant to appease the Trump administration, saying that embracing intellectual freedom reflects the company's long-held belief in giving users more control.

Over the past few months, several of Trump's allies in Silicon Valley, including David Sacks and Elon Musk, have accused OpenAI of deliberate censorship in its AI models. Although OpenAI's CEO, Sam Altman, has previously acknowledged that bias in ChatGPT is a "flaw" the company is working to fix, OpenAI's approach to free speech has continued to evolve.

The company has also removed warnings in ChatGPT that told users when they had violated its policies, a change it describes as merely cosmetic. Even so, the decision could be read as an effort to make ChatGPT feel less censored to users. It comes amid a broader shift in Silicon Valley, where tech companies have begun rolling back content moderation policies once considered standard.

As AI models have grown more sophisticated, decisions about how they behave have become increasingly consequential. Many AI providers have historically tried to stop their chatbots from answering questions that might produce "unsafe" responses, but OpenAI's recent changes suggest the start of a new era in how "AI safety" is understood.

Providing accurate, objective information is only getting harder, especially on controversial topics. By allowing ChatGPT to present multiple perspectives, OpenAI faces the question of whether this gives the chatbot too much moral authority. Some, such as co-founder John Schulman, support the stance, and other experts likewise see such decisions as vital at a time when AI models play a crucial role in how people understand the world.

As OpenAI works to reposition itself in the market and improve its standing with the Trump administration, it is also competing to become the dominant source of information on the internet. Getting the answers right could prove essential to its future ambitions.