Thu Jan 02 2025

OpenAI did not deliver the exclusion tool it had promised for 2025.

In May, OpenAI announced that it was working on a tool that would allow creators to indicate how they want their works included in, or excluded from, the training data of its artificial intelligence. This tool, called Media Manager, would be able to "identify copyrighted texts, images, audio, and video," according to the company, and reflect creators' preferences "across multiple sources." Media Manager was meant to address some of the sharpest criticism facing OpenAI and to shield the company from potential legal challenges over intellectual property.

However, sources indicate that the tool has not been treated as a priority within the company. A former OpenAI employee said they don't recall anyone working on it. Others close to the company said that while the project had been discussed in the past, there have been no recent updates. That account is consistent with the move of Fred von Lohmann, a member of OpenAI's legal team who was working on Media Manager, to a part-time consulting role in October. OpenAI has provided no updates on Media Manager's progress and has missed its self-imposed deadline to have the tool ready "by 2025."

AI models, like those developed by OpenAI, learn patterns from datasets in order to make predictions. This is what allows them to generate text and video, and it is what makes them powerful tools. But the same ability creates problems: given certain prompts, the models can reproduce their training data almost verbatim, which has frustrated many creators who see their works used for AI training without their consent. Some have taken legal action against OpenAI, accusing the company of using their works unlawfully.

OpenAI has offered ways for creators to "opt out" of its training, but these solutions have been criticized as ineffective. They included a form through which artists could flag their works for removal, a method widely seen as inadequate. Media Manager was presented as the comprehensive alternative, one that would use "cutting-edge machine learning research" to let creators tell OpenAI which works belong to them.

Even if the tool eventually launches, legal experts are not confident it will resolve creators' concerns or the legal questions surrounding the use of intellectual property. Opt-out tools, moreover, do not always account for transformations a work may undergo, or for cases where third-party platforms host copies of a creator's content.

Despite the uncertainty, OpenAI has continued to deploy filters to prevent its models from regurgitating training examples, and it maintains that its models create transformative works. The company argues that it would be impossible to train competitive AI models without using copyrighted material. If the courts rule in OpenAI's favor, Media Manager may serve little legal purpose, which could explain why the company appears willing to reevaluate its opt-out strategy.