Sun Nov 10 2024

ChatGPT rejected 250,000 requests for electoral deepfakes.

OpenAI reported that ChatGPT rejected 250,000 requests to create images of Trump, Vance, Biden, and Harris.

During the recent election season, many people tried to use OpenAI's image generator DALL-E to create deepfakes, but the company's safeguards kept it from being used that way. According to a report from the company, ChatGPT rejected over 250,000 requests to generate images of figures such as President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance, and Governor Walz. The rejections stem from a safety measure OpenAI had implemented earlier, which makes ChatGPT refuse to create images of real individuals, including politicians.

Throughout the year, OpenAI prepared for the presidential election in the United States, putting in place a strategy to keep its tools from contributing to misinformation. As part of that effort, users asking about voting in the United States were redirected to CanIVote.org; the company reported that in the month leading up to election day, ChatGPT directed one million people to that website. On election day and the day after, the chatbot generated two million responses recommending that those seeking results consult news sources such as the Associated Press and Reuters. OpenAI also committed to ensuring that ChatGPT's responses "did not express political preferences or endorse candidates, even when explicitly asked."

However, DALL-E is not the only AI image generator available, and many deepfakes circulated on social media during the election period. One example was a campaign video in which Kamala Harris's image and voice were altered to make her say things she never actually said, such as "I was selected because I am the ultimate diversity hire."