Sat Oct 12 2024

OpenAI announces the disruption of several cybercrime campaigns that were abusing its systems.

Cybercriminals are using ChatGPT in attempts to influence elections worldwide.

OpenAI, the company behind the well-known generative artificial intelligence solution ChatGPT, has reported the recent disruption of multiple malicious campaigns that were abusing its services. In a report, the company stated that it has blocked more than 20 deceptive operations and networks globally since the beginning of 2024.

The tactics used by the criminals varied in nature, scale, and objective. In some cases, attackers employed artificial intelligence to refine malware; in others, they used it to create content such as articles for websites, fake biographies for social media, and fictitious profile pictures.

Despite the seriousness of these actions, OpenAI asserted that the threat actors did not make significant progress with these campaigns. The company clarified that while criminals continued to evolve and experiment with its models, it has seen no evidence of this leading to meaningful breakthroughs in creating new malware or building viral audiences.

Given that 2024 is an election year, not only in the United States but in much of the world, OpenAI has detected that malicious actors used ChatGPT to try to influence pre-election campaigns. One group, called “Zero Zeno” and based in Israel, was named as having generated social media comments about the elections in India; that campaign was disrupted less than 24 hours after it began.

Additionally, in June 2024, just before the European Parliament elections, OpenAI stopped an operation called “A2Z,” which focused on Azerbaijan and its neighboring countries. Other notable cases included comments generated about the European Parliament elections in France, as well as about political debates in Italy, Poland, Germany, and the United States.

Fortunately, none of these campaigns had a significant impact, and once OpenAI blocked them, they ceased completely. Most of the social media posts identified as generated by its models received no likes, shares, or comments, although in some instances real people did respond to such posts. OpenAI concluded that after it restricted access to its models, the social media accounts associated with these operations stopped posting during the election periods in the European Union, the United Kingdom, and France.