Sat Oct 12 2024

OpenAI uses its own models to combat electoral interference.

OpenAI takes strong action against cybercrime campaigns that use its systems to manipulate elections globally. Learn the details in the full report.

OpenAI has published a report revealing that over the course of 2024 it has blocked more than 20 deceptive operations and networks around the world. These operations varied in objective, scale, and focus, and used its models to generate malware as well as to write content for fake social media accounts, fictitious biographies, and articles on various websites. OpenAI conducted a detailed analysis of the activities it halted and shared its key findings.

According to the report, threat actors continue to adapt and experiment with OpenAI's models. So far, however, no evidence has been found that these activities have led to significant advances in creating novel malware or in building viral audiences. This matters because 2024 is an election year in several countries, including the United States, Rwanda, and India, as well as in the European Union. In July, for instance, OpenAI blocked several accounts that generated comments about the elections in Rwanda, which various accounts then published on X (formerly Twitter). Encouragingly, OpenAI states that threat actors have not made considerable progress with these campaigns.

Another notable achievement highlighted by the organization was the disruption of a China-based threat group known as "SweetSpecter," which attempted spear-phishing attacks against the corporate and personal email addresses of OpenAI employees. In August, Microsoft revealed a set of domains attributed to an Iranian covert influence operation called "STORM-2035." Based on this information, OpenAI investigated, disrupted, and reported a set of associated activity on ChatGPT. OpenAI also noted that the posts generated with its models failed to attract much attention, receiving few or no comments, "likes," or shares.

OpenAI says it will continue working to anticipate how malicious actors use advanced models for harmful purposes and to take the actions needed to halt these activities.

On the model front, after months of speculation OpenAI has announced the launch of a reasoning model called "o1," previously known as "Project Strawberry." Alongside it, a "mini" version has also been released that promises faster, more responsive interactions, though with access to a narrower knowledge base. "o1" is the first model in OpenAI's reasoning line, designed to apply human-like deduction to answer complex questions in areas such as science, programming, and mathematics more quickly than a human can.

OpenAI has surpassed one million subscribers, who pay around $20 per month (more for Teams and Enterprise plans). That does not appear to be enough to keep the company financially viable, however: reports suggest it could raise subscription prices to as much as $2,000 per month for access to its latest models, amid rumors of a potential bankruptcy.

Additionally, the Open Source Initiative (OSI) has updated its definition of what constitutes "open-source AI." The change could exclude models from large companies like Meta and Google, and it emphasizes the need to extend to AI developers and users the same fundamental principles that have benefited open-source software communities.