Fri Mar 14 2025

AI agents can be manipulated to create and send phishing attacks.

A study warns that artificial intelligence agents can be easily manipulated into carrying out attacks.

Researchers have raised alarms about the potential use of artificial intelligence agents to create and distribute phishing attacks. Advances in tools like OpenAI's Operator have made cybercriminals' work significantly easier; attackers have already been using artificial intelligence to carry out cyberattacks.

With the emergence of these "agents," criminals' ability to execute sophisticated attacks has increased, allowing even individuals with limited skills to mount successful attacks. Researchers at Symantec were able to use Operator to identify a target, obtain their email address, and craft a PowerShell script designed to collect system information, which was then sent to the victim alongside a convincing lure.

In the demonstration, Operator initially refused to proceed, flagging the request as unsolicited mail that could violate privacy and security policies. After the researchers adjusted the prompt, however, the agent posed as a technical support worker and sent the malicious email. This poses a serious risk to security teams, as studies indicate that more than two-thirds of data breaches involve human error.

The researchers speculate that it won't be long before these agents become even more powerful. A plausible scenario would be an attacker simply instructing an agent to "compromise Acme Corp" and the system working out the optimal steps on its own, which could include writing and compiling executables, setting up command-and-control infrastructure, and establishing persistence within the target network. Such capability would significantly lower the barrier to entry for attackers.

AI agents are designed to function as virtual assistants, handling tasks such as scheduling appointments, organizing meetings, and drafting emails. OpenAI says it takes these concerns seriously: its usage policies prohibit the use of its services for illegal activities, including fraud and deception, and it has implemented proactive security measures and strict usage limitations to mitigate potential abuse.