Tue Apr 15 2025

"Slopsquatting Attacks Use AI-Generated Names Similar to Popular Libraries to Spread Malware."

Artificial intelligence does not make these mistakes at random: when it confuses or invents open-source package names, the errors recur predictably and can be analyzed and understood.

Experts have warned about a new attack technique enabled by generative artificial intelligence (GenAI), known as 'slopsquatting.' It exploits the tendency of GenAI tools, such as ChatGPT or Copilot, to hallucinate, which in the programming world can mean confidently recommending open-source software packages that never existed.

Sarah Gooding, a specialist at Socket, points out that many developers rely on GenAI to write code. This technology can generate lines of code on its own or suggest packages for developers to integrate into their projects. However, the AI does not always name real packages. A study found that when a prompt known to trigger a hallucination was repeated ten times, 43% of the fictitious packages reappeared in every run, while 39% never reappeared at all. Overall, 58% of the hallucinated packages recurred at least once across the ten attempts, suggesting that these hallucinations are not merely random noise but predictable responses to specific prompts.
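To make that experiment concrete, here is a minimal sketch of the repeatability measurement in Python. The `suggest_packages` helper is hypothetical, a stand-in for whatever model API a real study would query; the counting logic is the point.

```python
import collections

# Hypothetical stand-in for an LLM call: given a prompt, return the set of
# package names the model suggests. A real experiment would replace this
# with an actual model API client.
def suggest_packages(prompt: str) -> set[str]:
    raise NotImplementedError("replace with a real LLM call")

def repeatability_stats(prompt: str, runs: int = 10) -> None:
    """Rerun the same prompt and measure how often each suggested
    (possibly hallucinated) package name comes back."""
    counts = collections.Counter()
    for _ in range(runs):
        for name in suggest_packages(prompt):
            counts[name] += 1

    if not counts:
        print("no suggestions collected")
        return

    total = len(counts)
    in_every_run = sum(1 for c in counts.values() if c == runs)
    seen_once = sum(1 for c in counts.values() if c == 1)
    repeated = total - seen_once

    print(f"{total} distinct names over {runs} runs")
    print(f"reappeared in every run: {in_every_run / total:.0%}")
    print(f"never reappeared:        {seen_once / total:.0%}")
    print(f"repeated at least once:  {repeated / total:.0%}")
```

A name that reappears in every run is exactly the kind of stable, predictable target the next paragraph describes.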

Although the threat remains theoretical for now, cybercriminals could harvest the package names an AI repeatedly invents and register them on open-source platforms. A developer who then searches for one of those names on a platform like GitHub or PyPI would find a real, downloadable package, unaware that it is malicious.

Fortunately, no confirmed cases of slopsquatting have been documented so far; however, experts suggest it is only a matter of time before they occur. Because hallucinated names appear to be identifiable and predictable, security researchers will likely uncover registered squats in the near future. The best defense against these potential attacks is to treat suggested dependencies with caution, whether they come from humans or from artificial intelligence.
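One practical precaution is to check every AI-suggested dependency against the registry before installing it. The sketch below uses PyPI's public JSON API, which returns HTTP 404 for names that have never been registered; the example package names are purely illustrative.

```python
import json
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if a package with this exact name is registered on PyPI.

    PyPI's JSON API answers HTTP 404 for names that have never been
    registered, which makes hallucinated names easy to spot."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # parse to confirm a valid metadata document
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other errors (rate limits, outages) need a human decision

# Screen AI-suggested dependencies before running `pip install`.
suggested = ["requests", "flask-jwt-tokenizer"]  # second name is invented
for name in suggested:
    verdict = "exists" if exists_on_pypi(name) else "NOT on PyPI: possible hallucination"
    print(f"{name}: {verdict}")
```

Existence alone proves nothing, though: a slopsquatter may have already registered the hallucinated name, so release history, download counts, and maintainer reputation still deserve a human look before anything is installed.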