
Microsoft identifies the cybercriminals responsible for creating explicit deepfakes.
Microsoft has identified the members of the "Azure Abuse Enterprise."
Microsoft has updated its lawsuit against the criminal group Storm-2139, naming four of its members. The group is accused of creating illegal deepfakes by using API keys stolen from Microsoft customers to access the Azure OpenAI Service. It also employed malicious tools that allowed threat actors to bypass the safeguards of generative AI models and produce harmful and illegal content.
The group, known as the "Azure Abuse Enterprise," is considered part of a global cybercriminal network that Microsoft tracks under the name Storm-2139. The individuals named in the lawsuit are Arian Yadegarnia, also known as "Fiz," from Iran; Alan Krysiak, also known as "Drago," from the UK; Ricky Yuen, also known as "cg-dot," from Hong Kong; and Phát Phùng Tấn, also known as "Asakuri," from Vietnam.
Microsoft's Digital Crimes Unit (DCU) initially filed the lawsuit against ten unnamed "John Doe" defendants for violating U.S. law and the acceptable use policy of its generative AI services; the complaint has now been amended to identify specific individuals.
In this new phase, Microsoft detailed in the lawsuit how the API keys for the Azure OpenAI Service were abused. As a result, a GitHub repository was disabled, and a court order was obtained allowing Microsoft to seize a domain linked to the criminal operation.
The lawsuit claims that the group's members fall into three categories: creators, providers, and users. The defendants allegedly used customer credentials obtained from public sources (likely exposed through data leaks) to illegally access accounts on generative AI services. They then altered the capabilities of these services and resold access to other malicious actors, supplying detailed instructions on how to generate harmful and illegal content, including non-consensual intimate images of celebrities and other sexually explicit material.