AI-powered identity fraud attacks are deceiving biometric security systems.
Fully deepfake-generated synthetic selfies can bypass traditional identity verification (KYC) procedures.
The recent Global Identity Fraud Report by AU10TIX highlights a concerning rise in identity fraud, driven largely by AI-based attacks. An analysis of millions of transactions between July and September 2024 revealed that digital platforms, particularly in social media, payments, and cryptocurrency, are facing unprecedented challenges.
Fraud tactics have evolved significantly, shifting from simple document forgery to sophisticated synthetic identities, deepfake images, and automated bots that evade traditional verification systems. In the lead-up to the 2024 U.S. presidential election, social media platforms saw a marked increase in automated attacks, which accounted for 28% of all fraud attempts in the third quarter of 2024, up from just 3% in the first quarter.
These attacks, which focus on disinformation and the manipulation of public opinion, use advanced Generative AI (GenAI) techniques to avoid detection. GenAI-supported attacks began to proliferate in March 2024 and peaked in September, and are believed to have significantly shaped public perception by spreading false narratives and inflammatory content.
One of the most alarming revelations in the report is the emergence of fully deepfake-generated synthetic selfies: hyper-realistic images designed to mimic authentic facial features and evade verification checks. Although selfies were previously regarded as a reliable basis for biometric authentication, the technology required to convincingly forge them is now within the reach of criminals.
AU10TIX notes that these synthetic selfies present a unique challenge to traditional KYC (Know Your Customer) procedures. This suggests that organizations relying solely on facial recognition technology will need to reevaluate and strengthen their detection methods.
Furthermore, fraudsters are using artificial intelligence to generate variations of synthetic identities through "image template" attacks: a single identification template is manipulated to create multiple seemingly unique identities, each with randomized photographic elements and other personal identifiers. This allows attackers to rapidly open fraudulent accounts across many platforms, exploiting AI's ability to scale the creation of synthetic identities.
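One way defenders can counter template reuse, sketched below purely as an illustration (this is not AU10TIX's disclosed method), is to compare perceptual hashes of submitted document images: documents generated from one shared template differ only in small regions such as the swapped portrait, so their hashes stay within a few bits of each other even though the "identities" look distinct.

```python
def dhash(pixels):
    """Difference hash: 1 where a pixel is brighter than its right neighbor.

    `pixels` is an 8x8 grid of grayscale values, yielding a 56-bit signature
    that is robust to small, localized edits like a replaced photo.
    """
    return [1 if row[c] > row[c + 1] else 0
            for row in pixels for c in range(len(row) - 1)]

def hamming(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_like_template_reuse(img_a, img_b, threshold=5):
    # Hashes within a few bits of each other suggest a shared template.
    return hamming(dhash(img_a), dhash(img_b)) <= threshold

# Two "documents" built from one template, differing only in a small
# region (like a swapped portrait), keep nearly identical hashes.
doc_a = [[10 + (r * 8 + c) % 3 for c in range(8)] for r in range(8)]
doc_b = [row[:] for row in doc_a]
doc_b[0][0] += 40  # localized change: the replaced photo region

print(looks_like_template_reuse(doc_a, doc_b))  # → True
```

Real systems would of course work on full-resolution images with hardened hashing libraries, but the principle of flagging near-duplicate document structure across supposedly unrelated applicants is the same.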
In the payments sector, the fraud rate decreased from 52% in the second quarter to 39% in the third, and AU10TIX attributes this improvement to increased regulatory oversight and law enforcement interventions. However, despite the reduction in direct attacks, the payments industry remains the most targeted, with many criminals, deterred by enhanced security, redirecting their focus towards the cryptocurrency market, which accounted for 31% of all attacks in the third quarter.
AU10TIX advises organizations to abandon traditional document-based verification methods. A key recommendation is to adopt behavior-based detection systems that go beyond standard identity checks. By analyzing patterns in user behavior, such as login routines and traffic sources, companies can identify anomalies that may indicate potential fraudulent activity.
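A minimal sketch of what such behavior-based detection might look like, with hypothetical signals and thresholds chosen only for illustration: score each login against the account's historical login hours and traffic sources, and flag logins that deviate sharply from the established pattern.

```python
from statistics import mean, stdev

def anomaly_score(login_hour, source_asn, history):
    """Score a login against past behavior.

    `history` is a list of (hour, asn) tuples from previous legitimate
    logins; `source_asn` stands in for the traffic source (network).
    """
    hours = [h for h, _ in history]
    mu, sigma = mean(hours), stdev(hours)
    # How far this login's hour deviates from the account's routine.
    hour_dev = abs(login_hour - mu) / sigma if sigma else 0.0
    # A never-before-seen traffic source adds a fixed penalty.
    new_source = source_asn not in {a for _, a in history}
    return hour_dev + (2.0 if new_source else 0.0)

def is_suspicious(login_hour, source_asn, history, threshold=3.0):
    return anomaly_score(login_hour, source_asn, history) >= threshold

# A user who normally logs in around 9-11 a.m. from network ASN 1234:
history = [(9, 1234), (10, 1234), (11, 1234), (10, 1234), (9, 1234)]
print(is_suspicious(10, 1234, history))  # routine login → False
print(is_suspicious(3, 9999, history))   # 3 a.m. from a new network → True
```

Production systems combine many more signals (device fingerprints, typing cadence, velocity across accounts) and use trained models rather than fixed thresholds, but the core idea is the same: anomalies in behavior can expose fraud that a forged document or synthetic selfie alone would not.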
Dan Yerushalmi, CEO of AU10TIX, points out that criminals are rapidly evolving, leveraging artificial intelligence to scale and execute their attacks, especially in the social media and payments sectors. While companies are using AI to enhance security, criminals are also harnessing the same technology to create synthetic selfies and forged documents, making detection nearly impossible.