AI Bypasses Biometric Security, Posing A $138.5 Million Financial Fraud Risk

Security researchers uncover massive AI-driven biometric security bypass (Image: Getty)

When a prominent Indonesian financial institution reported a deepfake fraud incident impacting its mobile application, threat intelligence specialists at Group-IB set out to determine exactly what had happened. Despite this large organization having multiple layers of security, as any regulated industry would require, including defenses against rooting, jailbreaking and exploitation of its mobile app, it fell victim to a deepfake attack. Despite having dedicated mobile app security protections such as anti-emulation, anti-virtual-environment and anti-hooking mechanisms, the institution still fell victim to a deepfake attack. I’ve made a point of repeating this because, like many organizations within and outside the finance sector, the institution had enabled digital identity verification incorporating facial recognition and liveness detection as a secondary verification layer. This report, this warning, shows just how easy it is becoming for threat actors to bypass what were, until very recently, considered state-of-the-art security protections.

Here’s How AI Is Bypassing Biometric Security In Financial Institutions

The Group-IB fraud investigation team was asked to help investigate an unnamed but “prominent” Indonesian financial institution following a spate of more than 1,100 deepfake fraud attempts used to bypass its loan application security processes. With more than 1,000 fraudulent accounts detected, and a total of 45 mobile devices identified as being used in the fraud campaign, most running Android but a handful using the iOS app, the team was able to analyze the techniques used to bypass the “Know Your Customer” and biometric verification systems in place.

“The attackers obtained the victim’s ID through various illicit channels,” Yuan Huang, a cyber fraud analyst with Group-IB, said, “such as malware, social media, and the dark web, manipulated the image on the ID—altering features like clothing and hairstyle—and used the falsified photo to bypass the institution's biometric verification systems.” The deepfake incident raised significant concerns for the Group-IB fraud protection team, Huang said, but the resulting research highlighted “several key aspects of deepfake fraud.”

Key Discoveries Uncovered By The Group-IB Research Into The AI Deepfake Attack

The key discoveries resulting from the Group-IB investigation into the Indonesian cyber attack were as follows:

AI Deepfake Fraud Has A Financial And Societal Impact

The Group-IB fraud investigators determined that AI deepfake fraud of the type used against this one financial institution posed a significant financial risk. “Potential losses in Indonesia alone,” Huang said, were “estimated at $138.5 million.” Then there are the societal implications, which include threats to personal and national security as well as to the integrity of financial institutions, with all the economic impact that entails. To reach the $138.5 million figure, Group-IB estimated that approximately 60% of Indonesia’s population was “economically active and eligible for loan applications,” which works out to about 166.2 million individuals aged between 16 and 70. Applying the 0.05% fraud rate detected at the bank it was analyzing gives an estimate of 83,100 fraud cases nationwide, and given an average fraudulent loan size of $5,000, Group-IB said, “the estimated financial damage could reach US$138.5 million over three months.”
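For readers who want to retrace the arithmetic, the sketch below reproduces the estimate from the figures quoted above. One wrinkle: 83,100 cases at $5,000 apiece comes to $415.5 million, and the quoted $138.5 million is exactly one third of that, so the sketch treats a one-third conversion over the three-month window as an inferred assumption rather than something stated in the report.

```python
# Back-of-envelope reconstruction of Group-IB's loss estimate, using only the
# figures quoted above. The one-third scaling is an inference to reconcile the
# per-case numbers with the quoted total; it is not stated in the report.

eligible = 166_200_000    # ~60% of Indonesia's population, aged 16 to 70
fraud_rate = 0.0005       # 0.05% detected fraud rate at the studied bank
avg_loan = 5_000          # average fraudulent loan size, USD

cases = eligible * fraud_rate        # 83,100 projected fraud cases nationwide
full_exposure = cases * avg_loan     # $415.5M if every projected case converts
quoted_estimate = full_exposure / 3  # $138.5M, matching the figure quoted above

print(f"{cases:,.0f} cases, ${quoted_estimate / 1e6:.1f}M over three months")
```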

AI Deepfakes And Advanced App Cloning At Play

The report highlighted that the fraudsters in this case used AI-generated deepfake images to bypass the biometric verification systems, including getting around the liveness detection protections. “Leveraging advanced AI models,” Huang explained, “face-swapping technologies enable attackers to replace one person’s face with another’s in real time using just a single photo.” This not only creates the illusion of a legitimate individual in the video but, Huang continued, “these technologies can effectively deceive facial recognition systems due to their seamless, natural-looking swaps and the ability to convincingly mimic real-time expressions and movements.” The fraudsters also exploited virtual camera software to manipulate the biometric data, using pre-recorded videos to mimic real-time facial recognition. The use of app cloning further enabled the fraudsters to simulate multiple devices, highlighting vulnerabilities in traditional fraud detection systems.
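To make the app-cloning signal concrete, here is a minimal sketch, in Python, of one way a fraud team might correlate hardware-level fingerprints across loan applications. The `Application` fields and the threshold are illustrative assumptions, not Group-IB's actual detection logic.

```python
# Minimal sketch: app cloning lets one physical handset pose as many "devices",
# so accounts sharing a stable hardware fingerprint are worth a second look.
# The fingerprint field and the threshold are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Application:
    account_id: str
    hardware_fingerprint: str  # e.g. a hash of stable hardware identifiers

def flag_cloned_devices(apps: list[Application],
                        max_accounts_per_device: int = 3) -> dict[str, set[str]]:
    """Return fingerprints tied to more accounts than one user plausibly owns."""
    accounts_by_device: dict[str, set[str]] = defaultdict(set)
    for app in apps:
        accounts_by_device[app.hardware_fingerprint].add(app.account_id)
    return {fp: accounts for fp, accounts in accounts_by_device.items()
            if len(accounts) > max_accounts_per_device}

# In the campaign described above, some 1,100 attempts traced back to just 45
# handsets -- roughly 24 accounts per device, far beyond any honest threshold.
demo = [Application(f"acct-{i}", "fp-A") for i in range(24)]
print(flag_cloned_devices(demo))
```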

AI Deepfakes Have Introduced Unprecedented Security Challenges For Financial Institutions

There can be no doubt, Huang said, that the emergence of AI deepfake technologies has introduced unprecedented challenges for financial institutions, “disrupting traditional security measures and exposing vulnerabilities in identity verification processes.”

The Group-IB investigation has certainly brought the multifaceted issues of deepfakes into the light, covering everything from emulation exploitation to app cloning, techniques used to help these advanced AI attacks evade detection. “These tactics enable fraudsters to impersonate legitimate users, manipulate biometric systems, and exploit gaps in existing anti-fraud measures,” Huang warned, concluding that “financial institutions must move beyond single-method verification, enhancing account verification processes and adopting a multi-layered approach that integrates advanced anti-fraud solutions.”
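As a closing illustration of what that multi-layered approach could look like, the Python sketch below fuses several independent signals, a face-match score, a liveness score, a virtual-camera flag and a device-reuse count, into a single decision. Every signal name, weight and threshold here is an assumption chosen for illustration, not a description of any institution's real pipeline.

```python
# Illustrative multi-signal risk scoring: no single check decides the outcome.
# All weights and thresholds below are assumptions made for this sketch.

def risk_score(face_match: float, liveness: float,
               virtual_camera_detected: bool, device_reuse_count: int) -> float:
    """Combine independent fraud signals into one score (higher = riskier)."""
    score = 0.0
    score += (1.0 - face_match) * 0.3   # a weak face match raises risk a little
    score += (1.0 - liveness) * 0.3     # a weak liveness score likewise
    if virtual_camera_detected:         # virtual camera feeds are a strong flag
        score += 0.4
    if device_reuse_count > 3:          # one handset behind many accounts
        score += 0.4
    return score

def decide(score: float, review_at: float = 0.5, block_at: float = 0.8) -> str:
    if score >= block_at:
        return "block"
    return "manual_review" if score >= review_at else "approve"

# A convincing deepfake can pass the biometric checks (high scores) and still
# be blocked on the device and camera signals -- the point of layering.
print(decide(risk_score(face_match=0.92, liveness=0.88,
                        virtual_camera_detected=True, device_reuse_count=24)))
```

The design point, consistent with Huang's conclusion above, is that the biometric match alone never carries the decision: a feed that defeats liveness detection can still trip the device-integrity and reuse signals.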