AI-driven e-commerce fraud is surging, but you can fight back with more AI

Juniper Research argues the only way to beat them is to join them

The Register

E-commerce fraud is expected to surge in the next five years thanks to AI, and merchants are advised to respond with ... AI.

Juniper Research, a Hampshire, UK-based consultancy, put out a report on Monday predicting that the value of e-commerce fraud will rise from $44.3 billion in 2024 to $107 billion in 2029 – a 141 percent increase.

The firm says AI tools have allowed fraudsters to stay ahead of security measures and enabled attacks of greater sophistication, scale, and frequency. It points to how easily the creation of fake accounts and synthetic identities can be automated to defraud merchants. And these attacks, it's claimed, can overwhelm rules-based prevention systems.

Thomas Wilson, the report's author, said in a statement, "E-commerce merchants must seek to integrate fraud prevention systems that offer AI capabilities to quickly identify emerging tactics. This will prove especially important in developed markets, where larger merchants are at higher risk of being targeted for fraud, such as testing stolen credit cards."

The potential for AI to help craft credible scams has become a matter of broad public concern. In May, California Attorney General Rob Bonta warned Californians about AI-powered hoaxes that rely on "deepfakes" to impersonate family members and government officials. And the FTC last month announced Operation AI Comply, five legal actions against companies making exaggerated AI claims or selling AI technology that can be used to deceive.

Academics studying AI safety have also sounded the alarm about the deceptive potential of AI. Last year, in a preprint paper, researchers from MIT, Australian Catholic University, and the Center for AI Safety said, "Various AI systems have learned to deceive humans. This capability creates risk. But this risk can be mitigated by applying strict regulatory standards to AI systems capable of deception, and by developing technical tools for preventing AI deception."

Political leaders, however, have rejected strict regulatory standards over concerns about economic harm. Last month in California, for example, Governor Gavin Newsom vetoed SB 1047, regarded as one of the broadest attempts to legislate AI to date. AI companies lobbied against the bill.

Nonetheless, other proposed rules aimed at AI-enabled fraud, like the No AI Fraud Act, await adoption by US lawmakers. Europe's Artificial Intelligence Act, a comprehensive legal framework for AI transparency and accountability, took effect in August, and most of its provisions will be enforced by August 2026.

Juniper's contribution to these concerns involves urging merchants to fight fire with fire, so to speak, because AI fraud detection mechanisms can be helpful in addressing first-party scams – when customers knowingly defraud merchants for personal gain – and other forms of fraud. "For example, AI can detect unusual spending patterns, unexpected changes in customer behavior, or multiple accounts associated with a single device," the firm explains in a white paper.
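
As a rough sketch of the kind of pattern matching described there, the snippet below trains an off-the-shelf anomaly detector on synthetic "normal" order features and scores two suspicious-looking orders. The features, numbers, and choice of model are illustrative assumptions, not anything taken from Juniper's white paper or a real vendor's system.

    # Illustrative only: synthetic order features plus a generic anomaly
    # detector, standing in for whatever commercial fraud systems actually use.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Per-order features: [order value in USD, orders in the last 24h, accounts seen on this device]
    legit = np.column_stack([
        rng.normal(60, 20, 500),   # typical basket sizes
        rng.poisson(1, 500),       # the odd repeat purchase
        np.ones(500),              # one account per device
    ])
    suspect = np.array([
        [900.0, 15.0, 6.0],        # big-ticket, rapid-fire orders from a shared device
        [5.0, 40.0, 12.0],         # card-testing pattern: many tiny orders, many accounts
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(legit)

    print(model.predict(suspect))        # -1 marks an order as an outlier
    print(model.score_samples(suspect))  # lower scores are more anomalous

Real deployments feed in far more signals, but the principle is the same: learn what normal orders look like, then flag the ones that don't.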

There are drawbacks, however: these systems need a lot of data, and the infrastructure and talent to run them come at a cost. AI fraud detection may also generate false positives. "Genuine customers who use unfamiliar browsers and VPNs (Virtual Private Networks) are more likely to be flagged as fraudulent users; reducing customer satisfaction and losing revenue for the merchant," Juniper Research explains.
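
To put that false-positive trade-off in concrete terms, here is a toy calculation with invented risk scores (not figures from Juniper's report): lowering the risk threshold catches more fraud, but it also flags more genuine customers whose VPN or fresh browser profile merely looks unusual.

    # Invented risk scores illustrating the trade-off: a looser (lower) threshold
    # catches more fraud but also blocks more genuine, merely-unusual customers.
    import numpy as np

    rng = np.random.default_rng(1)

    genuine_regulars = rng.normal(0.2, 0.10, 10_000)   # familiar device and browser
    genuine_vpn_users = rng.normal(0.5, 0.15, 500)     # VPN, fresh browser profile
    fraudsters = rng.normal(0.8, 0.10, 100)

    for threshold in (0.6, 0.7, 0.8):
        caught = (fraudsters > threshold).mean()
        vpn_flagged = (genuine_vpn_users > threshold).mean()
        blocked = (genuine_regulars > threshold).sum() + (genuine_vpn_users > threshold).sum()
        print(f"threshold {threshold}: fraud caught {caught:.0%}, "
              f"genuine VPN users flagged {vpn_flagged:.0%}, "
              f"genuine orders blocked {blocked}")

In this toy setup, the loosest threshold catches almost all the fraud but also flags a substantial share of the legitimate VPN users; the strictest does the reverse. That is the tension merchants have to tune for.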

In addition, the AI involved – machine learning – often operates in a way that's not easily explained, which makes it difficult to improve fraud prediction algorithms based on observed errors.

Nonetheless, Juniper's answer to AI is more AI, which doesn't seem like it will end well. ®