The Race Is on to Keep AI Agents From Running Wild With Your Credit Cards

WIRED


Between malware, online impersonation, and account takeovers, there are enough digital security problems out there as it is. And with the rise of agentic AI, more activity is being carried out by agents on behalf of humans—creating different risks that something could go awry.

Now, working with initial contributions from Google and Mastercard, the authentication-focused industry association known as the FIDO Alliance said on Tuesday that it will launch a pair of working groups to develop industry standards for validating and protecting payments and other transactions carried out by AI agents.

The goal is to produce a protective baseline that can be adopted across industries. This way, users can authorize agent actions using mechanisms that can't easily be phished or taken over by a bad actor to give an agent rogue instructions. The standards would also include cryptographic tools that digital services could use to confirm agents are accurately and legitimately carrying out an authenticated person's instructions, as well as privacy-preserving frameworks that give users, merchants, and other service providers the ability to validate transactions initiated by agents. In other words, the work aims to create protections against agent hijacking or other rogue behavior, as well as transparency and accountability mechanisms for recourse in the event of a dispute.

“Agents are becoming more and more common, they're moving into mainstream use, but preexisting models aren’t necessarily designed for this sort of paradigm—they weren't built to contemplate actions performed on a user’s behalf,” Andrew Shikiar, CEO of the FIDO Alliance, tells WIRED.

He adds, “If we look back on our work in recent years on the massive problem space of passwords, that originated decades ago. The security foundation for what became our connected economy wasn’t fit for purpose. Now we’re at a similar precipice with agentic agents and agentic interactions, agentic commerce where we have an opportunity to not go down that same path and establish some foundational principles that will allow for more trusted interactions."

Developing technical standards that are widely applicable across industries and facilitate interoperability is a painstaking process that often takes years. But given the rapid advancement and adoption of agentic AI, representatives of the FIDO Alliance, Google, and Mastercard all emphasized that this process must move more quickly. To this end, both companies are contributing open source tools to the initiative. Google's Agent Payments Protocol, or AP2, offers a mechanism for cryptographically verifying that a user really intended for a given agent-initiated transaction to take place. Mastercard's Verifiable Intent framework (codeveloped by Google to work with AP2) is a secure mechanism for users to authorize and control agent actions.
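The core idea behind cryptographically verifying user intent can be illustrated with a toy sketch. This is not AP2 itself (AP2 is built on verifiable-credential "mandates" with public-key signatures); here an HMAC over a canonical JSON encoding stands in for the signature, and all field names (`agent_id`, `max_price_usd`, and so on) are hypothetical:

```python
import hashlib
import hmac
import json

def sign_mandate(mandate: dict, key: bytes) -> str:
    """Sign a canonical JSON encoding of the mandate (HMAC-SHA256 stand-in)."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str, key: bytes) -> bool:
    """Check, in constant time, that the mandate matches its signature."""
    return hmac.compare_digest(sign_mandate(mandate, key), signature)

# The user's device signs what the agent is allowed to do.
key = b"user-device-secret"  # hypothetical shared secret
mandate = {"agent_id": "shopper-01", "item": "sneakers", "max_price_usd": 100}
sig = sign_mandate(mandate, key)

# A verifier holding the key can confirm the instruction is authentic...
assert verify_mandate(mandate, sig, key)
# ...and any tampering (say, an agent raising its own spending cap) fails.
tampered = dict(mandate, max_price_usd=10_000)
assert not verify_mandate(tampered, sig, key)
```

The point of the sketch is the property, not the primitive: whoever checks the transaction can prove the user authorized exactly these terms, and no intermediary (including the agent) can alter them undetected.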

“We want to provide cryptographic proof that a transaction was authorized by the user themself, but keep it private so there is built-in selective disclosure," says Stavan Parikh, Google’s vice president and general manager of payments. “Different players in the ecosystem—platforms, merchants, payment providers, networks—only see the information that’s relevant to them, but the right action gets fulfilled at the right time. Payments is a complex ecosystem problem."

Parikh offers the example of a person who goes to buy a pair of sneakers but finds that they are sold out. The buyer instructs an AI agent to autonomously purchase the sneakers if they ever come back in stock and cost $100 or less. The goal is to provide authentication and transparency around this transaction so if the perfect sneaker drop ever comes around, the consumer ends up with the right shoes at the price they intended.
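Parikh's sneaker scenario amounts to a simple guard: before spending anything, the agent checks the live offer against the conditions the user actually authorized. A minimal sketch, assuming a hypothetical mandate and offer shape (none of these field names come from AP2 or Verifiable Intent):

```python
def agent_may_purchase(mandate: dict, offer: dict) -> bool:
    """Act only if the live offer matches what the user authorized:
    right item, back in stock, and at or under the price cap."""
    return (
        offer["item"] == mandate["item"]
        and offer["in_stock"]
        and offer["price_usd"] <= mandate["max_price_usd"]
    )

mandate = {"item": "sneakers", "max_price_usd": 100}

# The drop happens at $95: within the user's instructions, so buy.
assert agent_may_purchase(
    mandate, {"item": "sneakers", "in_stock": True, "price_usd": 95}
)
# The drop happens at $120: outside the mandate, so the agent must not act.
assert not agent_may_purchase(
    mandate, {"item": "sneakers", "in_stock": True, "price_usd": 120}
)
```

The standards work described in the article adds what this toy check lacks: a way for the merchant and payment network to independently verify that the guard reflects the user's signed instructions, rather than trusting the agent's own word.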

Establishing these baseline protections is key to promoting trust in agentic AI and promoting adoption of AI-powered tools, Parikh notes. Whether users are looking to adopt AI capabilities or not, though, the reality of their proliferation means that minimum guardrails are necessary either way.

While the AP2 and Verifiable Intent contributions will give the working groups a major head start, they will still need to build out a body of practical examples and use cases to ensure that the tech will work in real life. And then users, platforms, merchants, payment providers, and others across sectors will need to be able to realistically adopt and support the protocols at scale.

Looking at the pace of development in agentic AI, Pablo Fourez, Mastercard’s chief digital officer, emphasizes that the urgency behind this FIDO Alliance effort is justified.

“This tech is evolving very, very fast, so it compresses standards timelines that in the past might have taken two or three years,” Fourez says. “Regular people just want to know at the end of the day that it will work and they can trust it. And we will always have the cardholder's back, but when bad actors exploit something like this the cost of supporting that is very high. We need to get this tech adopted so we can stand behind consumers and merchants in an effective way.”