White House AI Framework Pushes Age Verification ID Mandate
by Dan Frieth · Reclaim The Net

The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.”
The administration wants federal law to preempt the states. That part is straightforward. What the framework actually proposes is less straightforward.
Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship.
The White House is presenting all of this as part of the same coherent package.
Start with the child protection section: Congress should establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age verification on AI platforms. The framework calls these requirements “privacy protective.” They are not.
There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen.
The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, third-party verification services) or biometric estimation.
The only way to prove that someone is old enough to use a site is to collect personal data about who they are.
In October 2025, Discord identified 70,000 users globally who potentially had their photo IDs exposed to hackers.
Discord said the data was accessed through a third-party service provider. Discord’s own support pages had said it did “not permanently store personal identity documents or your video selfies,” and that images of identity documents were “deleted directly after your age group is confirmed.” 70,000 government IDs leaked anyway.
The promise of deletion and the reality of third-party data handling are different things.
Breaches like Discord’s, and the one at the Tea app before it, highlight regulators’ inability to prevent data retention or enforce data deletion in practice.
That’s one breach. In 2024, Australia greenlit an age verification pilot, and within hours a mandated ID verification database for bars was breached. That same year, another ID verification service was breached, exposing private information collected on behalf of Uber, TikTok, and others.
The identity verification company AU10TIX left login credentials exposed online for more than a year, allowing access to data including users’ names, dates of birth, nationality, identification numbers, and the type of document uploaded, such as a driver’s license, along with images of those documents. This keeps happening because it has to keep happening. It’s the inevitable result of a system designed to aggregate the exact kind of data that attackers most want to steal.
The problem compounds when third parties are involved, which they always are. A platform doesn’t run its own verification infrastructure; it contracts it out. Under these laws, users wouldn’t just momentarily flash an ID the way they do at a liquor store. They’d submit it to third-party companies, raising serious questions about who receives, stores, and controls that data.
Each additional company in the chain is another breach target, another entity that may retain data beyond its stated policy, another entity potentially beyond the reach of US enforcement.
Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. Each retained record is a potential breach target. Scale that across millions of users, and the privacy risk is baked into how platforms operate.
There’s also the chilling effect that age verification creates before anyone’s data leaks. Anonymous and pseudonymous speech has always been part of how people participate in political life online.
Many of the world’s internet users live in countries where people have been arrested or imprisoned for posting content about political or social issues, and that number is growing as European countries and the UK join their ranks.
In environments like these, there is considerable risk in connecting a person’s online activities to a photo of their face or their identification card.
The US isn’t typically one of those countries. But the infrastructure built here gets exported, copied, and adapted. The choice to create centralized identity databases for platform access is a choice about what the global internet looks like, not just a matter of domestic policy.
The framework’s “privacy protective” framing doesn’t engage with any of this. It uses the phrase to describe requirements it knows will force platforms to collect government-issued identification or biometric data from every adult user, route that data through third-party vendors, and retain enough of it to prove compliance to regulators.
The same section requires AI platforms likely to reach minors to “implement features that reduce the risks of sexual exploitation and self-harm to minors.” That sounds reasonable until you ask how an AI platform is supposed to detect self-harm content in real time across millions of users. The answer is mass scanning of user conversations.
The framework doesn’t say “mass surveillance.” It says “implement features.” The effect is the same.