Senate Judiciary Committee Advances Hawley's GUARD Act, Mandating ID Verification for AI Chatbot Users

Reclaim The Net

The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use them.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee, and its author celebrated the outcome in a tweet.

“My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X. “No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor.”

As usual, the framing is about children, but the result is age verification and digital ID for everyone.

Under the bill’s text, a “reasonable age verification measure” cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an “artificial intelligence chatbot” as any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That language reaches well beyond the companion apps the press conference focused on. It covers customer service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

Hawley described the legislation as a “targeted, tailored effort,” telling the committee, “We’re often told that this new dawning age of artificial intelligence is going to be a great age that will strengthen families and workers. I would just say that’s a choice, not an inevitability.”

Senator Richard Blumenthal of Connecticut, the lead Democratic co-sponsor, signed onto the bill alongside Senators Mark Warner, Chris Murphy, Katie Britt, and Mark Kelly. The bipartisan support means the bill arrives on the Senate floor with momentum that age-verification proposals usually lack.

What that floor vote would authorize is a national identity system for AI services.

The bill includes data-minimization language. It also requires periodic re-verification, which means the sensitive identity documents collected at signup either sit in a company’s database waiting for a breach or get re-uploaded on a schedule.

Both options are surveillance infrastructure.

Trade group NetChoice, opposing the bill before the committee vote, framed the data-collection problem in security terms. “NetChoice implores the Senate Judiciary Committee to safeguard Americans’ most secure documents and reject the GUARD Act,” said the group’s Patrick Bos.

“If implemented, such a broad and vague provision would force AI companies to collect and store highly sensitive personal data into honeypots ripe for cybercriminals to exploit through breaches, identity theft and fraud.”

Age-verification vendors have been breached repeatedly, exposing the government IDs and biometric scans of millions of users who handed them over to access entirely legal content. The GUARD Act would multiply those targets by routing every AI interaction in the country through similar collection systems.

The bill’s reach is what makes the privacy cost so steep. A teenager asking a chatbot for algebra help would need to be cleared through age verification, and so would the adult sitting next to them. A customer trying to fix a billing problem through a company’s automated assistant would face the same identity check.

Faced with the cost of building those systems and the threat of $100,000 per-offense penalties, smaller developers will plausibly block younger users entirely or strip their tools down until they no longer trigger the bill’s definitions. The compliance burden lands on everyone who uses these services, and the largest companies, the ones that can absorb verification infrastructure as a cost of doing business, end up consolidating the market.

The bill isn’t promoting parental supervision; it imposes a flat ban. The legislation contains no parental consent mechanism that would let a parent decide their fifteen-year-old can use a homework chatbot.

There is no appeals process for users wrongly flagged as underage by an algorithmic age-estimation system. A user judged by a verification service to be under 18 is locked out, period, regardless of what their parents think.

The criminal provisions are where the bill’s child-safety framing has the firmest grip. Companies that knowingly design or distribute chatbots that solicit sexually explicit content from minors, or that encourage suicide, self-injury, or imminent violence, would face fines of up to $100,000 per offense.

Those provisions respond directly to the cases that drove the bill, including testimony from parents whose children harmed themselves or died after extended interactions with AI companions. Several of those parents sat in the committee room during Thursday’s markup.

The question is whether a national ID-verification regime is what addresses them, or whether the bill uses the worst chatbot interactions as leverage to build identity infrastructure that reaches every chatbot, including the ones nobody is alleging caused harm.

The bill also arrives inside a larger legislative vehicle. Senator Marsha Blackburn intends to fold the GUARD Act into her TRUMP AI Act, which would carry President Trump’s National Framework on AI through Congress and preempt conflicting state AI laws.

The GUARD Act itself contains a similar preemption clause, displacing state laws that conflict with it while carving out room for states to legislate separately for children under 13. Federal preemption of state AI rules has been controversial.

The Senate rejected a previous attempt to fold broad preemption into a different bill earlier this year. The GUARD Act offers a narrower vehicle for the same outcome, packaged inside child-safety language that makes opposition politically expensive.

Blumenthal acknowledged that the unanimous committee vote is not the end of the process.

The bill faces the full Senate next, then the House. The pattern of recent age-verification legislation suggests the substantive privacy questions will keep being asked, and keep being answered with the argument that any cost is acceptable if children are invoked.

The infrastructure being authorized here, though, will not check whether a user is a child before it asks for their ID. It will ask everyone. That’s what the bill requires. It is also what the bill is likely for.