Cloudflare, GoDaddy team up to curb AI bot brigades

Pair backs scraper blocking and standards to separate trusted agents from bad bots

The Register

Citing the need to adapt to an internet increasingly built to serve AI agents at the expense of site owners, Cloudflare and GoDaddy are partnering on efforts to control how AIs crawl the web and interact with web content.

The content delivery network and the web host on Tuesday announced that they would help website owners gain better control over their relationship with AI, primarily through GoDaddy integrating Cloudflare's AI Crawl Control utility into its platform. That tool, as the pair explained in a press release, lets owners manage how AI interacts with their websites, allowing, blocking, or requiring payment from crawlers for access.
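The press release doesn't detail how AI Crawl Control implements those three outcomes, but the allow/block/charge decision can be sketched in miniature. This is an illustrative toy, not Cloudflare's implementation; the policy table and the use of HTTP 402 Payment Required for the "charge" case are assumptions for the sake of the example.

```python
# Toy policy in the spirit of allow / block / charge decisions for AI
# crawlers. Bot names and policies here are illustrative assumptions,
# not Cloudflare's actual configuration.
CRAWLER_POLICY = {
    "GPTBot": "charge",   # hypothetical: demand payment for access
    "CCBot": "block",     # hypothetical: deny outright
    "Googlebot": "allow",
}

# Map each policy outcome to an HTTP status code; 402 Payment Required
# is the natural (if rarely used) fit for "pay to crawl".
STATUS = {"allow": 200, "block": 403, "charge": 402}

def decide(user_agent: str) -> int:
    """Return an HTTP status code for a request, based on its User-Agent."""
    for bot, policy in CRAWLER_POLICY.items():
        if bot.lower() in user_agent.lower():
            return STATUS[policy]
    return STATUS["allow"]  # unlisted visitors pass through
```

In practice the hard part is everything this sketch skips: User-Agent strings are trivially spoofed, which is exactly why the identity standards discussed below matter.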

"By putting tools like AI Crawl Control and open standards into the hands of website owners, we are providing essential underpinnings for a new Internet business model," Cloudflare chief strategy officer Stephanie Cohen said of the move. "We want to ensure that every creator has the tools to verify who is interacting with their site, while giving legitimate AI agents a secure, transparent way to participate."

Cloudflare has been beating the drum over the need to control bots' access to websites and web content, and has rolled out several measures aimed at restricting unauthorized scraping in recent years. In 2025, it rolled out an AI that it said would trap and waste the time of unauthorized AI scrapers by feeding them endless garbage, and it has previously pushed to require AI bots to pay for access to websites. 

Charging bots was one of Cloudflare's ideas to help protect website operators, who, the CDN has noted, are losing substantial traffic-driven revenue now that many search visitors are fed the answers they're looking for by an AI, like Google's AI Overviews, and are therefore less likely to click through to the original source.

The pair have skin in this game, clearly, as without website owners making money, they're unlikely to get paid themselves.

Good bots: The other side of the coin

If you set up roadblocks to stop bad bots, which the pair note is the point of this endeavor, you're naturally going to catch a lot of good bots in the process. That's why Cloudflare and GoDaddy have also expressed support for new standards they believe will keep good bots in operation while restricting the reach of bad ones. 

The pair expressed support for the Agent Name Service (ANS), a proposal made last year that would act like a DNS system for AI agents, creating an open, protocol-agnostic registry of AI agents that would allow them to operate with a degree of trust and assurance by linking them to controllers, among other things. 

ANS was ultimately built by GoDaddy, we note, and is available on GitHub. The pair also threw their weight behind Cloudflare's Web Bot Auth method, which relies on cryptographic signatures in HTTP messages to verify that a request really comes from the bot it claims to come from. The two technologies, the pair said, allow AI agents to identify themselves through cryptographically signed requests.
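The mechanism underneath "cryptographically signed requests" can be sketched briefly: the bot signs material derived from its HTTP message with a private key, and the site verifies the signature against the bot's published public key. The following is a minimal sketch of that idea using Ed25519 via the third-party `cryptography` package; it is not the exact Web Bot Auth wire format, and the signature base shown is a simplified stand-in.

```python
# Sketch of signing and verifying a request, the mechanism behind
# Web Bot Auth-style identification. Not the real wire format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The bot's keypair. In practice the public key would be discoverable
# out of band, e.g. via a well-known URL or a registry such as ANS.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Simplified stand-in for a signature base covering the signed parts
# of the HTTP message (authority, path, agent identity).
signature_base = (
    b'"@authority": example.com\n'
    b'"@path": /article\n'
    b'"signature-agent": crawler.example'
)

signature = private_key.sign(signature_base)

def is_authentic(base: bytes, sig: bytes) -> bool:
    """Verify the signature; a tampered message fails verification."""
    try:
        public_key.verify(sig, base)
        return True
    except InvalidSignature:
        return False
```

Because only the key holder can produce a valid signature, a site can distinguish a bot that provably controls a registered identity from one merely claiming a User-Agent string.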

"With an open ecosystem of standards and methods for identifying agents, the agentic web can evolve with transparency built in by default," the pair said. 

Cloudflare and GoDaddy also noted that they hope to ensure "a fair value exchange in the Answer Engine era," meaning essentially that they want the internet to remain a human-first place where us meatbags are adequately compensated for our contributions to the digital world. 

Support for these standards, of course, doesn't mean everyone will adopt them, but if both Cloudflare and GoDaddy customers have access, then a good deal of the web is likely covered. It's estimated that around 20 percent of websites worldwide are behind a Cloudflare reverse proxy, and GoDaddy is a massive web hosting provider, too. 

That's not critical mass for AI agent standards to be considered … well, standard, but it's a start. It's also a helluva lot more likely to succeed than Sam Altman's eyeball-scan-for-an-AI-agent-license scheme, so there's that, too. 

Either way, something has to happen soon: MIT CSAIL's 2025 AI Agent Index, published in February, found that AI bots regularly ignore robots.txt restrictions, and that few of their developers have released any safety data. Universally agreed-upon rules are needed as AI agents proliferate and change the shape of the internet. ®
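Part of the problem the MIT index highlights is structural: robots.txt is purely advisory, so compliance depends entirely on the crawler bothering to check it. A short sketch using Python's standard-library parser shows the mechanism; the rules below are illustrative, though GPTBot is a real published crawler token.

```python
# robots.txt is honor-system only: a compliant crawler runs a check
# like this before fetching, while a non-compliant one simply doesn't.
from urllib.robotparser import RobotFileParser

# Illustrative rules: shut out GPTBot, allow everyone else.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

Nothing enforces the `False`, which is precisely why the cryptographic identity schemes above, rather than polite text files, are where Cloudflare and GoDaddy are placing their bets.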