Anthropic's Claude claws its way towards the top of the AI market
Who knew questioning authority and signaling virtue would lead to growth?
by Thomas Claburn · The Register

Anthropic has been killing it in the business market, success that appears to be at least partially attributable to pushback against the Pentagon.
The maker of the Claude family of models saw business software subscriptions grow 4.9 percent month over month in February, according to AI fintech biz Ramp, a period during which OpenAI's subscription share fell 1.5 percent.
In January, Anthropic's subscription share grew 2.8 percentage points while OpenAI adoption slipped 0.9 percentage points, Ramp said.
OpenAI still leads in overall business subscription market share, 34.4 percent to 24.4 percent, but Anthropic has been catching up fast.
"Nearly one in four businesses on Ramp now pays for Anthropic (a year ago, it was one in 25)," said Ara Kharazian, an economist for Ramp, in a blog post. "OpenAI's 1.5 percent decline was the largest in any single month for any AI model company since we started tracking business AI adoption."
According to Kharazian, businesses selecting AI services for the first time now choose Anthropic about 70 percent of the time.
Coincidentally, OpenAI is reportedly revising its strategy to focus on selling AI to businesses and software developers, the very markets in which Anthropic appears to be prospering.
It looks like the consumer market is amplifying business preferences. In late January, Reuters reported on a rift between Anthropic and the Defense Department over Anthropic's refusal to remove model guardrails to make its models more amenable to military applications.
Having positioned itself as the responsible AI company, only to walk that back a little amid its government negotiations, Anthropic pushed back publicly at the end of February.
That didn't endear Anthropic to the Trump administration. On March 4, the AI biz said it had received notice that Washington had designated it a supply chain risk to US national security, and that it had filed lawsuits challenging its excommunication by the Defense Department.
While Anthropic's public commitment to responsible AI may be somewhat overstated – its models were reportedly used in the US special military operation to capture former Venezuelan President Nicolás Maduro in January – its revival of Google's long-abandoned "Don't be evil" messaging has raised its profile among people who take such sentiment at face value.
As noted by app tracking biz Sensor Tower, Anthropic's Defense Department dustup coincided with a surge in Claude installations and in ChatGPT removals. OpenAI's decision to do business with the Pentagon and CEO Sam Altman's acknowledgement that OpenAI handled the situation poorly probably didn't help.
Pointing to public endorsements of Claude by celebrity musician Katy Perry and US Senator Brian Schatz that followed Anthropic's disagreement with the Defense Department, Kharazian said, "Anthropic positioned itself differently, and a certain class of user noticed."
That same class of people – those who can afford to pay $20 or $200 per month for access to Claude – may also have some animus toward OpenAI for putting ads in ChatGPT.
Anthropic in February talked up its growing annual revenue run rate, now standing at $14 billion, as it celebrated raising another $30 billion to continue operating. It's worth noting, however, that in a court filing [PDF] from nine days ago, Anthropic CFO Krishna Rao said the company has brought in more than $5 billion in revenue since entering the commercial market.
Anthropic did not respond to a request for comment. ®