AI at war: Anthropic defies Pentagon, Trump orders federal ban

by Greater Kashmir

Washington, Feb 28: In what may be the first public confrontation between Silicon Valley and Washington DC over the use of AI in warfare, Anthropic has refused to grant the Pentagon unrestricted access to its artificial intelligence systems, triggering a sharp response from President Donald Trump and raising wider questions about who ultimately controls battlefield AI.

According to TechCrunch, Anthropic’s chief executive Dario Amodei said that he “cannot in good conscience accede to [the Pentagon’s] request” for unfettered use of the company’s AI models. The remarks came less than 24 hours before a 5:01 p.m. Friday deadline imposed by US Defence Secretary Pete Hegseth.

In a written statement quoted by TechCrunch, Amodei stressed that “Anthropic understands that the Department of War, not private companies, makes military decisions.” However, he added that in a “narrow set of cases” AI systems could “undermine, rather than defend, democratic values,” and that some applications remain beyond what current technology can safely and reliably deliver.

At the centre of the dispute are two red lines that Anthropic insists it will not cross: the use of its AI tools for mass surveillance of Americans and the deployment of fully autonomous weapons without a human decision-maker “in the loop.” The Pentagon, by contrast, has argued it should be able to use Anthropic’s models for any lawful purpose and that such restrictions should not be dictated by a private company.

TechCrunch reported that contract language sent overnight by the Department of Defence made “virtually no progress” in preventing the use of Anthropic’s Claude model for mass surveillance or autonomous weapons. An Anthropic spokesperson told the outlet that new wording presented as a compromise was paired with legal clauses that would allow safeguards to be disregarded “at will.”

Despite recent public statements from the Department of War, an alternative name President Trump uses for the Defence Department, these two safeguards “have been the crux of our negotiations for months,” the spokesperson said.

The Pentagon has reportedly attempted to pressure Anthropic in two ways: by threatening to designate the firm a “supply chain risk”, a classification typically reserved for foreign adversaries, or by invoking the Defence Production Act (DPA), which grants the president authority to compel companies to prioritise national defence production.

Amodei highlighted the contradiction in these moves. “One labels us a security risk; the other labels Claude as essential to national security,” he wrote. While acknowledging it is the Department’s right to choose contractors aligned with its vision, he expressed hope that officials would reconsider, citing the “substantial value” Anthropic’s technology provides to US armed forces.

Anthropic is currently the only frontier AI laboratory with systems reportedly cleared for classified military use, although the Department of Defence is said to be preparing rival firm xAI for similar work.

As reported by the BBC, President Trump escalated the standoff on Friday. In a series of posts on his Truth Social platform, he announced that he would direct every federal agency to “immediately stop using” Anthropic’s technology.

“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote. He added that Anthropic’s tools would be phased out of all government work over the next six months and warned the company to be “helpful during this phase out period” or face “major civil and criminal consequences.”

The President also labelled Anthropic “woke” and accused it of being an “out-of-control, Radical Left AI company.” In a further move, Defence Secretary Hegseth said on X that Anthropic would be “immediately” designated a supply chain risk, effectively barring military contractors from engaging in commercial activity with the firm. If enforced, it would mark the first time a domestic US company has been given such a designation.

Hegseth had earlier summoned Amodei to Washington, where discussions reportedly ended in two ultimatums: grant the Department “any lawful use” of Anthropic’s tools, or face invocation of the DPA and a security-risk label. Amodei responded that he would rather end the partnership than compromise on the company’s safeguards.

Before Trump’s announcement, Anthropic had said that if the Department chose to discontinue its services, the firm would “work to enable a smooth transition to another provider,” avoiding disruption to military planning or critical missions.

The dispute has also drawn industry-wide attention. According to the BBC, Sam Altman, chief executive of OpenAI and a long-time rival of Amodei, circulated a memo to staff expressing support for similar red lines. Altman wrote that any OpenAI defence contracts would reject uses that are “unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”

Altman and Amodei remain rivals and share a complicated history: Amodei rose to prominence as an early OpenAI employee before departing with colleagues to found Anthropic following disagreements over governance and direction. The two firms now compete directly in the fast-evolving AI marketplace.

In his memo, Altman reportedly admitted he did not fully understand how Anthropic’s original deal with the Pentagon and data analytics firm Palantir had been structured, but emphasised that the issue was no longer confined to a single company. “This is no longer just an issue between Anthropic and the DoW; this is an issue for the whole industry,” he wrote.

International media reported that labour groups have also entered the fray. On Friday morning, representatives of roughly 700,000 tech workers at Amazon, Google and Microsoft signed an open letter urging their employers to “refuse to comply” with the Pentagon’s demands. The Alphabet Workers Union declared in a separate statement that “tech workers are united in our stance that our employers should not be in the business of war.”

A former Defence Department official, speaking anonymously to the BBC, suggested Anthropic may hold the stronger position. The DoD’s legal basis for invoking the Defence Production Act or branding the firm a supply chain risk was described as “extremely flimsy.” Given Anthropic’s soaring valuation and diversified commercial base, the official added, the company “simply does not need the money.”

For now, both sides appear prepared to part ways. As one observer summarised the tone of Amodei’s remarks to TechCrunch: the message was less confrontational than principled — a signal that Anthropic is willing to disengage without rancour if its safeguards cannot be respected.

The clash marks a defining moment for observers who argue that autonomous weapons must always remain under human control as military AI governance takes shape. It is a test not only of contractual language, but of whether ethical boundaries drawn in corporate boardrooms can withstand the demands of national security. (Inputs from TechCrunch & BBC).