US judge blocks Pentagon from labelling Anthropic supply chain risk
A US judge has temporarily blocked the Pentagon's decision to designate Anthropic a "supply chain risk." The ruling came after the Dario Amodei-led firm challenged the designation in a California federal court; the dispute stems from disagreements over unrestricted AI use.
by Armaan Agarwal · India Today

In Short
- US judge blocks Pentagon’s supply chain risk label on Anthropic
- Judge says the label punishes Anthropic for disagreeing with US government
- Anthropic received the label after a fallout with the Pentagon over unrestricted AI use
Anthropic has bagged a major win in its tussle with the Pentagon. A US federal judge has temporarily blocked the Pentagon from labelling the AI startup a "supply chain risk." The Dario Amodei-led firm was given the designation by the US Department of Defense following disagreements over unrestricted AI use.
US District Judge Rita Lin issued a 43-page order temporarily blocking the supply chain risk designation. However, the order will take effect only after seven days, giving the administration a chance to appeal.
Anthropic spokesperson Danielle Cohen said in a statement, "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
Why did US judge block supply chain risk label for Anthropic?
In the order, Judge Lin stated that the supply chain risk label was being used as a way to "punish" Anthropic "for criticizing the government's contracting position in the press."
Anthropic's legal action, filed in a California federal court, argues that Defense Secretary Pete Hegseth exceeded his authority by branding the company a national security risk. The label, typically reserved for foreign firms linked to adversaries such as Huawei, restricts Anthropic from certain military contracts.
The judge added that the US administration could not put such a label on an American firm for such a disagreement. The order stated, “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
The lawsuit claims the government retaliated against Anthropic's public stance on AI safety, infringing on its First Amendment right to free speech. The company also contends it was denied due process, violating its Fifth Amendment rights, as it was not given an opportunity to dispute the designation. Anthropic further claims the designation could cost it billions of dollars in lost revenue.
Anthropic has filed a separate lawsuit in Washington, DC, challenging the Pentagon's supply chain risk tag, which could exclude the AI startup from civilian government contracts.
Why are Anthropic and the US Pentagon fighting in court?
Anthropic had walked away from a deal with the Pentagon for the use of its AI systems on classified networks, after the AI startup became concerned about potential use of AI for mass domestic surveillance or the development of autonomous weapons.
The Dario Amodei-led startup wanted guardrails in the agreement that would rule out such possibilities. The US Department of Defense, on the other hand, insisted it would use AI for "all lawful purposes."
Anthropic’s AI systems remain in use by the Pentagon. The US military is reportedly using Claude in its strikes on Iran as part of Operation Epic Fury. However, the US government has ordered a six-month transition period during which Anthropic’s systems will be phased out in favour of OpenAI’s models. The Sam Altman-led firm signed an agreement with the US Department of Defense just hours after Anthropic pulled out.
Do note that while the supply chain risk label prohibits all US defense suppliers from working with Anthropic, Microsoft and Google, two major tech firms with government contracts, have continued to provide access to Anthropic’s products via their consumer platforms.
- Ends