TL;DR
- The Trump administration faces a lawsuit from Anthropic following the Pentagon’s designation of the company as a “supply chain risk”
- This marks the first instance of an American firm receiving such a classification, typically applied to foreign entities
- The confrontation started when Defense Secretary Pete Hegseth required Anthropic to eliminate military-use limitations on its AI systems
- The AI company refused, saying it would not allow its technology to be used for lethal autonomous weapons or mass surveillance
- The company projects the blacklisting could cost it several billion dollars in 2026 revenue
Anthropic, the artificial intelligence firm behind the Claude AI assistant, has sued the Trump administration in federal court in California. The company filed its complaint on Monday after the Department of Defense formally designated it a “supply chain risk.”
The designation requires defense contractors working with the military to certify that they are not using Anthropic’s technology. It is an unprecedented action against a domestic American technology company.
The dispute began when Defense Secretary Pete Hegseth demanded that Anthropic remove all usage restrictions on its AI systems. The company refused, maintaining that Claude could not be used for lethal autonomous weapons or large-scale domestic surveillance programs.
These safeguards were written into Anthropic’s original agreements with government agencies. The company said Claude has not been tested for such applications and raised concerns about the model’s reliability in those contexts.
In July 2024, Anthropic secured a $200 million agreement with the Department of Defense. It also became the first AI research lab to deploy its technology on the Pentagon’s classified networks.
President Trump posted on social media directing federal agencies to “immediately cease” using Anthropic’s technology, calling Anthropic a “Radical Left AI company.”
Financial Impact
Krishna Rao, Anthropic’s Chief Financial Officer, said in a court filing that the government’s actions could cut the company’s 2026 revenue “by multiple billions of dollars.” The filing says existing federal contracts are already being terminated.
Commercial partnerships are also at risk. The company projects the situation could wipe out “hundreds of millions of dollars” in near-term revenue.
Anthropic has asked the court to strike down the supply chain risk designation and to grant a stay while the case is resolved. The company has also filed proceedings in the U.S. Court of Appeals in Washington, D.C.
The lawsuit names more than a dozen federal agencies as defendants, including the Treasury Department, the State Department, and the General Services Administration.
Industry Support
A coalition of more than 30 AI researchers and engineers from OpenAI and Google filed a legal brief on Monday supporting Anthropic’s position. Google’s chief scientist Jeff Dean was among the signatories.
The group warned that penalizing a leading American AI company could weaken the country’s competitive position in artificial intelligence.
Despite the blacklisting, Anthropic’s systems were reportedly still being used to support U.S. military operations in Iran, according to earlier CNBC reporting.
Amazon confirmed that Anthropic’s Claude remains available to AWS customers for non-defense applications.
Anthropic said it will continue engaging with government officials while pressing its legal challenge.