Key Takeaways
- The Pentagon terminated all Anthropic AI deployments across government agencies, designating the company as a national security supply-chain threat.
- OpenAI rapidly secured a Defense Department contract to deploy its AI systems in classified military environments immediately after Anthropic’s removal.
- A $200 million Pentagon contract with Anthropic collapsed after the AI company refused to authorize its technology for autonomous weapons and mass surveillance applications.
- OpenAI asserts its Pentagon agreement includes the same usage restrictions Anthropic insisted upon, though critics question whether these limits will be enforced.
- Anthropic intends to challenge the supply-chain risk designation in court, calling the government’s decision legally baseless.
The United States government terminated its partnership with Anthropic on Friday, designating the artificial intelligence company as a supply-chain security risk. Within hours, rival firm OpenAI announced a new contract to deploy its AI technology across the Pentagon’s classified systems.
President Donald Trump issued an executive order requiring all federal agencies to immediately discontinue use of Anthropic’s AI products. Government entities currently using Claude AI systems must transition to alternative providers within six months.
Defense Secretary Pete Hegseth announced on X that Anthropic poses a “Supply-Chain Risk to National Security.” This designation is typically reserved for companies tied to adversarial nations like China.
The ramifications extend beyond direct government contracts. Companies working with the Defense Department may need to verify they’ve completely removed Claude from their technology stacks. Tech giants like Nvidia, Amazon, and Google have invested in and partnered with Anthropic.
Anthropic had been the first AI company to successfully deploy its models within the Pentagon’s classified computing infrastructure. That partnership, established in July, was valued at up to $200 million.
The relationship deteriorated when Anthropic refused to guarantee its AI would be available for all lawful military uses. The company drew hard lines against autonomous weapons systems and mass domestic surveillance operations.
Military officials argued Anthropic should trust the Department of Defense to comply with existing laws. Anthropic CEO Dario Amodei said Thursday his company “cannot in good conscience” agree to those terms.
OpenAI Steps Into Pentagon Role
OpenAI CEO Sam Altman announced the new Pentagon partnership late Friday via X. He claimed the agreement contains the same prohibitions on mass surveillance and autonomous weapons that Anthropic had demanded.
Altman added that OpenAI has urged the administration to apply equivalent contractual terms to all AI vendors. Elon Musk’s xAI had already received clearance to operate within classified government systems.
OpenAI President Greg Brockman and his wife donated $25 million to a pro-Trump political action committee last year, and they continue to back Trump’s AI agenda financially in upcoming elections.
Anthropic Plans Court Challenge
Anthropic said it was “deeply saddened” by the designation and plans to fight it in court. The company called the decision “legally unsound” and warned it sets a dangerous precedent for American tech firms negotiating with the government.
The General Services Administration removed Anthropic from its approved vendor list for government procurement.
Some observers criticized OpenAI’s move. Democratic activist Christopher Hale posted on X that he was canceling his ChatGPT subscription and switching to Claude Pro Max.
Anthropic was founded in 2021 by former OpenAI researchers concerned about the company’s waning focus on safety. Both firms have raised tens of billions in funding and are considering initial public offerings.
A specific incident also fueled the dispute. After Claude was used in a Venezuela operation in January, an Anthropic employee reached out to a Palantir contact to ask how the technology was being used. Pentagon officials viewed this as inappropriate meddling.
Anthropic said the conversation was routine technical coordination between business partners.