Key Takeaways
- Federal agencies received orders to cease all use of Anthropic’s AI systems, with the company designated as a national security supply-chain threat.
- Hours later, OpenAI secured a Pentagon agreement to integrate its AI technology into classified military systems.
- Anthropic lost its $200 million Pentagon partnership after declining to permit AI deployment for autonomous weaponry or mass domestic surveillance.
- OpenAI claims its Pentagon agreement incorporates identical limitations that Anthropic sought, though skeptics doubt enforcement.
- Anthropic announced plans to legally contest the supply-chain threat classification, describing it as without legal merit.
Federal authorities severed ties with artificial intelligence developer Anthropic on Friday, branding the company a supply-chain threat to national security. Before the day ended, competitor OpenAI revealed a fresh agreement to integrate its AI technology into the Pentagon’s classified infrastructure.
President Donald Trump directed all federal departments to immediately discontinue use of Anthropic’s technology. Agencies currently running the platform received a six-month deadline to migrate away from Claude.
Defense Secretary Pete Hegseth announced via X that Anthropic represents a “Supply-Chain Risk to National Security.” This classification typically applies to entities connected with hostile nations such as China.
The decision carries implications beyond government contracts. Defense contractors may need to demonstrate they’ve eliminated Claude from their operations entirely. Anthropic counts Nvidia, Amazon, and Google among its major investors and strategic partners.
Anthropic had been the first AI company to operate models within the Pentagon’s secure classified systems. The July agreement carried a potential value of $200 million.
Negotiations collapsed when Anthropic declined to promise unrestricted AI availability for all legal military applications. The company established firm boundaries against autonomous weapons systems and mass domestic surveillance programs.
Pentagon officials said Anthropic needed only to trust the military to comply with existing laws. Anthropic CEO Dario Amodei declared Thursday that the company “cannot in good conscience” accept such terms.
OpenAI Secures Pentagon Partnership
OpenAI CEO Sam Altman revealed the Pentagon partnership late Friday via X. He indicated the arrangement incorporates identical restrictions against mass surveillance and autonomous weapons that Anthropic had demanded.
Altman added that OpenAI requested the government extend equivalent terms to all AI companies. Elon Musk’s xAI had previously received military authorization for classified system deployment.
OpenAI President Greg Brockman and his spouse contributed $25 million to a Trump-aligned political action committee last year. They continue funding efforts supporting Trump’s artificial intelligence initiatives in forthcoming elections.
Anthropic Announces Legal Challenge
Anthropic said it was “deeply saddened” by the designation and intends to pursue legal remedies. The company characterized the decision as “legally unsound” and warned that it sets a troubling precedent for American technology companies negotiating with federal authorities.
The General Services Administration confirmed it will remove Anthropic from government agency procurement catalogs.
Some observers criticized OpenAI’s timing. Democratic politician Christopher Hale posted on X that he had canceled his ChatGPT subscription and switched to Claude Pro Max.
Anthropic emerged in 2021 when researchers departed OpenAI citing concerns about inadequate safety prioritization. Both organizations have secured tens of billions in recent funding and are evaluating potential initial public offerings.
The controversy also traces to a specific incident. After Claude was used during a January raid in Venezuela, an Anthropic staff member questioned a Palantir associate about how the technology had been applied. Pentagon leadership interpreted the inquiry as inappropriate interference.
Anthropic maintained the conversation represented standard technical coordination between partners.