TLDR
- Pentagon pressures Anthropic to strip Claude AI of safety limitations for unrestricted defense applications, including autonomous weapons systems and mass surveillance programs.
- Anthropic CEO Dario Amodei refused the demands, warning of threats to democratic values.
- Defense Department set Friday 5pm deadline for compliance or removal from all military contracts.
- Pentagon threatens Defense Production Act enforcement and designation as supply chain security risk.
- Revised contract terms submitted Wednesday night rejected by Anthropic as insufficient.
Anthropic CEO Dario Amodei continues to resist Pentagon demands to strip safety protections from Claude AI, even though doing so jeopardizes a major government contract. Defense officials have set a hard Friday deadline, requiring the company to agree to “any lawful use” of its artificial intelligence technology.
The dispute centers on two specific use cases: using Claude for mass domestic surveillance programs and powering fully autonomous weapons systems. Anthropic maintains that neither capability was part of its original Pentagon agreements and should not be added now.
Amodei met with Defense Secretary Pete Hegseth earlier this week. The discussions ended without agreement, leading the Pentagon to deliver modified contract language Wednesday evening.
Anthropic rejected the changes. A company spokesperson said they showed “virtually no progress” and included provisions allowing safety restrictions to “be overridden at will.”
Pentagon leadership has issued stark warnings. Defense officials said they would remove Anthropic from all military contracts and designate the company as a “supply chain risk” — a label typically reserved for foreign adversaries.
A senior Pentagon official also told Reuters that Secretary Hegseth may invoke the Defense Production Act. This law allows the government to force companies to participate in national security projects, with or without their consent. Constitutional law experts have questioned whether such use of the statute would be legally valid.
What Anthropic Says About AI Weapons and Surveillance
In a public statement, Amodei contended that today’s AI systems are “simply not reliable enough to power fully autonomous weapons.” He stressed that deploying them without human oversight puts both service members and civilians at risk.
On surveillance, he warned that AI can “combine disparate, seemingly harmless information into a complete portrait of anyone’s life — automatically and at massive scale.”
Anthropic said it supports AI use for lawful foreign intelligence gathering, but opposes domestic mass surveillance programs.
Pentagon officials pushed back, with Undersecretary Emil Michael arguing that the uses Anthropic objects to are already forbidden under current laws and military policies. Michael confronted Amodei on X, alleging he “wants nothing more than to try to personally control the US Military.”
The Business Risk for Anthropic
The financial stakes are considerable. In the past year, the Pentagon has signed $200 million framework contracts with leading AI firms, including Anthropic, OpenAI, and Google.
If Anthropic receives a supply chain risk label, defense contractors like Lockheed Martin would be barred from using Anthropic’s technology on Department of Defense projects. The defense contracting network includes roughly 60,000 companies.
Amodei said Anthropic offered to work with Pentagon officials on research initiatives to improve AI reliability for military purposes, but the offer was rejected.
As of Thursday night, the standoff continues, with the Friday 5 p.m. deadline still in effect.