Key Points
- The Pentagon is demanding Anthropic strip Claude AI of safety restrictions so the model can serve any lawful application, including autonomous weapons and mass surveillance.
- Dario Amodei, Anthropic’s CEO, declined the request, citing threats to “democratic values.”
- The Pentagon issued a Friday 5 p.m. ultimatum: comply or lose access to defense contracting opportunities.
- Defense officials threatened to invoke the Defense Production Act to compel cooperation and to designate Anthropic a “supply chain risk.”
- Revised contract terms delivered Wednesday evening were dismissed as offering “virtually no progress.”
Dario Amodei, CEO of Anthropic, has taken a firm stance against Pentagon pressure to eliminate safety restrictions on Claude AI, jeopardizing a significant government partnership. Defense officials delivered an ultimatum with a Friday deadline, insisting the company accept “any lawful use” terms for its artificial intelligence platform.
At the heart of the confrontation are two issues: deploying Claude for widespread domestic surveillance and enabling fully autonomous weapons systems. Anthropic maintains these applications were never included in the original Pentagon agreements and should remain prohibited.
Earlier this week, Amodei held discussions with Defense Secretary Pete Hegseth. The talks concluded without resolution, prompting Pentagon officials to transmit updated contract terms Wednesday evening.
Anthropic dismissed the revised proposal. A company representative stated it represented “virtually no progress” and contained legal language allowing safeguards to “be disregarded at will.”
Pentagon officials have issued stark warnings. They’ve threatened to exclude Anthropic from defense work and classify the organization as a “supply chain risk” — a label normally applied to entities in adversarial countries.
A high-ranking Pentagon source told Reuters that Secretary Hegseth is prepared to invoke the Defense Production Act, legislation that allows the government to compel private companies to fulfill national defense requirements without a voluntary agreement. Legal scholars have raised doubts about whether such a use of the act would withstand judicial scrutiny.
Anthropic’s Position on Military AI Applications
In a published statement, Amodei argued that current AI technology remains “simply not reliable enough to power fully autonomous weapons.” He emphasized that deployment without human control endangers military personnel and civilian populations alike.
Regarding surveillance capabilities, he cautioned that AI systems can “assemble scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale.”
Anthropic expressed support for AI applications in lawful foreign intelligence operations while drawing a line at domestic surveillance.
Defense officials countered these concerns, with Undersecretary Emil Michael asserting that the applications Anthropic fears are already prohibited under existing law and military regulations. Michael directly challenged Amodei on the social media platform X, claiming he “wants nothing more than to try to personally control the US Military.”
Financial Implications for Anthropic
The economic consequences are substantial. Over the past year, the Pentagon has established framework agreements worth up to $200 million each with leading AI companies including Anthropic, OpenAI, and Google.
Receiving a supply chain risk designation would prevent defense contractors like Lockheed Martin from incorporating Anthropic’s technology into Pentagon initiatives. The defense contractor ecosystem encompasses approximately 60,000 companies.
Amodei indicated Anthropic proposed collaborating with Pentagon researchers to enhance AI dependability for defense applications, but officials rejected this alternative.
As of Thursday evening, negotiations remained deadlocked, with the Friday 5 p.m. deadline unchanged.