TL;DR
- Sam Altman, OpenAI’s CEO, acknowledged the Pentagon partnership was hastily executed and appeared “opportunistic and sloppy”
- Contract revisions now explicitly prohibit using OpenAI’s technology for surveillance of American citizens
- Department of Defense has confirmed intelligence agencies including the NSA won’t have access to OpenAI’s systems
- The partnership was revealed shortly after Trump’s executive action blocking federal use of Anthropic’s products
- Altman has publicly advocated for extending identical contract opportunities to Anthropic
OpenAI Overhauls Pentagon Agreement Following Public Criticism
Sam Altman, the chief executive of OpenAI, has publicly acknowledged significant missteps in how his company rolled out its Department of Defense partnership. In what he characterized as an internal communication, shared on X, Altman conceded the organization “shouldn’t have rushed” the public announcement of the deal.
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” Altman stated.
The partnership disclosure came last Friday, mere hours following President Donald Trump’s directive instructing federal entities to discontinue Anthropic’s AI services. The announcement also coincided with U.S. military operations targeting Iran.
The controversial timing sparked widespread criticism across social media platforms. Numerous users reportedly abandoned ChatGPT in favor of Anthropic’s Claude assistant following the news.
OpenAI has entered negotiations with Pentagon officials to modify the agreement’s language. These amendments are designed to explicitly incorporate the company’s ethical guidelines into the binding contract.
A significant provision now specifies that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” Defense officials have additionally confirmed that intelligence organizations such as the NSA will not have access to OpenAI’s technologies.
Providing services to such intelligence entities would require separate contractual negotiations, Altman clarified.
How the Anthropic Dispute Set the Stage
These developments follow failed negotiations between Anthropic and military leadership. Anthropic had demanded assurances preventing its technology from supporting domestic surveillance operations or enabling fully autonomous weapon systems without meaningful human control.
Pete Hegseth, serving as Defense Secretary, announced Friday that Anthropic would receive a supply-chain threat classification after talks deteriorated. Government representatives had reportedly spent months criticizing Anthropic for what they viewed as excessive emphasis on AI safety protocols.
Public awareness of the conflict intensified after revelations that Anthropic’s Claude AI had supported U.S. military operations during a January mission targeting Venezuelan leader Nicolás Maduro. Anthropic notably refrained from public criticism of that deployment.
Notably, Anthropic had been the first AI lab to integrate its models into the Defense Department’s secure, classified infrastructure, under an agreement established last year.
Altman Calls for Equal Treatment of Anthropic
Altman’s statement also directly confronted the consequences facing Anthropic. He disclosed weekend conversations with government officials where he challenged the supply-chain designation.
“I reiterated that Anthropic should not be designated as a supply chain risk, and that we hope the Department of Defense offers them the same terms we’ve agreed to,” he stated.
Anthropic emerged in 2021 when former OpenAI researchers departed following fundamental disagreements regarding the organization’s strategic priorities.
The company has consistently marketed itself as prioritizing AI safety above commercial considerations. Pentagon officials have yet to publicly acknowledge Altman’s proposal for contractual parity.