Key Takeaways
- The Pentagon’s Chief Technology Officer stated that Claude AI contains hardcoded policy preferences that may undermine military capabilities.
- Anthropic became the first U.S.-based company in history to receive a Defense Department supply chain risk designation.
- All defense contractors working with the Pentagon must now verify they aren’t utilizing Claude in their operations.
- On Monday, Anthropic filed federal litigation against the Trump administration, describing the action as “unprecedented and unlawful” with hundreds of millions at stake.
- Palantir CEO Alex Karp revealed his firm continues deploying Claude for American military missions despite the restriction.
Earlier this month, the Department of Defense took unprecedented action by classifying Anthropic as a supply chain security threat — marking the first time an American corporation has received this designation typically reserved for foreign competitors.
During a Thursday appearance on CNBC’s “Squawk Box,” Defense Department Chief Technology Officer Emil Michael outlined the rationale behind this extraordinary measure. According to Michael, the concern centers on Claude’s foundational “constitution” — Anthropic’s framework for governing AI behavior — which he believes embeds specific policy viewpoints that could influence military applications.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael said.
In January 2026, Anthropic released the latest iteration of Claude’s constitutional framework. According to the company, this document serves a “crucial role” in model development and “directly shapes Claude’s behavior.”
The supply chain classification requires all Pentagon vendors and defense contractors to formally attest that Claude is not incorporated into any deliverables or services provided to the military.
Michael said the move was not designed to punish the company, observing that government contracts represent just a “tiny fraction” of Anthropic’s total business.
Anthropic was founded in 2021 by researchers who departed OpenAI to establish their own venture. The startup has built a robust enterprise client base and secured its initial Defense Department agreements in its early years.
The company responded forcefully to the Pentagon’s decision. Anthropic initiated legal action against the Trump administration on Monday, characterizing the supply chain designation as “unprecedented and unlawful.”
According to court documents, Anthropic argues it faces “irreparable” damage, with contract values totaling hundreds of millions of dollars now uncertain.
Pentagon Refutes Claims of Direct Company Contact
Michael rejected Anthropic’s allegations that government officials were proactively contacting businesses to discourage Claude adoption. He characterized such assertions as mere “rumors.”
“The Department of War is not reaching out to companies to tell them what to do, so long as it’s not in our supply chain,” Michael said.
He acknowledged that phasing out Claude won’t happen instantaneously. The Pentagon has established a transition strategy, he explained, recognizing that extracting deeply embedded AI systems requires considerably more effort than uninstalling standard software.
Military Operations Continue Using Claude Despite Ban
Notwithstanding the official designation, Claude remains operational in certain military capacities. CNBC has previously documented the AI’s deployment supporting American military activities in Iran.
On Thursday, Palantir CEO Alex Karp — whose company ranks among America’s premier defense contractors — acknowledged his organization continues utilizing Claude.
Michael reiterated that the department cannot “just rip out” Anthropic’s technology immediately and confirmed that the phased transition is in progress.