The legal battle between Anthropic and the Pentagon took a dramatic turn on Friday when the AI company submitted two sworn declarations to a California federal court, directly contradicting the government's claim that the company poses an unacceptable risk to national security. The filings reveal a timeline that raises uncomfortable questions about whether the Pentagon's decision to cut ties with Anthropic was driven by genuine security concerns or by political pressure from the White House.
What the Filings Reveal
The declarations come from two senior Anthropic officials: Sarah Heck, the company's Head of Policy, and Thiyagu Ramasamy, its Head of Public Sector. Both declarations were filed alongside Anthropic's reply brief in its lawsuit against the Department of Defense, ahead of a hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco. The dispute dates back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were severing the government's relationship with Anthropic after the company declined to allow unrestricted military use of its AI technology.
The Damaging Email
Perhaps the most striking detail in the filings is an email sent on March 4 by Under Secretary Emil Michael to Anthropic CEO Dario Amodei. In that message, Michael told Amodei that the two sides were very close to agreement on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans. The email was sent just one day after the Pentagon formally finalized its supply-chain risk designation against the company, the first time such a designation has ever been applied to an American firm. The timing is hard to square: if Anthropic's stance on these issues was truly what made it dangerous, why was the Pentagon's own official saying the two sides were nearly aligned on those very same issues?
Anthropic Says the Claims Are Fabricated
Heck's declaration takes direct aim at what she describes as a central falsehood in the government's case: the assertion that Anthropic demanded an approval role over military operations. She states unequivocally that neither she nor any other Anthropic employee ever made such a demand during months of negotiations with the Defense Department. She also claims that the Pentagon's concern about Anthropic potentially disabling or altering its technology during active military operations was never raised during those negotiations. Instead, she says, this argument appeared for the first time in the government's court filings, leaving Anthropic with no chance to respond before it became part of the public record.
The Technical Rebuttal
Ramasamy's declaration addresses the technical side of the government's case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government clients, including classified environments. At Anthropic, he built the team that brought its Claude models into national security and defense settings, work that included a $200 million Pentagon contract announced last summer. He argues that it is technically impossible for Anthropic to interfere with military operations once its models are deployed. The system runs inside an air-gapped, government-secured environment operated by a third-party contractor. There is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Anthropic cannot even see what government users are typing into the system, let alone extract that data. Any changes to the model would require the Pentagon's explicit approval and action.
The Security Clearance Question
Ramasamy also pushes back against the government's claim that Anthropic's hiring of foreign nationals constitutes a security risk. He notes that Anthropic employees have undergone the same security clearance vetting required for access to classified information, and says the company is, to his knowledge, the only AI firm where cleared personnel actually built the models designed to run in classified environments.
What Happens Next
The case heads to court on Tuesday, when Judge Lin will weigh whether the supply-chain risk designation amounts to government retaliation for Anthropic's public stance on AI safety, which the company argues violates the First Amendment, or a legitimate national security decision, as the government insists. The Pentagon's position is that Anthropic's refusal to permit all lawful military uses was a business decision, not protected speech. Anthropic's filings suggest the government's own actions tell a very different story. The outcome could set a precedent for how far the federal government can go in punishing technology companies whose values conflict with the administration's agenda.