Microsoft has filed a formal court brief warning that the Pentagon's decision to label Anthropic a supply-chain risk could have devastating cascading effects across the defense technology ecosystem. The filing, submitted to a federal court in San Francisco, sought a temporary restraining order and was backed by separate filings from Google, Amazon, Apple, and OpenAI. The breadth of this coordinated response underscores how widely the effects of the Pentagon's designation are expected to be felt.
The Pentagon's decision to blacklist Anthropic came after the AI company refused to allow its technology to be used for mass surveillance of American citizens or to control autonomous lethal weapons during negotiations over a $200 million contract. Defense Secretary Pete Hegseth declared Anthropic a supply-chain risk, and cancellation of the company's existing government contracts began shortly thereafter. The designation, never before used against a US company, effectively bars Anthropic from federal work.
Microsoft, which uses Anthropic's AI tools in military systems it supplies to the federal government, has a direct financial and operational interest in the outcome of this case. The company holds a share of the Pentagon's $9 billion Joint Warfighting Cloud Capability contract and maintains additional agreements with defense, intelligence, and civilian agencies. Microsoft publicly urged the government and technology sector to work together to ensure access to advanced AI while preventing its misuse for surveillance or warfare without human control.
Anthropic's legal filings argue that the supply-chain risk designation is an unconstitutional attack on its free speech rights, because the label was applied in apparent retaliation for the company's public AI safety advocacy. The company stated in court documents that it genuinely does not believe Claude is ready to safely support autonomous lethal operations, which is why it sought the protective clauses in the original contract. Anthropic filed suit simultaneously in California and Washington, DC, to maximize its legal options.
The case is unfolding as Congress seeks answers about whether AI played a role in a deadly US military strike in Iran that reportedly killed over 175 civilians at a school. Lawmakers have submitted formal questions to the Pentagon about AI’s role in targeting decisions and the extent of human oversight. The intersection of this tragedy with Anthropic’s legal challenge has placed AI governance at the very center of American national security policy.