Microsoft has drawn on its credibility as one of the Pentagon's most trusted technology partners to defend Anthropic in an extraordinary confrontation over the future of AI governance in national security. In a brief filed in San Francisco federal court, the company called for a temporary restraining order against the supply-chain risk designation, arguing that it threatens both commercial and defense technology supply chains. Amazon, Google, Apple, and OpenAI have backed Anthropic through a separate joint filing.
The confrontation began when Anthropic refused to allow its Claude AI to be used for mass surveillance or autonomous lethal weapons during negotiations over a $200 million Pentagon contract. After the talks collapsed, Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and Anthropic's government contracts began to be cancelled. Anthropic responded with simultaneous lawsuits in California and Washington, DC, challenging the designation.
Microsoft's military credibility gives its intervention exceptional weight: the company integrates Anthropic's technology into systems it supplies to the US military and holds a share of the Pentagon's $9 billion cloud computing contract, along with additional agreements with defense, intelligence, and civilian agencies. Microsoft has publicly called for a path forward in which the government and the technology sector cooperate to ensure advanced AI serves national security responsibly.
Anthropic's court filings argued that the supply-chain risk designation was unconstitutional retaliation for the company's publicly stated AI safety positions. The company disclosed that it does not believe Claude is currently safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. The Pentagon's technology chief has publicly foreclosed any possibility of renegotiation.
Separately, congressional Democrats have asked the Pentagon whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at a school, demanding details on AI targeting and human oversight. Their inquiries compound the pressure bearing on the Pentagon from multiple directions. Together, Microsoft's credibility-backed intervention, the industry coalition, and congressional scrutiny amount to an unprecedented accountability moment for the Pentagon's approach to AI governance.
