In a bold legal move, Microsoft has filed an amicus brief with a federal court in San Francisco in support of Anthropic, the artificial intelligence company currently locked in a bitter dispute with the US Department of Defense. Microsoft contended that a temporary restraining order was necessary to shield the many suppliers and government contractors whose operations are built on Anthropic’s technology. The filing signals how deeply intertwined the commercial AI sector has become with national defense infrastructure.
Anthropic filed two simultaneous lawsuits against the Pentagon after the department officially designated the company a supply-chain risk, a label never before applied to an American company. Anthropic argued the designation amounted to ideological retaliation for its public stance on AI safety, particularly its refusal to allow its Claude model to be used for mass domestic surveillance or autonomous lethal weapons systems. The Pentagon’s chief technology officer publicly stated there was no chance of renegotiating with Anthropic following the designation.
Microsoft’s relationship with the US military is extensive and long-standing. The company is a partner in the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract and has signed several additional software and enterprise service agreements worth billions more. A statement from Microsoft emphasized that reliable access to cutting-edge technology and the responsible use of AI are goals that government, industry, and the public must pursue together.
The failed negotiations that sparked this conflict centered on a $200 million contract to deploy Anthropic’s AI on classified military systems at a time when the US was preparing military operations against Iran. Anthropic’s insistence on ethical usage restrictions proved to be a dealbreaker for Pentagon officials. Defense Secretary Pete Hegseth’s subsequent supply-chain risk designation led to the immediate cancellation of several of Anthropic’s existing government contracts.
This case arrives at a moment of intense scrutiny over the role of artificial intelligence in military operations. House Democrats have written to the Pentagon demanding answers about whether AI was used in a strike on an Iranian elementary school that reportedly killed at least 175 people. The convergence of these events has put AI ethics, military accountability, and corporate responsibility at the center of a national debate.
