Microsoft Challenges Pentagon Over Anthropic AI Blacklisting

Microsoft is supporting Anthropic in opposing the Pentagon's designation of the AI company as a national security threat, which blocks its military work. The conflict arose after Anthropic refused to allow unrestricted military use of its AI model, Claude, leading to a lawsuit against the Trump administration.


Devdiscourse News Desk | San Francisco | Updated: 11-03-2026 20:05 IST | Created: 11-03-2026 20:05 IST

Microsoft is backing Anthropic in a legal challenge against the Pentagon's decision to classify the AI company as a supply chain risk, effectively barring it from military contracts. The designation was applied after disputes over Anthropic's control over its AI model Claude.

The Trump administration ordered federal agencies to cease using Claude, a decision Microsoft warns could have significant economic repercussions. The tech giant argues that the designation is being used inappropriately to settle a contractual dispute, and has asked a federal court to temporarily suspend it.

Microsoft also defends Anthropic's ethical stance against using AI for mass surveillance or autonomous warfare, saying it aligns with broader American values. The Pentagon has not commented on the ongoing legal proceedings.

(With inputs from agencies.)
