US Sets Strict AI Contract Rules Amid Pentagon-Anthropic Dispute

The Trump administration has proposed new guidelines for civilian AI contracts, requiring companies to grant the U.S. government a license to use their models for any lawful purpose. The move comes amid a conflict between the Pentagon and AI firm Anthropic, which has been designated a supply-chain risk, leading to a ban on its technology in military applications.

Devdiscourse News Desk | Updated: 07-03-2026 06:55 IST | Created: 07-03-2026 06:55 IST
The Trump administration has introduced stringent rules for civilian artificial intelligence contracts, requiring that companies permit any lawful use of their AI models. The decision follows a dispute between the Pentagon and Anthropic that has resulted in the firm's technology being prohibited in military applications, as reported by the Financial Times.

According to the report, the Pentagon formally designated Anthropic a 'supply-chain risk,' barring the use of its AI technology in governmental military contracts. The designation followed a prolonged disagreement over safeguards Anthropic insisted upon, which the Defense Department criticized as excessively restrictive.

The guidelines, reviewed by the Financial Times, require AI firms to grant the U.S. an irrevocable license for all legal purposes. They prohibit contractors from embedding partisan or ideological biases in AI outputs, and require disclosure of whether models comply with non-U.S. regulations, as part of a broader initiative to enhance AI service procurement.

(With inputs from agencies.)
