U.S. Tightens AI Contract Guidelines Amid Pentagon-Anthropic Conflict
The Trump administration has established stringent rules for civilian artificial intelligence contracts, demanding unrestricted legal use of models by the government. This comes during a dispute between the Pentagon and AI firm Anthropic, which was labeled a 'supply-chain risk' by the Department of Defense.
According to a Financial Times report, the new rules require companies seeking civilian AI contracts to permit any lawful government use of their models. The move underscores ongoing tensions between the Pentagon and Anthropic, which defense officials recently classified as a 'supply-chain risk'.
The conflict stems from a disagreement over Anthropic's safety measures, which the Department of Defense argues excessively limit government use of the company's models. A draft of the guidelines reviewed by the Financial Times specifies that AI companies seeking government contracts must grant the U.S. government an irrevocable license to use their models for all legal purposes.
The guidance is part of a wider initiative to strengthen AI procurement across the government, and it mirrors measures the Pentagon is weighing for its own military contracts. In the meantime, the General Services Administration (GSA) has terminated Anthropic's OneGov agreement, affecting the availability of its models to federal agencies.
(With inputs from agencies.)