Pentagon and Anthropic Clash Over AI Ethics in Military Operations
The Pentagon may end its ties with Anthropic, an AI company, over the firm's restrictions on AI use in military operations. The Pentagon wants unrestricted usage; Anthropic insists on ethical limits. Talks continue amid rising tensions, and the U.S. military has previously used Anthropic's AI model Claude in a major operation.
The Pentagon is reportedly considering severing ties with Anthropic, an artificial intelligence company, over disagreements about how AI technologies may be deployed in military contexts. According to an Axios report, Anthropic has placed certain restrictions on its AI models, restrictions the Pentagon views as a hurdle.
The Pentagon is pressing four leading AI companies, Anthropic, OpenAI, Google, and xAI, to allow their models to be used for 'all lawful purposes.' Key areas of military interest include weapons development and intelligence collection. Despite this pressure, Anthropic has stood firm on its ethical constraints.
Anthropic's AI model, Claude, has been involved in significant operations, such as the capture of former Venezuelan President Nicolas Maduro. The company maintains, however, that its discussions with the U.S. government have focused primarily on usage policies, such as prohibitions on use in fully autonomous weapons and mass surveillance.
(With inputs from agencies.)