UPDATE 5-Pentagon designates Anthropic a supply chain risk

Reuters | Updated: 06-03-2026 09:16 IST | Created: 06-03-2026 09:16 IST

The Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic on Thursday, limiting use of a technology that a source said was being used for military operations in Iran.

The "supply-chain risk" label, confirmed in a statement by Anthropic, is effective immediately and bars government contractors from using Anthropic's technology in their work for the U.S. military. But companies can still use Anthropic's Claude in other projects unrelated to the Pentagon, CEO Dario Amodei wrote in the statement. He said the designation has "a narrow scope" and that the restrictions only apply to the usage of Anthropic AI in Pentagon contracts.

"It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts."

The risk designation follows a months-long dispute over the company's insistence on safeguards that the Defense Department, which the Trump administration calls the Department of War, said went too far. In his statement, Amodei reiterated that the company would challenge the designation in court.

In recent days, Anthropic and the Pentagon have discussed possible plans for the Pentagon to stop using Claude, Amodei said in the Thursday statement. The two sides have talked about how Anthropic might still work with the military without dismantling its safeguards, he added. However, in a post on X late Thursday, Pentagon Chief Technology Officer Emil Michael said there was no active Department of Defense negotiation with Anthropic.

Amodei also apologized for an internal memo published Wednesday by the tech news site The Information. In the memo, originally written last Friday, Amodei said Pentagon officials didn't like the company in part because "we haven't given dictator-style praise to Trump." The internal memo's publication came as Anthropic's investors were racing to contain the damage caused by the company's fallout with the Pentagon.

The Defense Department did not immediately respond to requests for comment. The action represented an extraordinary rebuke by the United States against an American tech company that moved earlier than its rivals to work with the Pentagon. The action comes as the department continues to rely on Anthropic's technology to provide support for military operations, including in Iran, according to a person familiar with the matter.

Claude is likely being used to analyze intelligence and assist with operational planning. A Microsoft spokesperson said that the company's lawyers had studied the designation and concluded that: "Anthropic products, including Claude, can remain available to our customers—other than the Department of War—through platforms such as M365, GitHub, and Microsoft's AI Foundry."

Microsoft can continue to work with Anthropic on non-defense-related projects, the spokesperson added. Amazon, an investor in Anthropic and a significant customer of the company's Claude model, did not immediately respond to a request for comment outside regular business hours.

Palantir's Maven Smart Systems – a software platform that supplies militaries with intelligence analysis and weapons targeting – uses multiple prompts and workflows that were built using Anthropic's Claude Code, Reuters earlier reported. Anthropic was more aggressive than its rivals in courting U.S. national-security officials. But the company and the Pentagon have been at odds for months over how the military can use its technology on the battlefield. This conflict erupted into public view earlier this year.

Anthropic has refused to back down on its bans against using Claude to power autonomous weapons and mass U.S. surveillance. The Pentagon has pushed back, saying it should be able to use this technology as needed, so long as it complies with U.S. law. The "supply-chain risk" label now gives Anthropic a status that Washington until now had typically reserved for foreign adversaries. Similar U.S. action was taken to remove Chinese tech giant Huawei from the Pentagon's supply chains.

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)
