Pentagon ‘Close’ to Punishing Anthropic AI as ‘Supply Chain Risk’ Over Claude’s Military Use Terms: Report

Editor’s Note

This article examines a significant shift in U.S. defense policy, where internal debates over the ethical use of AI in combat are reportedly leading the Pentagon to view a major domestic AI company as a potential security risk.

U.S. Defense Secretary Pete Hegseth arrives to administer the oath to U.S. Army National Guard soldiers during a re-enlistment ceremony at the base of the Washington Monument in Washington, D.C., U.S., February 6, 2026. REUTERS/Jonathan Ernst
A Deepening Dispute Over AI in Warfare

A deepening dispute over how artificial intelligence should be used in modern warfare is now pushing the Pentagon towards an extraordinary step: treating a leading American AI firm as a security liability.

Potential Designation as a ‘Supply Chain Risk’

Defense Secretary Pete Hegseth is “close” to cutting business ties with Anthropic and designating the company a “supply chain risk”, a senior Pentagon official told Axios — a move that would effectively penalise not only Anthropic but also any contractor that relies on its technology.

The same official described the prospective decision to Axios in blunt terms:

“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”

An Unprecedented Move Against a Domestic Firm

The “supply chain risk” designation the Pentagon is threatening to apply to Anthropic is rarely used against domestic firms and is more commonly associated with entities linked to hostile foreign powers. Applying it to Anthropic would represent a sharp escalation in the Trump administration’s push to ensure AI systems used by the military are available without restrictive conditions.

Published on: February 17, 2026