- Vulnerable U
Pentagon Takes First Step Toward Blacklisting Anthropic

Axios says the Pentagon asked two major defense contractors to provide an assessment of how dependent they are on Claude. That’s the early step you take when you are thinking about calling a vendor a “supply-chain risk,” a label usually used for companies from adversarial countries. Doing it to a leading American AI company that the military itself is already relying on would be a pretty insane precedent.
The Pentagon is described as impressed with Claude’s performance, but furious that Anthropic refuses to lift safeguards and let the military use it for “all lawful purposes.” Anthropic’s position is that it will not allow Claude to be used for mass surveillance of Americans or to develop weapons that fire without human involvement.
Axios reports that during a tense meeting, Hegseth gave Anthropic CEO Dario Amodei a deadline of 5:01 p.m. Friday. After that, the administration would either invoke the Defense Production Act to compel Anthropic to tailor Claude to military needs, or declare Anthropic a supply-chain risk.
I cannot get over how those threats sit next to each other. Is Claude so essential that you will reach for an emergency-style statute to force access, or is it such a risk that nobody in government should be allowed to use it?
More Leverage Than Policy
The supply-chain threat is the big stick, and the Defense Production Act threat is the bigger stick. Both are meant to force one outcome: remove the safeguards, sign the “all lawful use” language, and stop acting like you have terms of use that constrain what government can do.
Axios also notes the Pentagon plans to reach out to all major primes to assess their exposure. That implies Claude is already embedded in defense workflows even when it is not a direct contract line item. The Pentagon is checking how badly it would hurt its own contractors if it followed through on the blacklist threat.
That is the meme here: it’s the ol’ “I consent!” “Isn’t there somebody you forgot to ask?” First: “We are going to label them a supply-chain risk.” Then: “Wait, never mind, we need to ask Lockheed and everyone else how much we would break if we did that.” It’s a self-inflicted dilemma created by trying to bully a vendor you are already dependent on.
At the same time, other vendors appear more willing to say yes. Axios mentions xAI agreed to classified use under the “all lawful purposes” framing, and that Google and OpenAI are also in negotiations, with the Pentagon insisting they would have to lift safeguards to get those contracts. This becomes a market-shaping mechanism: comply and you get deals, refuse and you get threatened. Who is shocked that Elon would kiss the ring?
Anthropic’s Multi-Front Battle
The threats from Hegseth come as Anthropic fights a battle on another front, claiming it identified industrial-scale campaigns by three Chinese labs (DeepSeek, Moonshot, and MiniMax) trying to distill Claude by hammering it with millions of requests. Anthropic cites 16 million exchanges routed through about 24,000 fraudulent accounts. The basic idea of distillation is simple: train a weaker model on the outputs of a stronger one. Labs do it legitimately on their own models to make smaller, cheaper versions. The allegation here is that competitors did it to pull capabilities out of Claude at a fraction of the cost.
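To make the mechanics concrete, here is a toy sketch of that distillation loop in plain Python. Everything in it is invented for illustration: the “teacher” is a stand-in for a frontier model’s API, and the student, learning rate, and data sizes are arbitrary. The point is only the shape of the attack: query at scale, log the outputs, then fit your own model to them.

```python
import random

random.seed(0)

# Teacher: a fixed function standing in for a frontier model's API.
# Its internals (the coefficients 3.0 and 1.0) are unknown to the student.
def teacher(x):
    return 3.0 * x + 1.0

# Step 1: query the teacher at scale and log its outputs
# (the "millions of requests" step, at toy scale).
data = [(x, teacher(x)) for x in (random.uniform(-1, 1) for _ in range(1000))]

# Step 2: fit a student model to the logged outputs via gradient descent
# on squared error. No access to the teacher's weights is needed.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# The student now reproduces the teacher's behavior: w ≈ 3.0, b ≈ 1.0.
```

Real distillation targets a model’s full output distribution rather than a scalar, but the economics are the same: the expensive part (whatever produced the teacher) is skipped, and only cheap fitting remains.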
There is also a geopolitical subtext. If certain Chinese labs cannot access the newest chips at scale, the incentive to extract capability from frontier models goes up. And there is an irony that is hard to ignore:
The whole industry is built on scraping enormous amounts of public content that the labs did not own, and now the labs are furious about someone copying the copier.