Emil Michael, the Under Secretary of Defense for Research and Engineering, appeared on CNBC on Thursday, where he faced questions about the Pentagon’s designation of Anthropic as a supply chain risk. Michael tried to make his case that Anthropic was a unique threat to American national security, an utterly confusing stance when the U.S. military is still using Anthropic’s AI model Claude.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael argued on CNBC.
The “soul” is a reference to the “Soul overview” guiding document baked into Claude that shapes its interactions with users and its “personality.” Late last year, a version of the document was discovered by a user and briefly made headlines. It included guidance like “being truly helpful to humans is one of the most important things Claude can do for both Anthropic and for the world.” The AI startup subsequently confirmed the legitimacy of the document, saying it was a work in progress, and in January, the document was released in full as “Claude’s Constitution.”
The Pentagon gave Anthropic an ultimatum in late February that it would have to lift guardrails that prohibit Claude from being used in mass domestic surveillance and fully autonomous weapons or face being labeled a supply chain risk. Anthropic refused, and the Pentagon gave the company that designation, something that’s never been used against a U.S. company before.
DoD official Emil Michael on designating Anthropic a supply chain risk — “Their model has a soul, a ‘constitution’ — not the US Constitution. The other day their model was ‘anxious’ and they believe it has a 20% chance of being sentiment and having its own ability to make… pic.twitter.com/D1aPSJYTaJ
— Aaron Rupar (@atrupar) March 12, 2026
Anthropic is now suing, and the Pentagon is taking the next six months to get Claude out of its systems. CNBC’s Andrew Ross Sorkin asked Michael about the contradiction in the fact that the U.S. military was claiming Claude was a dire threat to national security while stopping short of an immediate decoupling.
“If, in fact, this was and is a genuine supply chain risk, wouldn’t you be removing this service immediately from all of the Pentagon?” Sorkin asked. The CNBC host then pointed out that Claude was still being used “as we speak” in Iran, and there are reports from Reuters and elsewhere that the Pentagon and other parts of the U.S. government were exploring an “extension period” to use it for even longer.
“If it was a genuine supply chain risk, and this was not part of a larger negotiation, what some people think is a political shakedown, why wouldn’t you remove it immediately?” Sorkin asked.
Michael has been extremely hostile to Anthropic CEO Dario Amodei, according to several reports, but he insisted that the designation wasn’t punitive and that it would take time to remove Claude.
“If they had never entered the department systems, it wouldn’t be an issue on this, and they could move on. But they’re embedded in our systems. And as you know, Andrew, you can’t just rip out a system that’s deeply embedded overnight.”
Michael went on to argue that “we’re watching it very closely, making sure we have control so that there’s no way that the model could be corrupted or that the insider threat could do anything through it,” referring to the possibility that Claude could do something dangerous and against the military’s interests.
“But the supply chain threat is real, but we also have to move off it, and that doesn’t happen overnight. This is not just Outlook, where you could delete it from your desktop,” Michael argued.
That argument might make sense to some people who don’t think about it too hard. But it skirts around the fact that it’s not what a supply chain risk designation is for. When the U.S. national security establishment calls Chinese hardware manufacturers like Huawei a supply chain risk, they’re worried that the Chinese government could have access to U.S. devices through a backdoor of some kind.
When a Swiss cybersecurity company with Russian ties received the supply chain risk designation from the Office of the Director of National Intelligence (DNI) in 2025, it was similarly over a concern that data protected by the company could be compromised. The U.S. government didn’t want any data held by the intelligence community to fall into the wrong hands, and any offending software needed to be reported within three days and removed “promptly.”
If something that posed a legitimate risk to U.S. national security were sitting on American military hardware right now, it would be ripped out immediately, and they’d start using pen and paper to fight this war rather than gamble with a real potential threat. Or at least that’s what an intelligent military would do.
The difference with Anthropic is not that they actually pose a risk; it’s that they’ve placed guardrails that won’t allow two very specific use cases that the company believes would create a dangerous environment or violate the U.S. Constitution.
Anthropic has said that the reason the company doesn’t want autonomous weapons isn’t even a matter of principle; it’s that it doesn’t believe Claude can safely do that. But Defense Secretary Pete Hegseth and the geniuses at the Pentagon don’t care and are going to punish Anthropic for not bending the knee.
