Anthropic vs. the Pentagon: who controls military AI
February 27, 2026

Trump today ordered all federal agencies to "immediately cease" using Anthropic's technology, and the Pentagon designated the company a supply chain risk to national security, a category normally reserved for firms linked to foreign adversaries such as China or Russia. The designation bars any military contractor from doing business with Anthropic. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE," Trump wrote on Truth Social.
The dispute has been building for months. Anthropic signed a $200 million contract with the Pentagon last July and is the only AI company with its model deployed on the Defense Department's classified networks, through a partnership with Palantir. The Pentagon demanded that its Claude model be available for "all lawful purposes" without restrictions. Anthropic drew two red lines: Claude would not be used for mass surveillance of American citizens or for fully autonomous weapons. CEO Dario Amodei said Thursday his company "cannot in good conscience accede to their request."
Pentagon chief technology officer Emil Michael called Amodei a "liar" with a "God-complex" on X and argued those uses are already banned by law, but the military will not put restrictions in writing at the request of a private company. "At some level, you have to trust your military to do the right thing," he said. Defense Secretary Pete Hegseth described Anthropic's stance as "a master class in arrogance and betrayal."
A telling detail: Elon Musk's xAI has already signed a classified contract under the same terms Anthropic rejected. OpenAI and Google are negotiating similar arrangements, though Sam Altman said today his company shares Anthropic's "red lines." Employees at Google, Microsoft and Amazon have also asked their companies to resist the Pentagon's unrestricted access demands.
The supply chain risk designation hurts more than the lost contract itself. It means any company working with the US military must prove it has no ties to Anthropic, which could erode the firm's corporate client base, many of which hold government contracts. For a company valued at $380 billion and planning to go public this year, it is a serious blow. But the fundamental question is not commercial but strategic: can a Silicon Valley firm dictate how AI is used in war? The answer, as became clear today, is no.
Originally written in Spanish; translation my own.