Anthropic vs the Pentagon: a defining contest over who sets the rules for AI
There is a quote popularly attributed to the Russian revolutionary Vladimir Lenin: “There are decades where nothing happens; and there are weeks where decades happen.” This has felt like one of those weeks.
In the context of the ongoing conflict in Iran and heightened tensions between the Trump administration and its traditional European allies, you would be forgiven for missing the news last week that the Pentagon designated AI giant Anthropic a ‘supply chain risk’. Though the decision comes with a six-month grace period, it effectively bars federal agencies and any company that does business with the US military - including the likes of Nvidia, Amazon and Google - from using Anthropic’s technology in any work with the Department of War.
Though the Pentagon’s decision was the natural consequence of weeks of very public posturing after negotiations with Anthropic reached an impasse, its significance for the future of AI regulation should not be underestimated.
How did we get here? In July 2025 the Pentagon awarded $200m-ceiling agreements to four “frontier” AI companies, with Anthropic’s Claude the first model cleared for use on classified networks. That honeymoon ended as the Department pressed for an “all lawful purposes” standard, effectively replacing company guardrails with government policy. Dario Amodei, Anthropic’s CEO, had made very clear that his two red lines were the use of Claude for mass surveillance of US citizens and for the deployment of fully autonomous weapons - weapons able to act without any human input. With no movement from either side, the Pentagon took the decision to designate Anthropic a supply chain risk.
It is hard to overstate how deeply personal this whole affair has become. Taking to Truth Social to announce the decision, President Trump attacked the “leftwing nutjobs” at Anthropic, describing the company as “radical left” and threatening that major civil and criminal consequences would follow. Secretary of War Pete Hegseth, meanwhile, spoke of Anthropic’s “masterclass in arrogance and betrayal” and accused the company of trying to control the decision-making process of the US military.
The main beneficiary in all of this has been OpenAI, which was less squeamish about its red lines and gladly signed the $200m deal with the Department of War. OpenAI boss Sam Altman is a former colleague of Amodei turned competitor; Amodei left OpenAI to set up Anthropic after a series of disagreements, including over AI safety.
While negotiations between US officials and Anthropic are reportedly still ongoing, it is significant that the Pentagon continues to use Claude as part of its operations in Iran.
At the heart of this saga is the question of who sets the terms of use for AI.
On the one hand, governments ultimately have the democratic accountability to take decisions they deem to be in the national interest, including on the use of new and emerging technologies. On the other hand, no one knows this technology better than the big tech companies themselves, including what it can and cannot do. It is impossible to separate the current race towards AI superintelligence from the wider geopolitical tensions between the US and China, in what is likely to be the arms race that shapes the next phase of the 21st century. The art of politics and regulation is slow and deliberative by design, and could not be more at odds with the pace of AI development.
So, what next? Claude continues to be used by the US military in its actions in the Middle East, and depending on the length of the conflict, the six-month grace period may need to be extended before Anthropic’s supply chain risk designation fully takes effect. From a business and PR perspective, Anthropic seems to be rallying well and has this week seen a surge in public popularity: Claude has dethroned ChatGPT on the Apple App Store, and its paid subscriber base has more than doubled since the start of 2026.
This is a story about personalities, principles and red lines. In the era of AI, government and business will increasingly need to work together to navigate these, or else we risk seeing far more cases like this in the years to come.