Anthropic vs The US Military: What This Public Feud Says About The Use Of AI In Warfare
The origin of this disagreement dates back months, amid repeated criticism from Donald Trump's AI and crypto “czar”, David Sacks, of the company's supposedly woke policy stances.
But tensions ramped up following media reports that Anthropic technology had been used in the violent abduction of former Venezuelan president Nicolás Maduro by the US military in January 2026. The reports were alleged to have caused discontent inside the San Francisco-based company.
Anthropic has denied this, with company insiders suggesting it did not find or raise any violations of its policies in the wake of the Maduro operation.
Nonetheless, the US secretary of defense, Pete Hegseth, has issued Anthropic with an ultimatum. Unless the company relaxes its ethical limits by 5.01pm Washington time on Friday, February 27, the US government has suggested it could invoke the 1950 Defense Production Act. This would allow the Department of Defense (DoD) to appropriate the technology for its own uses.
At the same time, Anthropic could be designated a supply chain risk, putting its government contracts in danger. These extraordinary measures may appear contradictory, but they are consistent with the current US administration's approach, which favours big gestures and policy ambiguity.
At the heart of the dispute is the question of how Anthropic's large language model (LLM), Claude, is used in a military context. Across many sectors of industry, Claude performs a range of automated tasks, including writing, coding, reasoning and analysis.
In November 2024, US data analytics company Palantir announced it was partnering with Anthropic to “bring Claude AI models... into US Government intelligence and defense operations”. Anthropic then signed a US$200 million (£150 million) contract with the DoD in July 2025, stipulating certain terms via its “acceptable use policy”.
These terms disallow, for example, the use of Claude for mass surveillance of US citizens, or in fully autonomous weapons systems which, once activated, can select and engage targets with no human involvement.
According to Anthropic, either would violate its definition of “responsible AI”. Hegseth and the DoD have pushed back, characterising such limits as unduly restrictive in a geopolitical environment marked by uncertainty, instability and blurred lines.
Responsible AI should, they insist, encompass “any lawful use” of AI models by the US military. A memorandum issued by Hegseth on January 9 2026 instructed that the term “any lawful use” be incorporated into future DoD contracts for AI services within 180 days.
Anthropic's competitors are lining up
Anthropic's red lines do not rule out the mass surveillance of human communities at large – only of American citizens. And while it draws the line at fully autonomous weapons, its acceptable use policy does not mention the multitude of evolving uses of AI to inform, accelerate or scale up violence in ways that severely limit opportunities for moral restraint.
At present, Anthropic has a competitive advantage. Its LLM is integrated into US government interfaces with sufficient levels of clearance to offer a superior product. But Anthropic's competitors are lining up.
Palantir has expanded its business with the Pentagon significantly in recent months, opening the door to more AI models.
Meanwhile, Google recently updated its ethical guidelines, dropping its pledge not to use AI for weapons development and surveillance. OpenAI has likewise modified its mission statement, removing “safety” as a core value, and Elon Musk's xAI (creator of the Grok chatbot) has agreed to the Pentagon's “any lawful use” standard.
A testing point for military AI
For C.S. Lewis, courage was the master virtue, since it represents “the form of every virtue at the testing point”. Anthropic now faces such a testing point.
On February 24, the company announced the latest update to its responsible scaling policy – “the voluntary framework we use to mitigate catastrophic risks from AI systems”. According to Time magazine, the changes include “scrapping the promise to not release AI models if Anthropic can't guarantee proper risk mitigations in advance”.
Anthropic's chief science officer, Jared Kaplan, told Time: “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments... if competitors are blazing ahead.”
Ethical language saturates the press releases of Silicon Valley companies eager to distinguish themselves from “bad actors” in Russia, China and elsewhere. But ethical words and ethical actions are not the same: the latter often entail a real-world cost.
That such a highly public spectacle is happening at this time is perhaps no accident. In early February, representatives of many countries – but not the US – came together for the third time to find ways to agree on “responsible AI” in the military domain. And on March 2-6, the UN will convene its latest conference discussing how best to limit the use of emerging technologies for lethal autonomous weapons systems.
Such legal and ethical debates about the role of AI technology in the future of warfare are critical, and overdue. Anthropic deserves credit for apparently resisting the US military's efforts to undercut its ethical guidelines. But AI's role is likely to be tested in many more conflict situations before agreement is reached.