Military AI Controversy: How Anthropic’s Claude Sparked an Ethics Debate With the Pentagon

On July 14, 2025, the American company Anthropic, developer of the language model Claude, published a celebratory official announcement on its website. It revealed that the company had signed a two-year, $200 million contract with the Chief Digital and Artificial Intelligence Office (CDAO) of the U.S. Department of Defense. Anthropic described the deal as “a new chapter in its commitment to supporting U.S. national security,” emphasizing that its models were designed to be “reliable, interpretable, and steerable” in the most sensitive government environments.
What appeared in July 2025 as an institutional culmination of years spent building an identity around “responsible AI” became the subject of intense ethical debate on February 28, 2026. Reports indicated that the Claude model—through the Maven Smart System operated by Palantir and used within classified military networks—had been employed to generate targeting lists containing precise GPS coordinates for a large number of targets within a 24-hour period. This reportedly occurred in the context of U.S.–Israeli strikes on Iran.
At the very moment Anthropic was refusing to broaden the contract’s terms and entering into open confrontation with the Pentagon, app stores were recording a 295% surge in ChatGPT uninstallations, while Claude, the same model at the center of the military controversy, topped the U.S. App Store rankings for the first time in its history.
This layered contradiction represents more than a public-relations crisis for a rising technology company. It exposes a question postponed ever since artificial intelligence entered the military sphere: Can the principle of “responsibility” survive when its publicly declared red lines collide with national security imperatives? And do the AI ethics frameworks promoted by technology companies truly represent fixed moral boundaries?
Roots of the Crisis
The controversy surrounding Claude in late February 2026 cannot be understood without revisiting the contradictions embedded in Anthropic’s July 2025 announcement. In hindsight, the official statement already contained the seeds of the crisis within its own wording. These structural tensions appear across several levels:
1. The Language of Principles Serving the Contract
The July 2025 announcement framed the military partnership as stemming from the belief that “the most powerful technologies carry the greatest responsibilities.” Integration with the Pentagon was presented not as a contradiction of Anthropic’s values but as an extension of them.
Yet this very rhetoric later amplified the shock. A company that built its credibility on transparency was simultaneously signing an agreement to integrate its technology into classified military networks beyond any meaningful public oversight.
2. Selective Transparency
Anthropic acknowledged in July 2025 that Claude had been integrated into military workflows on classified networks via Palantir, but it did not disclose the nature or limits of those tasks.
The operational reality emerged earlier than many realized. On January 3, 2026, Claude was reportedly used during a classified military operation that led to the capture of Venezuelan President Nicolás Maduro, marking the first documented use of a frontier AI model in such an operation.
The event passed largely unnoticed because the public still lacked awareness of Claude’s role inside military networks.
This type of selective transparency—acknowledging presence without revealing function—created a wide perception gap between what the public understood as “decision support” and what later emerged in February 2026 as large-scale automated target generation. Technically, both phrases describe the same process, yet their ethical implications differ dramatically.
3. Red Lines as Negotiation Tools
Anthropic refused the Pentagon’s request to grant the U.S. military unrestricted authority to use Claude for all legal purposes. The company insisted on two conditions:
- The model must not be used for mass surveillance of U.S. citizens.
- It must not be used in fully autonomous weapons systems without human oversight.
However, Anthropic CEO Dario Amodei acknowledged that the company never objected to any specific military operations. This admission suggested that the disagreement was not about the military use of AI itself, but rather about how the contract was structured.
Such a distinction narrows the scope of any principled stance, reducing it to its practical negotiating limits.
4. The Official Name as a Revealing Symbol
On March 5, 2026, Anthropic published a crisis statement titled “Where things stand with the Department of War.”
This wording was not rhetorical. The official correspondence the company received from the Pentagon on March 4 was signed under the name “Department of War.”
The designation had been formally reinstated on September 5, 2025, when President Donald Trump signed an executive order reviving the title as an official secondary name for the Department of Defense. The department even changed its website to war.gov, and Defense Secretary Pete Hegseth reportedly placed a “Secretary of War” sign on his office door.
The significance lies not in Anthropic’s choice of wording but in the fact that the institution Claude had been contracted to support with a $200 million agreement now officially signed its letters as the Department of War. This underscores the widening gap between the discourse of “responsible AI” and the operational context in which the technology was actually deployed.
Major Dilemmas
The Claude controversy extends far beyond a contractual dispute between Anthropic and the Pentagon. It exposes deeper structural dilemmas that have existed beneath the rhetoric of responsible AI since the beginning of the partnership.
1. Human Oversight in the Age of Machine Speed
The Maven Smart System, powered by Claude, reportedly enabled a team of only 20 people within a single artillery unit to perform targeting work that would previously have required roughly 2,000 staff members.
Defense analyst Paul Scharre of the Center for a New American Security described the shift as follows:
“Artificial intelligence enables the military to build targeting packages at machine speed rather than human speed.”
The core dilemma emerges here. When algorithmic recommendations operate at a speed that human cognition cannot meaningfully match, does human oversight remain genuine oversight—or merely a procedural signature approving decisions the algorithm has already effectively made?
2. Security Classification as Political Pressure
Following the dispute, the Pentagon reportedly labeled Anthropic a “national security supply chain risk.” Historically, such a designation had been reserved for Chinese or Russian companies.
The political context, however, suggested a different motive. On Truth Social, President Trump wrote:
“The United States will never allow a radical left company to dictate to our great military how to fight and win wars.”
He described Anthropic as “a radical left AI company run by people who know nothing about the real world,” and ordered federal agencies to stop using its technology.
In response, CEO Dario Amodei clarified on March 5 that the classification had narrow legal scope and only affected contracts directly tied to the Department of War. This indicates that the designation functioned less as an objective security evaluation and more as a politically motivated negotiating lever, undermining the credibility of the regulatory framework itself.
3. Users Voting on Positions, Not Facts
The app-store data from February 28 reflected more than emotional reactions.
- One-star reviews for ChatGPT increased by 775% in a single day.
- Claude reached #1 in the U.S. App Store the same day.
Within 48 hours, the QuitGPT movement reported that over 1.5 million people had canceled subscriptions or joined the boycott.
The phenomenon even extended into pop culture when singer Katy Perry publicly announced she had subscribed to Claude, contributing to a broader celebrity wave that turned AI usage into a symbolic identity statement.
Yet the irony was stark: Claude was simultaneously operating within the military environment while users were flocking to it as an ethical alternative.
This suggests that public moral judgments are shaped by declared positions rather than invisible classified operations, raising profound questions about the nature of ethical consumer choice in an era where critical information is legally hidden.
4. Silicon Valley and the Military-Industrial Complex
This crisis cannot be separated from the broader transformation underway in the U.S. technology sector.
During a White House dinner in September 2025, leaders from major Silicon Valley companies pledged more than $1.2 trillion in investments in AI infrastructure.
The integration between technology companies and national security institutions is not accidental but follows a structural logic: what might be called “flexible specialization.”
Instead of a single universal AI model dominating all sectors, companies increasingly develop specialized capabilities tailored to different domains—including the most strategically valuable one: national security and warfare.
Claude exemplifies this approach. While remaining a general-purpose model, it gained distinctive advantages in highly classified operational environments.
Anthropic’s own March 5 statement offered the most revealing insight. After the company was labeled a security risk, Dario Amodei declared that it was willing to continue providing its models to the Department of War without commercial compensation during the transition period, adding that:
“Anthropic shares more common ground with the Department of War than divides us.”
The statement also clarified that the company’s objections were limited to two narrow cases:
- Fully autonomous weapons
- Domestic mass surveillance
There was no objection to operational targeting lists.
This reveals that what appeared to be a principled stance was essentially a dispute over contractual terms, while the company remained structurally integrated into the military ecosystem.
Conclusion
The controversy surrounding Claude and the Pentagon is not simply a contractual dispute between a technology firm and a government agency, nor merely a fleeting ethical scandal in the digital public sphere.
For the first time in history, millions of users are casting daily consumer votes on a complex moral dilemma traditionally confined to academics and policymakers.
The question remains: Do the ethics of artificial intelligence end where national sovereignty begins?
While Anthropic has attempted to frame itself as a guardian of principles rather than a negotiator who lost a round, OpenAI moved quickly to fill the contractual gap, and the Pentagon reportedly kept operating the very technology it had labeled a security risk for another six months.
The deeper issue remains unresolved. In an era where targeting algorithms operate at machine speed rather than human speed, can the concept of “responsible AI” still function when the language model itself is the military instrument?