
US Military Reportedly Used Claude in Iran Strikes Despite Trump’s Ban

The U.S. Department of Defense deployed Anthropic’s Claude AI during Operation Epic Fury, a joint offensive with Israel against Iran on February 28, just hours after President Trump designated Anthropic a national security “supply-chain risk” and ordered all federal agencies to cease use of its AI systems.

On February 28, 2026, American and Israeli forces launched a coordinated strike campaign codenamed Operation Epic Fury/Roaring Lion against key Iranian government installations, including nuclear facilities and strategic military infrastructure.

According to reports from The Wall Street Journal, Axios, and Reuters, the U.S. Central Command leveraged Anthropic’s Claude AI model during the operation for intelligence assessments, target identification, and battlefield simulations.

The disclosure is significant given that the strikes were executed mere hours after the Trump administration formally declared Anthropic a supply-chain risk, a national security-level designation that was intended to immediately curtail the company’s access to defense contracts.

Defense officials reportedly acknowledged that a full technical withdrawal from Claude was operationally infeasible on such short notice, as Claude remains the only AI system currently embedded within certain classified U.S. government networks.

The clash between Anthropic and the Pentagon ignited after the January 2026 operation that led to the capture of Venezuelan President Nicolás Maduro, a mission during which Claude was also deployed, according to Axios.

The core dispute centers on Anthropic’s acceptable use policy, which includes hard prohibitions against using Claude for:

Autonomous weapons systems

Mass surveillance of U.S. citizens

Pentagon officials pushed for unrestricted use of the model under the argument that operational realities could not be constrained by a contractor’s commercial ethical policies.

On Thursday, February 26, Anthropic CEO Dario Amodei publicly reiterated the company’s refusal to lift these restrictions, triggering escalating pressure from the Department of Defense.

On Friday, February 27, Secretary of Defense Peter Hegseth announced that Anthropic would be designated a supply-chain risk, but added that the company would be given up to six months to allow for a “seamless transition,” effectively acknowledging the Pentagon’s deep integration of Claude into classified infrastructure.

Roughly four and a half hours after the Pentagon’s announcement, OpenAI CEO Sam Altman posted on X that his company had reached an agreement with the Department of War to deploy its AI models within classified military networks.

Altman noted that the Pentagon demonstrated “deep respect for safety” during negotiations, agreeing to terms that bar the use of OpenAI models for domestic mass surveillance and autonomous weapons systems, restrictions nearly identical to those Anthropic had refused to lift.

“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of…” — Sam Altman (@sama) February 28, 2026

Altman further announced that OpenAI would station dedicated safety engineers inside the Pentagon to monitor model behavior, and called on the Department of War to extend equivalent terms to all AI vendors operating in defense environments.

Anthropic has publicly vowed to challenge the “supply-chain risk” designation in court, arguing that its ethical safeguards are not discretionary restrictions but fundamental safety commitments required before deploying frontier AI in high-stakes military contexts.

The company’s position reflects a broader tension across the AI industry, in which developers seek to maintain usage guardrails while government clients demand unrestricted operational latitude.

Defense One reported that replacing Anthropic’s Claude within Pentagon infrastructure could take three to six months, given its deep integration into classified systems, a timeline that effectively guarantees continued use well into mid-2026.

Operation Epic Fury may mark the first publicly confirmed instance where AI-assisted military targeting occurred in direct defiance of an active executive prohibition, setting a critical precedent for how AI ethics, defense procurement, and national security policy will intersect in an era of AI-enabled warfare.

The post US Military Reportedly Used Claude in Iran Strikes Despite Trump’s Ban appeared first on Cyber Security News.
