Anthropic Rejects Pentagon Contract Changes, Saying They Would Loosen Core AI Safety Protections

Anthropic has rejected a revised Pentagon AI contract over safety concerns, amid controversy over the company's softening of its own core AI safety policy and its red lines on military uses.

Anthropic has rejected the Pentagon's latest attempt to revise a major artificial intelligence contract, saying the proposed changes would weaken protections against using its technology for mass surveillance and fully autonomous weapons, even as the company faces criticism for loosening its own core AI safety rules.

Anthropic Turns Down Pentagon Offer

Anthropic is declining the Pentagon's new offer to amend a roughly $200 million contract covering Claude, its AI system deployed on the US military's classified network.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that unless the company agreed to let Claude be used "for all lawful purposes," the Defense Department would terminate the deal and label Anthropic a "supply chain risk," a designation usually reserved for firms tied to foreign adversaries, according to CNN.

The revised language from the Pentagon was presented as a compromise meant to address Anthropic's concerns, but the company says the text was filled with legal caveats that could be used to override safeguards. In a statement, Anthropic said it could not accept terms that might open the door to wide‑scale surveillance of civilians or to AI systems controlling weapons without human oversight.

Amodei stressed that the company would not change its position under pressure from the US government. He said the Pentagon's threats "do not" alter Anthropic's stance and that the firm cannot "in good conscience" agree to the department's request.

Fight Over "All Lawful Purposes" and AI Red Lines

At the heart of the dispute is how far military users can go with Anthropic's technology. The Pentagon wants the flexibility to use Claude across a broad range of classified operations, as long as they are lawful under US and international rules.

Anthropic, however, has drawn red lines around using its AI for targeting, autonomous weapons control, and large‑scale monitoring of Americans, arguing current systems are not reliable enough and existing law does not clearly regulate AI‑driven surveillance.

People familiar with the talks say these red lines have been a sticking point for months, and the latest breakdown could lead to Anthropic losing both the contract and future Pentagon work. The dispute has also highlighted broader questions about how tech companies can set ethical limits while doing business with national security agencies, TechPolicy reported.

Anthropic's Safety Policy Shift Raises Questions

The contract fight is unfolding just as Anthropic revises its own internal safety framework, a move that critics say undercuts its public stance. The company recently replaced its two‑year‑old Responsible Scaling Policy, which had required Anthropic to pause training more powerful models if their capabilities surpassed the company's ability to control them safely. That explicit pause requirement has now been removed.

Anthropic argues that stopping development while less cautious rivals push ahead could actually make the world "less safe," by allowing more reckless actors to dominate the market. Instead of fixed rules that force a halt, the new policy is described as a flexible, nonbinding framework that can change as AI advances.

A company spokesperson told CNN the updated rules are meant to increase transparency and accountability. Anthropic has pledged to publish regular, detailed reports on risk‑mitigation plans and on the threat models and capabilities of all of its systems, saying fast‑moving AI research requires frequent revisions to its safety approach.

Debate Over AI Safety and Power

The timing of the policy change, coming days after Hegseth warned Anthropic to loosen its safety limits or risk blacklisting, has fueled criticism from advocacy groups and industry observers.

Some watchdogs say dropping the hard commitment to pause dangerous models shows why voluntary safeguards are not enough and why governments should write AI safety rules into law.

Anthropic maintains that its clash with the Pentagon proves it is willing to walk away from lucrative government work to preserve core safety principles on weapons and surveillance. But the company's new, more flexible internal policy has left many asking how firmly those principles will hold as competition for AI dominance intensifies, as per KVIA.

ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.
