OpenAI says it has reached an agreement to deploy its models inside the U.S. Department of War’s classified network. Anthropic says it refused the Department’s “all lawful purposes” contracting standard because it would not remove two limits: no mass domestic surveillance of Americans, and no fully autonomous weapons.
This explainer focuses on why that shift is dangerous, and why OpenAI’s messaging around it should not be taken at face value.
What Anthropic refused, clearly and in writing
Anthropic says negotiations reached an impasse over “two exceptions” it insisted on retaining within otherwise lawful use of Claude: “the mass domestic surveillance of Americans and fully autonomous weapons.”
Anthropic says it supports national-security use “aside from the two narrow exceptions” and adds that, to its knowledge, those exceptions “have not affected a single government mission to date.”
It also explains why those two lines exist:
- On autonomous weapons: it says today’s frontier models are not “reliable enough” for “fully autonomous weapons,” and that allowing such use “would endanger” warfighters and civilians.
- On surveillance: it says mass domestic surveillance “constitutes a violation of fundamental rights.”
In a separate statement, Anthropic goes further on what “mass domestic surveillance” can mean in the AI era. It warns that AI can turn scattered data into a “comprehensive picture of any person’s life” at scale, and argues that if parts of that are legal today, it’s because “the law has not yet caught up.”
That is the core point: “lawful” is not the same thing as “safe” or “rights-respecting.”
What the Department of War demanded: “all lawful purposes”
The Pentagon’s public stance, echoed across multiple reports, is that it needs model access for “all lawful purposes,” and it does not want a private company setting additional carve-outs.
The pressure campaign described in reporting is not subtle:
- Reuters reported the Pentagon threatened “drastic action,” including a “supply-chain risk” designation or using the Defense Production Act to force changes.
- Anthropic calls the “supply chain risk” label “unprecedented” for a U.S. company and says it will challenge any designation in court.
This matters because it creates a clear incentive structure: hold firm on guardrails, get punished. Agree to broad use, get rewarded.
What OpenAI signed, and why the public should treat the messaging as unproven
Reuters reports OpenAI reached an agreement to deploy its models on the Department of War’s classified cloud networks.
OpenAI’s CEO framed the agreement as aligned with strict safety principles, including “prohibitions on domestic mass surveillance” and keeping humans responsible for the use of force.
But the same week’s reporting also makes two things clear:
- The Pentagon has been insisting that leading AI labs accept “all lawful purposes” as the baseline standard, including in the classified environment.
- Coverage of the announcement notes that Altman’s framing was publicly challenged, including via a Community Note stating that officials had described OpenAI’s deal as permitting “all lawful purposes.”
If OpenAI wants the public to believe its deal includes real, enforceable prohibitions that the Pentagon was unwilling to give Anthropic, the burden of proof is on OpenAI to show what changed.
Right now, the contract language is not public. So “we put prohibitions into our agreement” is not auditable.
Why this is dangerous
1) “All lawful use” is a blank check in the surveillance era
Anthropic’s argument is straightforward: AI changes what surveillance can do, faster than law changes what surveillance should be allowed to do.
Anthropic points to a key mechanism: government purchase of commercially available personal data without a warrant, and AI’s ability to fuse “individually innocuous” data into something intimate and totalizing.
So when the Pentagon insists on “all lawful purposes,” the real-world effect is this:
- the boundary becomes whatever the government can justify as lawful in the moment, in a classified context, with limited visibility
- vendor guardrails become optional, or negotiable, or removable
That is exactly the scenario Anthropic refused to normalize.
2) Classified deployment reduces public accountability by design
Reuters reported the Pentagon is pushing to bring frontier models onto classified networks “without many of the standard restrictions” that companies apply to regular users.
Reuters also noted that classified networks can involve “mission-planning or weapons targeting,” and warned that model mistakes in that environment “could have deadly consequences.”
Even if a company promises safeguards, the classified setting makes it harder for outsiders to verify:
- what the model was used for
- what the model output
- what guardrails were enabled
- what oversight existed at the moment of use
This is not a theoretical problem. It is a predictable transparency gap.
3) This sets a market precedent: the most flexible vendor wins
This week’s pattern is simple:
- One company refused two carve-outs and was threatened with “drastic action,” including designation as a “supply chain risk.”
- Another company stepped in and secured the classified deployment agreement.
At the same time, Reuters reported OpenAI previously agreed to remove “many of its typical user restrictions” for the Pentagon’s unclassified genai.mil deployment, even if “some guardrails remain.”
The danger here is not just OpenAI’s deal. It’s the incentive it creates for every future deal:
- guardrails become a competitive disadvantage
- “trust us” becomes the sales pitch
- the public gets less clarity, not more
4) OpenAI’s credibility problem is the risk multiplier
A safety claim that cannot be audited is not a safeguard. It’s branding.
Public reporting repeatedly centers “all lawful purposes” as the Pentagon’s non-negotiable baseline, and OpenAI has not published enforceable carve-outs. On that record, the safest assumption is that OpenAI accepted the broad standard and marketed it as stronger than it is.
That is why this episode matters for everyone who uses OpenAI products. It is a governance signal.
What real guardrails would look like
If OpenAI wants the public to believe it did not simply accept “all lawful purposes,” there are straightforward transparency moves it can make without disclosing classified details:
- Publish a redacted excerpt or plain-language summary of the enforceable contract clause that restricts domestic surveillance and fully autonomous weapons.
- Publish the audit model: who monitors, what gets logged, what triggers escalation, and what happens when the customer demands an exception.
- Publish what “human responsibility for use of force” means operationally, not rhetorically.
Without that, the public is left with competing narratives in a domain where the default is secrecy.
What this means for you
This story is about power and incentives, not personality.
If you pay for AI tools, you are funding the governance choices behind them. If your red lines include mass surveillance and autonomous killing systems, you can act on that:
- Ask for proof, not assurances. If a company claims it prohibited something in a government contract, ask for the enforceable mechanism.
- Reward verifiable boundaries. In this episode, Anthropic put its limits in writing and accepted consequences.
- Push for procurement standards. The clean solution is not vendor-by-vendor promises. It’s baseline rules that apply to every contractor: explicit bans, independent oversight, and reporting requirements even in classified programs.
Safety is not what a CEO says. Safety is what a contract enforces.
Sources
- Anthropic (Feb 27, 2026): “Statement on the comments from Secretary of War Pete Hegseth.”
- Anthropic (Feb 26, 2026): “Statement from Dario Amodei on our discussions with the Department of War.”
- Reuters (Feb 28, 2026): OpenAI agreement to deploy models on Department of War classified networks.
- Reuters (Feb 24, 2026): Pentagon ultimatum options described, including Defense Production Act threat.
- Reuters (Feb 12, 2026): Pentagon pushing AI onto classified networks without standard restrictions; risks of errors in classified settings; OpenAI removing many restrictions for genai.mil.
- Axios (Feb 14, 2026): Pentagon insisting on “all lawful purposes” standard; classified and unclassified negotiations.
- Associated Press (Feb 27, 2026): reporting on “full, unrestricted access” demand and broader dispute timeline.
- Times of India (Mar 1, 2026): report noting a Community Note appended to Altman’s post referencing “all lawful purposes.”
