Pentagon: 8 classified AI contracts signed with OpenAI, Google, Microsoft, AWS, Nvidia, SpaceX, Oracle and Reflection — Anthropic permanently excluded
On May 1, 2026, the Pentagon finalized AI agreements for IL6/IL7 classified networks with eight tech giants, deliberately excluding Anthropic. The $200M Anthropic contract is dead. Full breakdown of the military, geopolitical and ethical stakes.

On May 1, 2026, the Department of Defense (Pentagon) finalized a sweeping set of AI agreements for classified IL6 and IL7 networks with eight tech giants: OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX, Oracle and the startup Reflection AI. According to CNN Business and Breaking Defense, these new contracts replace the $200 million agreement signed with Anthropic in July 2025 — a contract that had made Anthropic the first AI lab to operate on Pentagon classified systems. Anthropic is permanently excluded. The U.S. military AI market just flipped.
Context: why Anthropic was ejected
To grasp the scale of the shift, recall the timeline. Anthropic was originally the Pentagon's reference AI provider. The $200M contract from July 2025 confirmed that. But the breakdown accelerated in late 2025 over a very specific point: the usage clause.
The Trump administration, through the DoD, wanted Claude to be usable "for all lawful purposes" — including autonomous weapons and mass surveillance. Anthropic refused. As we analyzed in the March 2026 Anthropic Pentagon ban, Dario Amodei's stance was consistent: Anthropic's Usage Policy explicitly prohibits using Claude for autonomous weapons systems or untargeted surveillance. There was a reconciliation attempt in April, notably via the Mythos partnership, but it failed.
On May 1, 2026, the Pentagon ruled: the seven other tech giants and Reflection AI accept the clauses and win the contracts. Anthropic doesn't sign and loses everything.
The eight winners: who signs what
| Company | Contract scope | Classification level |
|---|---|---|
| OpenAI | Frontier models, GPT-5.4 and Codex | IL6/IL7 |
| Google | Gemini Enterprise, A2A agents | IL6/IL7 |
| Microsoft | Foundry, Azure OpenAI Services | IL6/IL7 |
| Amazon AWS | Bedrock, Trainium 3 hosting | IL6/IL7 |
| Nvidia | NeMo, H300 GPU infra, NeMoClaw | IL6/IL7 |
| SpaceX | Starlink IL6/IL7, Grok via xAI partnership | IL6/IL7 |
| Oracle | Cerner Defense, OCI Government | IL6/IL7 |
| Reflection AI | Proprietary frontier model, tactical agents | IL6/IL7 |
Including Reflection AI is the surprise. The startup — founded by ex-DeepMind staff, valued at roughly $5B in early 2026 — steps straight into the big leagues. It's a sign the DoD wants to diversify beyond incumbent big tech to preserve negotiating leverage. As we noted in our coverage of NeMoClaw at Nvidia, Nvidia had been paving the way for agents in classified environments since late 2025.
IL6 and IL7 levels: what it really means
Impact Level (IL) classifications, defined by the Defense Information Systems Agency (DISA), structure military AI deployment:
- IL2: non-sensitive public data
- IL4: controlled unclassified information (CUI)
- IL5: higher-sensitivity CUI, mission-critical systems
- IL6: SECRET classified information
- IL7: TOP SECRET classified information
The May 1 contracts grant IL6/IL7 access. Concretely, that means Claude (Anthropic) won't run on Secret/TS networks, but GPT, Gemini, Grok, and Reflection's frontier model will. For AI researchers at OpenAI or Google, that's a massive business opportunity — but also a non-trivial ethical challenge. Several engineers have reportedly already pushed back internally, notably at Google (where the 2018 Project Maven episode led to resignations).
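The gate this accreditation scheme implies can be sketched in a few lines. Everything below is illustrative only: the accreditation table is hypothetical, not a real DISA registry, and the model names simply mirror the article's examples.

```python
from enum import IntEnum

class ImpactLevel(IntEnum):
    """DISA Impact Levels as listed above (IL7 as reported in the article)."""
    IL2 = 2   # non-sensitive public data
    IL4 = 4   # controlled unclassified information (CUI)
    IL5 = 5   # higher-sensitivity CUI, mission-critical systems
    IL6 = 6   # SECRET classified information
    IL7 = 7   # TOP SECRET classified information

# Hypothetical accreditation table -- illustrative only
ACCREDITED = {
    "gpt-5.4": ImpactLevel.IL7,
    "gemini-enterprise": ImpactLevel.IL7,
    "claude-opus-4.7": ImpactLevel.IL5,  # shut out of IL6/IL7 per the May 1 decision
}

def can_deploy(model: str, required: ImpactLevel) -> bool:
    """A model may serve a network only if accredited at or above its level."""
    level = ACCREDITED.get(model)
    return level is not None and level >= required
```

The ordering matters: accreditation at IL7 implies eligibility for every lower level, which is why an `IntEnum` comparison suffices.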
Why 7 firms accepted what Anthropic refused
The ethical question is central. Why do seven giants sign a contract potentially covering autonomous weapons when Anthropic refuses?
1. A different reading of the clauses
OpenAI, Google and Microsoft have more permissive Usage Policies than Anthropic: a January 2024 revision of OpenAI's policy removed the explicit ban on "military and warfare" applications. OpenAI's current wording leaves more room for interpretation, making the commanding officer or operator responsible for avoiding prohibited uses case by case. Anthropic's stance is stricter: its policy acts as a technical envelope, not just a moral guideline.
2. Revenue pressure
At $200M per contract (likely more for the May 1 deals — some observers estimate $300-500M per firm), the stakes are significant. For OpenAI, which has to fund its GPT-5.5 super-app and the Codex Atlas agent, this is revenue uncorrelated with consumer subscriptions. For Google, which is investing $40B in Anthropic, it's strategic compensation: Google plays both sides.
3. A regulatory precedent effect
With the new contracts, the DoD sets a U.S. legal precedent on military AI use. Signatories shape the rules. Anthropic, by staying out, loses that voice — a point several employees have openly criticized, per internal dissent reported by Bloomberg.
Anthropic's case and Dario Amodei's stance
Dario Amodei has publicly reaffirmed his position: Anthropic's mission is to build safe AI, and certain military uses are incompatible with that mission. It's a stance the safety-research community broadly shares, and one we covered in our Project Glasswing and Mythos analysis. Anthropic isn't anti-defense — the firm keeps civilian government customers — but it refuses autonomous weapons systems and mass surveillance.
That position has a measurable business cost. According to internal numbers, the lost DoD pipeline is on the order of $3-5 billion over 5 years. Set against the roughly $40B ARR run rate Anthropic projects, that's marginal — but not zero.
Reflection AI: the outsider that grabs an IL6/IL7
Reflection AI's entry into the classified-provider club is probably the most surprising development. Reflection was founded in late 2024 by ex-DeepMind staff with the ambition of building a frontier model focused on agentic reasoning for high-responsibility environments — finance, healthcare, defense. Their Series C (estimated at $800M at a $5B valuation, early 2026) echoes Cursor's round at a $50B valuation: U.S. VCs are pushing aggressively into defense AI.
It's also a geopolitical signal. With growing concerns about AI sovereignty against China (cf. DeepSeek V4 raising at $1T and Tencent/Alibaba at $20B), the U.S. secures its military AI supply chain by including a domestic outsider (Reflection) alongside big tech.
Geopolitical implications
The shift goes beyond the U.S. Three consequences to track:
1. Europe positions itself
With the UK's £500M Sovereign AI Fund and France's push for a digital-sovereignty inquiry commission, Europe is accelerating its own sovereignty strategy. Mistral, AMI Labs (Yann LeCun), and likely BAE Systems / Thales are launching defense AI programs.
2. China and Russia rearm
Excluding Anthropic while signing massively with seven U.S. giants sends a clear signal to Beijing and Moscow: the U.S. is integrating AI into its military chain of command. Expected responses: acceleration of the PLA's Tiangong program and faster deployment of Yandex / Sber Cloud on Russian systems.
3. Defense AI startups boom
Including Reflection paves the way for a second tier of defense AI startups. Shield AI already raised $2B Series G, Anduril keeps expanding, and new entrants (Scout AI, Helsing, Hadrian) raise massive Series A rounds.
What it changes for AI developers and consumer apps
For developers and consumer apps, the direct impact is limited — but the global context shifts:
- Public models stay the same. GPT-5.4, Claude Opus 4.7, Gemini 2.5 Pro remain available via standard API
- Tech competition intensifies. With DoD contracts at $300-500M, OpenAI and Google get extra funding leverage to push the frontier (GPT-6, Gemini 3.5)
- Talent arbitrage shifts. As we analyzed in the Meta-Google brain drain to startups, some safety researchers who don't want to work on defense will migrate to Anthropic or new structures
For consumer AI app publishers, the message stays consistent: AI product monetization runs through civilian users, not the DoD. The chat SDK and native ad formats like Idlen's remain the fastest path to turn an AI app into a profitable product, without depending on government procurement cycles.
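Since public APIs are unaffected, the practical takeaway for app builders is to keep a thin provider-abstraction layer so a policy shift at any one vendor never blocks the product. A minimal sketch — the backends here are stubs standing in for vendor SDKs, and the provider/model names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Provider:
    name: str
    model: str
    call: Callable[[str], str]  # prompt -> completion

# Stub backends; a real app would wire in each vendor's actual SDK client
PROVIDERS: List[Provider] = [
    Provider("openai", "gpt-5.4", lambda p: f"[gpt-5.4] {p}"),
    Provider("anthropic", "claude-opus-4.7", lambda p: f"[claude-opus-4.7] {p}"),
]

def complete(prompt: str, prefer: str = "openai") -> str:
    """Try the preferred provider first, then fall back to the others."""
    ordered = sorted(PROVIDERS, key=lambda pr: pr.name != prefer)
    for pr in ordered:
        try:
            return pr.call(prompt)
        except Exception:
            continue  # backend unavailable: try the next one
    raise RuntimeError("no provider available")
```

The fallback loop is the point: model choice becomes a runtime preference rather than a hard dependency on any single lab.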
Conclusion: the ethical split in U.S. AI is now complete
May 1, 2026 marks an ethical inflection point in the U.S. AI industry. For the first time since Project Maven in 2018, seven U.S. tech giants plus a frontier startup — with Anthropic the lone holdout — have collectively decided that classified military contracts are worth bending their Usage Policies for. That precedent will shape the next 10 years.
Anthropic, by staying out, pays a business price. But it gains an ethical positioning that could, over time, become a major commercial advantage in healthcare, in education, and among sovereignty-minded European customers. As Dario Amodei wrote in his latest internal memo: "Our brand is our policy." The coming months will tell whether that discipline pays off in IPO valuation — a decisive topic for Anthropic's path to an October 2026 IPO.
To continue on military and geopolitical AI stakes, see our coverage of 100,000 AI agents deployed at the Pentagon, our breakdown of the initial Trump-Anthropic ban and our complete guide to monetizing AI apps.


