AI · 8 min read · By Paul Lefizelier

Amazon Puts Another $25 Billion Into Anthropic — and Locks In $100 Billion of AWS Compute Over Ten Years

On April 20, 2026, Amazon announced an additional investment of up to $25B in Anthropic and a $100B Anthropic commitment to AWS Trainium/Graviton. The deal secures 5 GW of compute for Claude and brings Amazon's total investment to $33B. Anthropic has crossed $30B in run-rate revenue.


On April 20, 2026, Amazon and Anthropic announced a massive expansion of their partnership. Amazon is investing an additional $5 billion immediately, with commitments of up to another $20 billion tied to commercial milestones: a total ticket that pushes Amazon's cumulative investment in Anthropic to $33 billion, counting the initial $8B from 2023–2024. In return, Anthropic commits to spending over $100 billion on AWS services over ten years, including Trainium and Graviton chips. The deal secures up to 5 gigawatts of compute capacity to train and serve Claude, a tier that places Anthropic alongside Microsoft/OpenAI and Google/Alphabet in infrastructure terms. Anthropic also confirmed it has crossed $30 billion in run-rate revenue, up from roughly $9B at the end of 2025.


The number that structures the deal: 5 gigawatts

The most important figure in the announcement isn't the $25B; it's 5 GW. For scale, one gigawatt equals the instantaneous power draw of about 750,000 US homes. Five gigawatts dedicated to a single AI player is roughly the output of five typical nuclear reactors, all allocated to Claude. For comparison, Microsoft announced a ~6 GW equivalent partnership with OpenAI in March 2026, and Google ran about 4 GW for Gemini.
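The scale arithmetic above is easy to sanity-check. A minimal sketch, taking the article's own 750,000-homes-per-gigawatt approximation as given:

```python
# Back-of-the-envelope scale check for the secured-compute figures.
# Assumption (from the article): 1 GW ~ instantaneous draw of 750,000 US homes.
HOMES_PER_GW = 750_000

def homes_equivalent(gigawatts: float) -> int:
    """Number of US homes whose instantaneous draw matches this capacity."""
    return int(gigawatts * HOMES_PER_GW)

for label, gw in [("Anthropic (AWS)", 5), ("OpenAI (Azure)", 6), ("Google (TPU)", 4)]:
    print(f"{label}: {gw} GW ~ {homes_equivalent(gw):,} homes")
# Anthropic's 5 GW works out to roughly 3.75 million homes.
```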

In the fine print, the announcement schedules a phased deployment. Nearly one gigawatt total of Trainium2 and Trainium3 capacity will be online by end of 2026. The rest arrives across 2027–2028 as Amazon builds new dedicated AI datacenters. That phasing explains the conditional structure of the $20B: the money follows the compute, not the reverse.

| Metric | Anthropic | OpenAI | Google DeepMind |
| --- | --- | --- | --- |
| 2026 secured compute | 5 GW (AWS) | 6 GW (Azure) | 4 GW (internal TPU) |
| Hyperscaler investment | $33B (Amazon) | $13B+ (Microsoft) | internal (Alphabet) |
| Run-rate revenue | $30B | $25B | not disclosed |
| Compute stack | Trainium + H100/Blackwell | Nvidia + Microsoft custom | TPU v6/v7 + Nvidia |

Anthropic at $30B run-rate — the trajectory is dizzying

The most striking number in the announcement lives elsewhere: Anthropic's run-rate revenue has more than tripled in four months, from $9B at the end of December 2025 to over $30B in April 2026. That isn't linear growth; it's an inflection. Three converging factors explain it: the Claude Opus 4.7 ramp-up, enterprise adoption of Claude Code, and the contribution of new product surfaces (Claude Design, Claude Dispatch, Cowork).
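The two endpoints above also pin down an implied growth rate. A quick sketch, assuming (purely for illustration) smooth compounding between December 2025 and April 2026:

```python
# Implied compound monthly growth from the article's two data points.
# Assumption: growth was smooth over the 4 months (illustrative only).
start, end, months = 9.0, 30.0, 4   # run-rate revenue in $B

monthly_multiplier = (end / start) ** (1 / months)
print(f"overall multiple: {end / start:.2f}x")                    # ~3.33x
print(f"implied monthly growth: {monthly_multiplier - 1:.1%}")    # ~35% per month
```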

For context, Anthropic passed OpenAI on enterprise revenue in April 2026, and the gap has since widened: OpenAI sits at $25B run-rate, Anthropic at $30B. On the private-valuation side, Anthropic turned down offers at an $800 billion valuation, a 26x price-to-revenue ratio that is still below what OpenAI commands (32x).

Why Trainium, not just Nvidia

A key point of the deal: Amazon is pushing Anthropic to consume its in-house AI chip, Trainium, at massive scale. That's where the industrial logic lives. Trainium2 and Trainium3 are designed by Annapurna Labs (an Amazon subsidiary) with a specific goal: match Nvidia H100/B200 inference performance at a lower manufacturing cost. Amazon saves tens of billions by not buying H100s from Nvidia; Anthropic gets compute at a preferential price.

The trade-off: Anthropic must invest engineering effort to port its models to the Trainium architecture. Trainium uses a custom instruction set programmed through the Neuron SDK, a software stack parallel to CUDA. That means rewriting low-level kernels, adapting training pipelines, and testing every release on two hardware targets. It's a meaningful R&D cost, but in return Anthropic gets de facto exclusivity on silicon optimized for its model family.

The parallel with what Google does internally with TPU is direct. The difference is that Anthropic has neither a fab nor an ASIC design team; Amazon plays that role for it. The Annapurna–Anthropic co-development deal is the link that makes the "OpenAI + Nvidia vs Anthropic + Amazon vs Google + TPU" hypothesis credible in terms of vertically integrated stacks.

The financial backdrop: AWS recovers the stake through usage

The $100B committed on AWS isn't a blank check. It turns into recurring revenue for Amazon Web Services — roughly $10B per year over ten years. At that rate, Anthropic becomes one of the three or four largest AWS customers in absolute terms, and probably the largest AI compute customer in AWS history.

The financial plumbing is elegant: Amazon puts $33B into Anthropic as equity, and Anthropic returns $100B in cloud spend, a theoretical return multiple of 3x. Above all, Amazon locks in the relationship: Anthropic can't leave for Google Cloud or Azure without colossal migration costs. The lock-in is mutual. Amazon needs Anthropic to justify its Trainium capex; Anthropic needs Amazon to avoid being strangled by compute costs.
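The deal's arithmetic can be laid out explicitly. A minimal sketch using only the article's figures, with flat annual pacing of the cloud spend as a simplifying assumption:

```python
# Rough economics of the equity-for-cloud-spend structure described above.
# All figures are the article's; the flat annual pacing is an assumption.
equity_invested = 33.0         # $B, cumulative Amazon investment
committed_cloud_spend = 100.0  # $B over the deal's term
term_years = 10

annual_aws_revenue = committed_cloud_spend / term_years
return_multiple = committed_cloud_spend / equity_invested

print(f"AWS revenue: ~${annual_aws_revenue:.0f}B/year")
print(f"cloud spend vs equity: {return_multiple:.1f}x")  # ~3.0x
```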

Even Broadcom joins the table: the April deal between Anthropic and Broadcom for 3.5 GW of Google TPU capacity rounds out the multi-vendor strategy. Bottom line, Anthropic now operates across three architectures simultaneously: Trainium (AWS), TPU (Google via Broadcom), and Nvidia H100/Blackwell. Maximum redundancy, minimal risk of a compute shortfall.

The macro signal: the compute war has entered its capex phase

The Amazon-Anthropic deal validates a broader move. Meta announced $115–135B in AI capex for 2026, nearly double the $72B of 2025. OpenAI wrapped its own 2026 compute deals around 6 GW on Azure. Google keeps deploying TPU v7 at a pace it doesn't publicly disclose.

What that means for the ecosystem: frontier-model barriers to entry have shifted. In 2024, the problem was access to Nvidia H100s. In 2026, the problem is access to gigawatts of electrical power and land to build datacenters. Hyperscalers are the only players that can mobilize those resources at scale. Anthropic chose Amazon rather than trying to build alone — a strategic call that limits future IPO options but eliminates capacity risk.

For second-tier AI startups, the message is harsh: training a frontier model from scratch in 2026 costs more than $10B in compute alone. The alternatives are either buying compute at full price (the Mistral model) or not training at all and calling the three dominant labs' APIs. The middle ground disappears.

What Anthropic will fund with the $25 billion

Anthropic hasn't published an official breakdown, but the signals are enough to reconstruct the line items. Training the next frontier model (likely the successor to Claude Mythos, announced for early May 2026): probably $8–10B. Large-scale Trainium inference to serve the $30B run-rate: $10–12B. Advanced R&D, meaning safety, interpretability, and capability research teams: $3–5B.

On top of that sits an envelope for product surfaces. Cowork, Claude Dispatch, Claude Design, and the upcoming native app builder integrated in Claude all consume dedicated infrastructure. End-user growth on web and mobile products forces Anthropic to resize inference clusters faster than its competitors.

What to watch over the next 60 days

The deal kicks off a series of open questions worth watching closely. First, the antitrust reaction. The FTC and the European Commission each have an open probe on "cloud + equity" partnerships between hyperscalers and AI labs. Microsoft-OpenAI paved the way, Amazon-Anthropic could face a similar review. The antitrust argument is that these deals lock compute access and crystallize the Azure/AWS duopoly on AI.

Second, Anthropic's IPO strategy. With a $30B run-rate and an Amazon shareholder now dominant in economic weight, the IPO option becomes complex. A listing would force Anthropic to clarify the Amazon relationship in the prospectus, and potentially to unwind certain exclusivity clauses. More likely: Anthropic stays private through 2027–2028.

Third, Google's response. Alphabet has no "equity + cloud" partnership with an external lab. Gemini remains a 100% Google product. With Anthropic locked in by Amazon and OpenAI by Microsoft, Google finds itself isolated, or forced to acquire an external AI lab to rebalance. Natural candidates would be Mistral, Cohere, or xAI, but none is on the market.


TL;DR:

  • Amazon invests $5B immediately + up to $20B conditional in Anthropic (cumulative $33B)
  • Anthropic commits to $100B AWS spend over 10 years
  • 5 GW of compute secured (Trainium2, Trainium3, Graviton)
  • Anthropic run-rate at $30B, tripled in 4 months
  • ~1 GW Trainium operational by end of 2026
  • The deal locks Anthropic on AWS and funds the next frontier model
  • Antitrust review likely — FTC tracking "equity + cloud" deals

April 20, 2026 will stand as the date the AI race entered its industrial phase. No more talking in tens of millions for a Series C round — now the language is gigawatts, decade-long commitments, silicon co-development pacts. Anthropic picked its side: Amazon, Trainium, and explosive growth in exchange for structural lock-in. The bet is that 5 GW of proprietary compute is worth more than theoretical exit freedom. Given the revenue trajectory, it seems to be working.

For companies building on Claude — whether they use Claude Code, the API, or integrations like Idlen to monetize AI apps with native ads — the message is positive: Anthropic now has the operational capacity to absorb their load and keep pushing the models forward. The compute war is far from over, but for once, a lab announces a deal that funds five years of roadmap in a single signature.

Sources: CNBC — Amazon invests up to $25 billion in Anthropic, Anthropic — Expand Amazon collaboration 5 GW, About Amazon — Strategic collaboration expansion, PYMNTS — Deepen ties investment hardware pact.

#amazon #anthropic #aws #trainium #claude #investment #ai-infrastructure #5-gigawatts