AI · 5 min read · By Paul Lefizelier

Anthropic rents all of SpaceX's Colossus 1: 222,000 GPUs and 300 MW to fuel Claude

Anthropic signs a historic compute deal with SpaceX, taking over the entire Colossus 1 data center in Memphis. 222,000 NVIDIA GPUs, 300 MW of power, and Claude Code limits doubled within 48 hours.


Anthropic just signed the most unexpected compute deal of the year. On May 6, 2026, OpenAI's closest rival announced a partnership with SpaceX to absorb the entire AI compute capacity of the Colossus 1 data center in Memphis — the former flagship xAI facility, now owned by SpaceX following the merger. On the menu: 222,000 NVIDIA GPUs, over 300 megawatts of power, and an immediate boost for Claude subscribers.

Even more striking: Anthropic also publicly expressed interest in co-developing with SpaceX multiple gigawatts of compute in orbit. Yes, in space.

The deal that shouldn't have happened

On paper, this agreement is almost absurd. Elon Musk — who merged xAI with SpaceX earlier this year — has been suing OpenAI for months. Dario Amodei, Anthropic's CEO, is a former OpenAI executive. And yet these two opposing ecosystems just signed an infrastructure partnership of historic scale.

Musk addressed it laconically: "No one set off my evil detector." Translation — he doesn't see Anthropic as the existential threat that OpenAI represents. For SpaceX, the math is mechanical: Colossus 1 has been underutilized since xAI shifted its training load to Colossus 2, the much larger twin data center. Rather than letting 222,000 H100/H200 GPUs idle, billing them at top rates is the obvious move.

For Anthropic, the math is even simpler. After 80x compute demand growth in Q1 2026, Claude's infrastructure had become the number one bottleneck. API rate limits, peak-hour throttling, queues on Claude Code — all of it pointed to the same problem: not enough silicon.

What changes for Claude users

The announcement translated into concrete effects in less than 48 hours.

| Change | Before | After SpaceX deal |
|---|---|---|
| Claude Code 5h limit (paid tiers) | Capped | Doubled |
| Peak-hour limit reduction | Active on Pro and Max | Removed |
| Claude Opus API rate limits | Standard cap | Significantly raised |
| Total Anthropic capacity | Regularly saturated | +300 MW headroom |

For developers using Claude Code 2 daily, this concretely means the end of "you've reached your limit" messages interrupting vibe coding sessions. And for teams industrializing Claude via the API, it opens the door to production volumes that were previously reserved for OpenAI.
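Even with raised caps, production API traffic should still handle rate-limit responses defensively, since spikes can hit any quota. A minimal retry-with-backoff sketch — the function name and the callable-based interface are illustrative, not part of any official Anthropic SDK; the only assumption is that rate limiting surfaces as HTTP 429:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5):
    """Call request_fn, retrying on rate-limit (HTTP 429) responses.

    request_fn is any zero-argument callable returning an object with a
    .status_code attribute; this is a generic sketch, not tied to a
    specific client library.
    """
    for attempt in range(max_retries):
        response = request_fn()
        if response.status_code != 429:
            return response
        # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
        delay = (2 ** attempt) + random.uniform(0, 1)
        time.sleep(delay)
    raise RuntimeError(f"still rate-limited after {max_retries} retries")
```

The jitter matters in practice: without it, many clients that were throttled at the same instant retry at the same instant, recreating the spike.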

222,000 NVIDIA GPUs: the raw material

The hardware detail deserves a closer look. Colossus 1 houses:

  • 222,000+ NVIDIA GPUs, the majority being H100s and H200s
  • GB200 systems, NVIDIA's Blackwell generation accelerators
  • Over 300 megawatts of dedicated electrical power
  • A network interconnect designed for frontier-scale model training

For scale: 300 MW is the electrical consumption of roughly 250,000 American homes. It is also roughly double what OpenAI's original GPT-4 training supercluster consumed. Anthropic gets, in a single signature, the equivalent of a year of infra roadmap.
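The homes comparison checks out with back-of-the-envelope arithmetic, assuming the commonly cited ballpark of about 10,500 kWh of annual electricity consumption per US household:

```python
# Back-of-the-envelope check: how many average US homes does 300 MW power?
# Assumption: ~10,500 kWh/year per household (ballpark figure).
annual_kwh_per_home = 10_500
hours_per_year = 365 * 24  # 8,760 h

# Average continuous draw of one home, in kW.
avg_draw_kw = annual_kwh_per_home / hours_per_year  # ~1.2 kW

datacenter_kw = 300 * 1_000  # 300 MW expressed in kW
homes = datacenter_kw / avg_draw_kw  # ~250,000 homes
```

Note this compares average household draw to the data center's continuous load; peak residential demand would shrink the figure somewhat.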

And the hardware list is no accident: GB200s are the chips NVIDIA positioned at the heart of its GTC 2026 keynote as the new reference for large-scale inference. Anthropic isn't just buying compute — it is buying the latest GPU generation.

Space as the next frontier

The most futuristic element of the announcement almost slips past in the official releases: Anthropic expressed interest in co-developing several gigawatts of orbital compute with SpaceX.

The idea is not new. Space-based data centers have several theoretical advantages: constant solar power, passive vacuum cooling, independence from terrestrial land and power constraints. Until now, the problem was launch cost — and the mass production of Starship rockets is changing that calculus.

If the agreement materializes, it would be the first time a major AI player co-finances an orbital compute infrastructure. This is not a gimmick: by 2030, if terrestrial compute hits grid and environmental constraints, space becomes a credible option.

A new geopolitics of compute

This deal reshapes the global AI compute map.

| Player | Main compute strategy | Key infra partner |
|---|---|---|
| OpenAI | Stargate ($500B) + Oracle + Microsoft Azure | Microsoft, Oracle |
| Anthropic | AWS Trainium + Google TPU + SpaceX Colossus 1 | Amazon, Google, SpaceX |
| xAI | Colossus 2 (in-house) | SpaceX (internal) |
| Meta | Owned data centers + NVIDIA clusters | NVIDIA |
| Mistral | European sovereign cloud | OVHcloud, Scaleway |

Anthropic now stands as the only company relying on three strategic providers (Amazon, Google, SpaceX) without majority dependence on any single one. It is a posture reminiscent of systemic banks: diversify so you never become hostage to one supplier. While Mistral pushes toward $1B in revenue leaning on the European ecosystem, Anthropic is playing the multi-cloud card at planetary scale.

The irony of timing

It is worth highlighting: while Musk sues OpenAI and publicly trashes Sam Altman, his own company is renting its data center to OpenAI's most direct competitor. The personal rivalry between Musk and Altman is so deep that it created an unexpected alignment — Anthropic and xAI/SpaceX share a common enemy, which apparently is enough to sign multi-billion-dollar deals.

This "enemy of my enemy" logic is probably not sustainable long term. Anthropic and xAI will commercialize competing models (Claude Opus vs Grok 4), and conflicts of interest will emerge. But for the short term, both companies have what the other wants: Anthropic wants compute, SpaceX wants revenue to amortize Colossus 1.

Key takeaways

  • Anthropic rents the entire capacity of SpaceX's Colossus 1 data center in Memphis, Tennessee
  • The deal covers 222,000+ NVIDIA GPUs (H100, H200, GB200) and 300+ MW of electrical power
  • Immediate consequence: Claude Code limits doubled, peak-hour throttling removed, Opus API rate limits significantly raised
  • Anthropic expresses interest in co-developing several gigawatts of orbital compute with SpaceX
  • xAI keeps its larger Colossus 2 data center for its own training workloads
  • The agreement underscores Anthropic's multi-cloud strategy (AWS + Google + now SpaceX), while OpenAI bets on Stargate

The compute war of 2026 is no longer fought on model quality — Claude, GPT, Gemini, Grok have become interchangeable across most public benchmarks. It is fought on raw capacity to serve billions of requests per day without collapsing. And on that front, Anthropic just gained 12 months of roadmap with a single signature. For developers who depend on Claude Code for their vibe coding flows, the news is decisive: token scarcity is over. For OpenAI and Google, the signal is clear — Anthropic no longer has a technical ceiling. The next battle is fought on product, not on silicon.

#anthropic #claude #spacex #xai #colossus #memphis #nvidia #gpu #infrastructure #compute #elon-musk #dario-amodei