AI · 5 min read · By Paul Lefizelier

Anthropic taps every watt at SpaceX's Colossus 1: 222,000 GPUs, 300 MW and a doorway to orbital compute

Anthropic just signed a deal to use the entire AI capacity of SpaceX's Colossus 1 data center in Memphis — 222,000 NVIDIA GPUs, 300+ megawatts, and an explicit interest in gigawatts of compute in space. Claude rate limits already doubled.

It is the most unexpected handshake of 2026. Anthropic — the lab behind Claude — has signed a deal with SpaceX to lock down every megawatt of AI capacity at the Colossus 1 data center in Memphis. The number that matters: 222,000 NVIDIA GPUs and 300+ megawatts of compute, all funnelled into a single customer. And Anthropic openly admitted it is "exploring" with SpaceX what comes next: gigawatts of compute capacity in space.

A 222,000-GPU lifeline for Claude

The headline metric is brutal. Colossus 1, originally built by xAI before SpaceX absorbed Elon Musk's AI lab earlier this year, packs over 222,000 NVIDIA accelerators across three generations: H100, H200, and the next-gen GB200 Grace Blackwell systems. Anthropic gets all of it. 300+ megawatts is enough to power roughly 250,000 American homes. It is now powering exactly one product family: Claude.
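The homes comparison holds up as back-of-envelope arithmetic, assuming the common rule of thumb of roughly 1.2 kW average continuous draw per US household (the exact divisor varies by source):

```python
# Back-of-envelope check: 300 MW of site power vs. average US household load.
# The 1.2 kW figure is a rough rule of thumb (~10,500 kWh/year / 8,760 h),
# not an official statistic.
site_mw = 300
avg_home_kw = 1.2

homes = site_mw * 1_000 / avg_home_kw  # convert MW to kW, divide by per-home draw
print(f"{homes:,.0f} homes")           # 250,000 homes
```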

The deal is a response to a very specific problem. Anthropic's compute demand grew 80x in Q1 2026 alone, driven by enterprise adoption of Claude for coding agents, Claude Code, and the new Cowork desktop product. Internal capacity, even after the Google TPU and Amazon Trainium expansions, was no longer keeping up. The Memphis site solves it with a single signature.

The user-visible payoff arrived within 48 hours of the announcement: Claude Code's five-hour rate limits doubled for all paid tiers, peak-hour throttling was removed for Pro and Max subscribers, and Opus API rate limits jumped "considerably," per Anthropic. The compute hunger of vibe coding workflows is the immediate beneficiary.

Why Musk picked Anthropic over OpenAI

The political subtext is unmissable. Elon Musk is currently suing OpenAI. Anthropic is OpenAI's closest competitor. Renting Colossus 1 to Anthropic is, by any reading, a strategic shot.

Musk addressed this directly: "No one set off my evil detector." Translation — Anthropic's safety posture and constitutional AI approach passed an internal sniff test that, presumably, Sam Altman's company would not. Whether the framing is sincere or convenient, the financial logic is airtight: Colossus 1 costs roughly $300M/month to operate. xAI is keeping the bigger Colossus 2 for its own Grok training runs. Renting Colossus 1 to a single anchor tenant willing to pay top dollar is the obvious move.

For SpaceX, the deal also subsidizes a second ambition. Anthropic explicitly stated interest in working with SpaceX on orbital data centers — multiple gigawatts of compute parked above the atmosphere where solar is constant and cooling is free. It is not science fiction anymore. It is line one of the press release.

The compute landgrab that defines 2026

The Anthropic-SpaceX pact has to be read alongside everything else happening this quarter. Cerebras is filing for an IPO at a $26.6B valuation, selling wafer-scale alternatives to NVIDIA. Moonshot AI just raised $2B at a $20B valuation to keep training Kimi K2.5 in China. NVIDIA's GTC keynote committed to gigawatt-class superclusters. The pattern is unmistakable: compute is the new oil, and the frontier labs are signing 10-year leases on every barrel they can find.

| Metric | Detail |
| --- | --- |
| Total GPUs | 222,000+ (H100, H200, GB200) |
| Power capacity | 300+ MW |
| Exclusive tenant | Anthropic (full Colossus 1) |
| Operator | SpaceX (after xAI merger) |
| Immediate user benefit | Claude Code limits 2x, Pro/Max peak throttling removed |
| Long-term ambition | Multi-GW orbital compute |

What it means for the AI stack

Three immediate consequences for anyone building on top of Claude. First, rate limits are no longer the binding constraint for the next 12 months — meaning agentic workflows, long-running coding sessions, and high-throughput API consumers can scale without rationing. Second, Anthropic's unit economics improve: a single-tenant 300 MW lease, even at premium per-MW pricing, undercuts the cost of building greenfield data centers. Third, the orbital compute conversation is now real: if Anthropic and SpaceX commit even one gigawatt to space, the entire concept of "regional latency" gets rewritten.
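For high-throughput API consumers, the practical change is a config knob, not a rewrite: most production clients already gate requests behind a client-side token bucket, and loosened provider limits just mean raising its parameters. A minimal sketch (the rates and bursts below are hypothetical knobs, not values published by Anthropic):

```python
import time

class TokenBucket:
    """Client-side request throttle. Refill rate and burst size are
    illustrative knobs, not any provider's published limits."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = burst          # maximum stored tokens
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_acquire(self, n: int = 1) -> bool:
        """Take n tokens if available; otherwise refuse (caller waits/retries)."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Doubling a throttle is then a one-line change (numbers hypothetical):
bucket = TokenBucket(rate_per_sec=10, burst=20)   # pre-deal posture
bucket = TokenBucket(rate_per_sec=20, burst=40)   # post-deal posture
```

The design choice worth noting: a token bucket allows short bursts up to `burst` while capping sustained throughput at `rate_per_sec`, which matches how most API rate limits are actually enforced.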

There is also a quieter signal here for developers building chat-native applications. Compute scarcity has been pushing inference costs up for two years. The Colossus 1 deal is the first piece of evidence that, at the very top of the stack, the squeeze is starting to loosen. If you priced your AI product assuming 2025 inference economics, it is time to redo the spreadsheet.
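"Redo the spreadsheet" can be as simple as re-running a per-month cost model under new token prices. A minimal sketch, where every number is a placeholder for illustration, not a published rate:

```python
def monthly_inference_cost(requests_per_day: int,
                           tokens_per_request: int,
                           price_per_mtok: float) -> float:
    """Monthly API spend in dollars. All inputs are hypothetical."""
    tokens_per_month = requests_per_day * 30 * tokens_per_request
    return tokens_per_month / 1_000_000 * price_per_mtok

# Placeholder 2025-era pricing vs. a looser-supply scenario (illustrative only):
old = monthly_inference_cost(50_000, 2_000, price_per_mtok=15.0)
new = monthly_inference_cost(50_000, 2_000, price_per_mtok=9.0)
print(f"${old:,.0f} -> ${new:,.0f} per month")   # $45,000 -> $27,000 per month
```

The point is not the specific figures but the sensitivity: at fixed volume, spend scales linearly with the per-million-token price, so any supply-driven price move flows straight to the margin line.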

Key takeaways

  • Anthropic locked 100% of Colossus 1's AI capacity — 222,000 NVIDIA GPUs across H100, H200, and GB200 generations
  • The site delivers 300+ MW in Memphis; SpaceX retains the larger Colossus 2 for xAI's own training
  • Claude Code rate limits doubled within 48 hours; Opus API limits raised "considerably"
  • Both companies disclosed serious work on multi-gigawatt orbital compute — solar-powered data centers in space
  • Anthropic's compute demand grew 80x in Q1 2026, driven by Claude Code and enterprise agent deployments

The compute landscape of late 2026 will not be written by who has the best model — it will be written by who has signed the longest lease. Anthropic just bought itself two years of breathing room. OpenAI's reaction will not be patient.

#anthropic #spacex #xai #colossus #claude #nvidia #gpu #data-center #memphis #elon-musk #compute