AI · 6 min read · By Paul Lefizelier

Ineffable Intelligence: David Silver Raises $1.1B Seed — Historic European Record to Build a 'Superlearner' Without Human Data

On April 27, 2026, ex-DeepMind researcher David Silver (the father of AlphaGo) announced the largest seed round ever raised in Europe: $1.1B at a $5.1B valuation. Sequoia and Lightspeed co-lead, joined by Nvidia, Google, DST, Index, and the UK Sovereign AI Fund. The goal: an agent that learns by pure reinforcement, with no human labels.

On April 27, 2026, Ineffable Intelligence emerged from stealth with a number that rewrites European venture history: $1.1 billion in seed funding, at a $5.1 billion valuation. It is the largest seed round ever raised on the continent, and it is helmed by David Silver, the former Google DeepMind researcher who co-created AlphaGo, AlphaZero, and MuZero. The round is co-led by Sequoia Capital and Lightspeed Venture Partners, with Nvidia, Google, DST Global, Index Ventures, and the UK Sovereign AI Fund participating. The company's thesis is as simple as it is radical: build a superlearner capable of discovering knowledge and skills without any human data, relying exclusively on reinforcement learning. If the bet pays off, it would be a regime change as profound as the GPT-3 → GPT-4 transition.


A $1.1B seed: what it actually means

For scale:

Round                            Amount   Valuation   Year
Ineffable Intelligence (seed)    $1.1B    $5.1B       2026
Mistral AI (seed)                $113M    $260M       2023
Inflection AI (Series A)         $225M    ~$1B        2022
Anthropic (seed/Series A)        $124M    ~$750M      2021
Safe Superintelligence (seed)    $1B      ~$5B        2024

The only real comparable is Ilya Sutskever's Safe Superintelligence, and even SSI did not raise $1.1B in a single round with this degree of strategic concentration (Nvidia + Google + DST). The Ineffable round is also the largest European seed on record, eclipsing the historic rounds of Mistral AI (now on its way to a billion in revenue) and the $890M raised by Yann LeCun's AMI Labs.

Who is David Silver and why are VCs paying this much

David Silver is no ordinary researcher. At DeepMind, he was the principal investigator of AlphaGo (2016, victory over Lee Sedol), AlphaZero (chess, shogi, and Go learned in 24 hours with zero human data), MuZero (planning in a learned latent model, without being given the environment's rules) and AlphaStar (StarCraft II at Grandmaster level). He also co-authored the paper "Reward is Enough" (2021), which argues that a simple reward signal is sufficient to produce general intelligence, without any supervised pre-training.

That thesis is the Ineffable bet: break through the human-data quality ceiling by building an agent that supplies its own curriculum. If it works, we move past the current plateau of frontier LLMs (GPT-5.5, Claude Mythos, Gemini 3.1) which remain fundamentally bound to the quality of the human pre-training corpus.

The co-investors: a very legible strategic signal

  • Sequoia + Lightspeed: they had not co-led a seed at this size since OpenAI. This is a frontier-scale portfolio bet.
  • Nvidia: now a regular cap-table presence, as at Cursor ($50B), Thinking Machines Lab (Mira Murati), and now Ineffable. Jensen Huang is assembling a complete frontier portfolio.
  • Google: surprising, given Silver came from DeepMind. Sundar Pichai clearly chose not to block his former researcher, and even to fund the competition, much as with Google's $40B investment in Anthropic.
  • DST Global: Yuri Milner usually plays growth rounds; his presence at seed stage is a growth-size conviction pulled forward.
  • UK Sovereign AI Fund: confirms the post-Callosum British strategy ($500M) of taking equity positions in European frontier labs.

The technical bet: "reward is enough"

The classic LLM approach is fraying. Pre-training is hitting the data wall: the usual web corpora (Common Crawl, scanned books, Reddit, GitHub) are close to exhausted, and quality degrades past 15-20T tokens. Hence the push toward synthetic data (Anthropic, OpenAI, Microsoft MAI) and the race for ever more sophisticated RLHF and RLAIF.

The Silver/Ineffable approach: short-circuit pre-training. Instead of starting from a human corpus and fine-tuning with RL, you start directly with RL, in simulated environments rich enough to produce transferable skills. AlphaZero already proved it for games: 24 hours of self-play, no human data, super-human performance. The open question: can it scale to language and reasoning?
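To make the distinction concrete, here is a minimal sketch of reward-only learning, in the spirit of the approach described above but in no way Ineffable's actual system: a tabular Q-learning agent that masters a toy corridor task from a scalar reward alone, with no human-labeled data. The environment, task, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy corridor: states 0..N-1, reward +1 only on reaching the last state.
# The agent learns purely from that scalar reward -- no human labels.
N = 6                # corridor length (illustrative)
ACTIONS = (-1, 1)    # step left or step right

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    done = (s2 == N - 1)
    return s2, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)               # explore
            else:                                     # exploit, random tie-break
                a = max(ACTIONS, key=lambda act: (q[(s, act)], rng.random()))
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy after training: one action per non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N - 1)}
```

The agent starts with zero knowledge and converges on the optimal policy (always move right) purely from trial, error, and reward. The open question above is whether this recipe survives the jump from toy corridors to language and reasoning.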

Several recent lines of work push this direction:

  • Self-play with critics (DeepMind 2024-2025)
  • Rich textual simulators (Adept, Reka, Inflection)
  • Meta-learning without labels (Meta HyperAgents)

Ineffable wants to build the full stack: simulation environments, agents, distributed RL scaling. The end goal, per Silver, is a system that invents its own curricula — going beyond what NeoCognition is planning with self-learning agents.

A seed = 4-5 years of frontier runway

At $1.1B and a planned headcount of 50-80 researchers, Ineffable has 4-5 years of runway at typical frontier-lab burn rates. That is more than Anthropic or OpenAI had in 2021-2022. It enables:

  • Building proprietary RL environments at scale
  • Buying Nvidia compute directly (Hopper 200 and Blackwell 300)
  • Aggressive recruiting from DeepMind, OpenAI, Anthropic, Google Brain
  • No IPO, no revenue pressure, pure R&D for 36 months
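The 4-5 year figure is easy to sanity-check with back-of-envelope arithmetic. Only the seed amount and headcount come from the article; the per-researcher cost and compute set-aside below are illustrative assumptions, not company figures:

```python
# Back-of-envelope runway check (assumptions labeled, not company figures).
SEED = 1.1e9                 # seed round (from the article)
HEADCOUNT = 80               # upper end of the stated 50-80 range
COST_PER_RESEARCHER = 1.5e6  # assumed fully-loaded annual cost per head
COMPUTE_SHARE = 0.5          # assumed fraction of the seed reserved for compute

payroll_per_year = HEADCOUNT * COST_PER_RESEARCHER   # $120M/yr
opex_budget = SEED * (1 - COMPUTE_SHARE)             # $550M for non-compute spend
runway_years = opex_budget / payroll_per_year        # ~4.6 years
```

Under these assumptions, the non-compute half of the seed alone covers roughly 4.6 years of payroll, consistent with the 4-5 year claim.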

That is exactly the strategy that lifted Anthropic — see Anthropic refusing $800B valuation to stay product-focused.

Why Europe and not Silicon Valley

Silver is British, and Ineffable is headquartered in London (with R&D in Cambridge). Several reasons:

  1. DeepMind diaspora: the London hub concentrates roughly 600 senior ex-DeepMind staff post-2020, more accessible than Mountain View.
  2. Sovereign AI Fund: $500M deployed in 18 months, explicitly mandated to fund UK frontier labs (UK Callosum).
  3. Researcher cost: 40-50% lower than San Francisco / Bay Area on total comp.
  4. Regulation: the UK AI Safety Institute is more collaborative than the US FTC or the EU AI Act's continental enforcement regime.

Combined with AMI Labs (Yann LeCun) in Paris and Mistral AI, Europe is closing part of the frontier gap. This is the first time since DeepMind in 2014 that a European project has positioned itself as a direct competitor to OpenAI and Anthropic.

Open questions

  1. Product release: Ineffable refuses any timeline for now. No model announced for 2026, possibly H2 2027.
  2. Compute: $1.1B = ~150,000 H100-equivalents if 50% of the seed goes to compute. Comparable to OpenAI in 2023.
  3. Governance: Ineffable is incorporated as a public benefit corporation, like Anthropic and SSI — but the board has yet to be disclosed.
  4. Direct competition: SSI (Sutskever), DeepMind itself, and soon OpenAI on its own RL track. Ineffable will not enjoy a solitary window.
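The compute estimate in point 2 is worth unpacking. Dividing the assumed compute budget by the article's 150,000 H100-equivalents implies roughly $3.7k per unit, far below hardware purchase prices, so the figure presumably assumes amortized, discounted, or rental access:

```python
# Implied per-unit cost behind the article's compute estimate.
SEED = 1.1e9
COMPUTE_SHARE = 0.5          # "if 50% of the seed goes to compute"
H100_EQUIVALENTS = 150_000   # the article's estimate

compute_budget = SEED * COMPUTE_SHARE                       # $550M
implied_cost_per_unit = compute_budget / H100_EQUIVALENTS   # ~$3,667 per unit
```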

What it changes for the AI market

Short term, nothing changes: no product, no model, no competition on usage. Medium term (12-24 months), this is a price signal for frontier seeds. A $1.1B seed sets a new floor for upcoming Anthropic-class rounds. VCs will have to adjust:

  • Frontier seeds no longer happen below $100-200M
  • Seed valuations are up 50x in 4 years (from $100M to $5B)
  • Academic track record is alpha again — Silver, Sutskever, Murati, LeCun, Gomez

This is the return of founder-as-asset in an industry that thought it could move past it, and it is the confirmation that Silicon Valley no longer monopolizes the frontier — Europe has a seat at the table, as the historic AI week in late April made clear.

Bottom line

Ineffable Intelligence just redefined three things: the size of frontier seeds, Europe's place in the AGI race, and the viability of an RL-native approach without human data. David Silver puts his academic reputation on the line for the thesis that could break the current LLM ceiling. If it works, the horizon is 2028. If it does not, $1.1B bought Sequoia, Lightspeed and Nvidia a call option on the most technically credible AGI scenario today. Either way, the market is already a winner.

#ineffable-intelligence #david-silver #deepmind #sequoia #lightspeed #nvidia #google #reinforcement-learning #superintelligence #seed-record