Deep Research Report

The Evolution of
Chess Engines

From Claude Shannon's 1950 paper to Stockfish 18 — how silicon learned to outplay humanity, and what it means for artificial intelligence.

Years of progress: 76
Peak Elo (Stockfish): 3,653
Shannon number: 10^120
Fishtest games played: 9.9B

The Timeline

From a theoretical paper to superhuman intelligence in 76 years. Every milestone that mattered.

1950

Shannon's Paper

Claude Shannon publishes "Programming a Computer for Playing Chess," proposing two strategies: Type A (brute-force exhaustive search) and Type B (selective, human-like pruning). He estimates the game-tree complexity at ~10^120 possible games, showing that brute-force solution is infeasible. This paper becomes the foundational blueprint for all chess programming.

Foundational Theory
1951

Turing's Algorithm

Alan Turing creates the first chess-playing algorithm (later known as Turochamp), though no computer of the day is powerful enough to run it; he hand-simulates it on paper. Dietrich Prinz programs the Ferranti Mark 1 at Manchester to solve mate-in-two problems by exhaustively checking every possible move.

First Algorithm
1957

Bernstein's Engine

Alex Bernstein creates the first fully functional chess engine that can play a complete game from start to finish. Each move takes approximately 8 minutes to compute. The program uses Shannon's Type B selective search.

First Complete Engine
1966-67

Mac Hack VI & Soviet Programs

MIT's Richard Greenblatt writes Mac Hack VI, the first program to enter a human tournament (the 1967 Massachusetts Amateur Championship, where it earns a 1243 rating) and, later that year, the first to defeat a human in tournament play. Meanwhile, the Soviet ITEP program defeats the American Kotok-McCarthy program 3-1 in a historic East-West computer chess match conducted by telegraph over nine months.

Tournament Play Begins
1970-79

Chess 4.x & Kaissa

Northwestern University's Chess series (Larry Atkin, David Slate, Keith Gorlen) dominates the early US Computer Chess Championships, winning in 1970-1973 and 1975. Chess 4.0 (1973) pioneers full-width Type A search, abandoning selective pruning. In 1974, the Soviet program Kaissa becomes the first World Computer Chess Champion. In 1977, Chess 4.6 defeats US Champion GM Walter Browne in a simultaneous exhibition.

Competitive Era Begins
1972-83

Belle: First Chess Master

Ken Thompson and Joe Condon at Bell Labs build Belle, pioneering custom chess hardware with dedicated boards for move generation and position evaluation and a microcoded alpha-beta search. Belle wins 5 ACM championships and the 1980 World Computer Chess Championship. In 1983, Belle achieves a USCF Master rating of 2250 — the first machine to reach that level. Its architecture directly inspires the line of designs that leads to Deep Blue.

Hardware Revolution
1985-89

ChipTest & Deep Thought

At Carnegie Mellon, Feng-hsiung Hsu, Thomas Anantharaman, and Murray Campbell build ChipTest (1986) and evolve it into Deep Thought (1988). Deep Thought becomes the first computer to defeat a grandmaster in tournament play (Bent Larsen, 1988) and wins the 1989 World Computer Chess Championship 5-0. That same year, however, it falls 0-2 to Kasparov in a two-game exhibition match, proving the world champion is still beyond reach.

Grandmaster Barrier Broken
1996-97

Deep Blue vs. Kasparov

IBM hires the Carnegie Mellon team to build Deep Blue: an RS/6000 SP supercomputer with 30 PowerPC 604e processors and 480 custom VLSI chess chips, evaluating 200 million positions per second. In 1996, Kasparov wins the match 4-2, but Deep Blue takes Game 1 — the first time a reigning world champion loses a game to a computer under tournament conditions. In the 1997 rematch, an upgraded Deep Blue wins 3.5-2.5, the first match defeat of a reigning world champion by a machine.

World Champion Falls
2004-08

Stockfish Origins

Tord Romstad creates Glaurung, an open-source chess engine (2004). In 2008, Marco Costalba forks it and creates Stockfish, named because the code was "produced in Norway and cooked in Italy." The first version is released November 2, 2008. Stockfish pairs alpha-beta search with a hand-crafted evaluation — the gold standard of conventional engines.

Open Source Revolution
2013

Fishtest Distributed Testing

Stockfish launches Fishtest, a distributed testing framework where volunteers donate CPU time. Using the sequential probability ratio test across thousands of games, the community can rigorously vet every proposed improvement (a simplified sketch of the test appears below). In its first year, Fishtest helps Stockfish gain 120 Elo points, propelling it to the top of all major rating lists. By 2026, Fishtest has accumulated 19,900+ years of CPU time and 9.9 billion games played.

Community-Driven Development
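
The statistical engine behind Fishtest is Wald's sequential probability ratio test (SPRT): keep playing games until the accumulated evidence crosses a pass or fail bound. The sketch below is a deliberately simplified version over decisive games only, with hypothetical function names and parameters; the real Fishtest models draws and game pairs with a pentanomial model.

```cpp
#include <cmath>
#include <cstdio>

// Simplified Wald SPRT over decisive games (draws ignored).
// H0: the patch wins with probability p0 (no improvement).
// H1: the patch wins with probability p1 (real improvement).
// Fishtest's actual test uses a pentanomial model over game pairs.
enum class Verdict { Continue, Fail, Pass };

Verdict sprt(int wins, int losses, double p0, double p1,
             double alpha = 0.05, double beta = 0.05) {
    // Log-likelihood ratio of the observed results under H1 versus H0.
    double llr = wins * std::log(p1 / p0)
               + losses * std::log((1.0 - p1) / (1.0 - p0));
    if (llr >= std::log((1.0 - beta) / alpha)) return Verdict::Pass;
    if (llr <= std::log(beta / (1.0 - alpha))) return Verdict::Fail;
    return Verdict::Continue;  // not enough evidence yet: play more games
}

int main() {
    // Example run: 540 wins to 460 losses, testing p0 = 0.50 vs p1 = 0.53.
    Verdict v = sprt(540, 460, 0.50, 0.53);
    std::printf("%s\n", v == Verdict::Pass ? "pass"
                      : v == Verdict::Fail ? "fail" : "continue");
}
```

The sequential design is the point: obviously bad patches fail after a few hundred games, while borderline ones run long enough to keep false positives and false negatives below the chosen alpha and beta.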
Dec 2017

AlphaZero Shakes the World

DeepMind unveils AlphaZero, which teaches itself chess from scratch in four hours of self-play reinforcement learning — no opening book, no endgame tables, no human knowledge beyond the rules. Trained with MCTS-guided self-play on 5,000 TPUs, it defeats Stockfish 8 with +28 -0 =72 in a 100-game match. Its "alien" playing style — sacrifice-heavy, positionally profound, and deeply creative — stuns the chess world (a sketch of its search rule appears below). Published in Science (Dec 2018).

Paradigm Shift
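
The heart of AlphaZero-style search is the PUCT selection rule: each simulation descends the tree picking the move that maximizes Q + U, where U weights the policy network's prior by visit counts. A minimal sketch with assumed node fields (the Child struct and c_puct name are hypothetical; the constant is a tunable exploration weight):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One entry per move at a position, as in AlphaZero-style MCTS (PUCT).
struct Child {
    double prior;       // P(s,a): policy network probability for this move
    double value_sum;   // sum of values backed up through this child
    int    visits;      // N(s,a)
    double q() const { return visits ? value_sum / visits : 0.0; }
};

// Select the child maximizing Q(s,a) + U(s,a), where
// U(s,a) = c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a)).
int select_child(const std::vector<Child>& children, double c_puct = 1.5) {
    int parent_visits = 0;
    for (const Child& c : children) parent_visits += c.visits;

    int best = 0;
    double best_score = -1e9;
    for (int i = 0; i < (int)children.size(); ++i) {
        const Child& c = children[i];
        double u = c_puct * c.prior * std::sqrt((double)parent_visits)
                 / (1.0 + c.visits);
        double score = c.q() + u;  // exploitation + prior-guided exploration
        if (score > best_score) { best_score = score; best = i; }
    }
    return best;
}

int main() {
    // After one visit through the first child (q = 0.2), it still leads.
    std::vector<Child> kids = {{0.6, 0.2, 1}, {0.4, 0.0, 0}};
    std::printf("selected child %d\n", select_child(kids));
}
```

Low-visit moves with high priors get explored first; as visits accumulate, the exploration bonus shrinks and the empirical value Q dominates — which is why the search is selective rather than full-width.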
Jan 2018

Leela Chess Zero (Lc0)

Gary Linscott (a Stockfish developer) launches Leela Chess Zero, an open-source reimplementation of AlphaZero's approach. The project uses distributed volunteer computing: thousands of contributors generate self-play games to train a shared neural network. By 2026, Lc0 has played over 2.5 billion self-play games. In 2022, Lc0 begins moving from convolutional networks to a transformer architecture, gaining ~300 Elo in raw policy strength.

Open-Source AlphaZero
2018

NNUE Invented for Shogi

Yu Nasu publishes NNUE (Efficiently Updatable Neural Network), an evaluation architecture for the shogi engine YaneuraOu (developed by Motohiro Isozaki). The key insight: design the input layer so that inputs change minimally between evaluations (a typical move toggles only a handful of features), enabling CPU-efficient incremental computation at millions of evaluations per second; a sketch follows below. The name is a Japanese wordplay on "Nue," a mythical chimera.

NNUE Born
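
What "efficiently updatable" means in code, as a toy sketch (the feature and layer sizes here are hypothetical; real NNUE nets use HalfKP-style features, per-side int16 accumulators, and hand-written SIMD): the first-layer sums live in an accumulator that is patched when features toggle, instead of being recomputed from scratch on every evaluation.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Toy first layer of an NNUE-style network: FEATURES inputs, HIDDEN outputs.
constexpr int FEATURES = 4096;  // illustrative feature count
constexpr int HIDDEN   = 256;

// weights[f] is the column added to the accumulator when feature f is active.
std::array<std::array<int16_t, HIDDEN>, FEATURES> weights{};

struct Accumulator {
    std::array<int32_t, HIDDEN> v{};

    // Full refresh: O(active features x HIDDEN). Done rarely.
    void refresh(const std::vector<int>& active) {
        v.fill(0);
        for (int f : active)
            for (int j = 0; j < HIDDEN; ++j) v[j] += weights[f][j];
    }

    // Incremental update: a quiet move toggles only a few features,
    // so subtract the removed columns and add the new ones.
    void update(const std::vector<int>& removed, const std::vector<int>& added) {
        for (int f : removed)
            for (int j = 0; j < HIDDEN; ++j) v[j] -= weights[f][j];
        for (int f : added)
            for (int j = 0; j < HIDDEN; ++j) v[j] += weights[f][j];
    }
};

int main() {
    Accumulator acc;
    acc.refresh({10, 42, 77});  // active features of the starting position
    acc.update({42}, {43});     // a move: feature 42 off, feature 43 on
}
```

A quiet move touches a handful of columns; in king-relative feature sets like HalfKP, only a king move forces the expensive full refresh of that side's accumulator.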
Sep 2020

Stockfish 12: The NNUE Merger

After Hisayori "Nodchip" Noda ports NNUE to Stockfish in 2020, Stockfish 12 officially incorporates it. The result: roughly +100 Elo in one month — about two years of normal improvement compressed into a single architectural change. Stockfish now combines its battle-tested alpha-beta search with a neural network evaluation, creating a hybrid that outperforms both pure classical and pure neural approaches.

Hybrid Revolution
Nov 2020

Komodo Dragon: Another Hybrid

Komodo releases Dragon 1.0, integrating NNUE with both alpha-beta and MCTS search modes. Designed by Larry Kaufman and Mark Lefler (continuing Don Dailey's legacy after his death in 2013), Dragon uses a neural network trained with supervision from GM Kaufman plus reinforcement learning from billions of positions, and it can switch between traditional alpha-beta and Monte Carlo search.

MCTS + NNUE Hybrid
2023-25

Stockfish Dominance

Stockfish 16 (+50 Elo over SF15), 16.1, 17 (+46 Elo over SF16), and 17.1 continue the relentless climb. The engine wins every TCEC and CCC superfinal from Season 20 onward. Neural network training shifts to automated, reproducible pipelines built on over 100 billion positions of Lc0 evaluation data. CCRL ratings push past 3,700.

Unstoppable
Jan 2026

Stockfish 18

Released January 31, 2026. Introduces the SFNNv10 architecture with "Threat Inputs" for natural threat perception, "Shared Memory" for multi-process efficiency, and "Correction History" for dynamic evaluation adjustment, and removes the 1024-thread limit. +46 Elo over Stockfish 17, winning four times as many game pairs as it loses. The strongest chess entity ever created.

Current State of the Art

Elo Progression

From roughly 1,200 to 3,700+ in six decades. The chart below shows approximate peak engine Elo by era, illustrating the relentless ascent of machine chess strength.

Peak Engine Elo by Era (approximate, normalized to FIDE scale)

Mac Hack VI (1967): ~1,240
Chess 4.6 (1977): ~1,950
Belle (1983): ~2,250
Deep Thought (1989): ~2,550
Deep Blue (1997): ~2,800
Rybka 3 (2008): ~3,100
Stockfish 5 (2014): ~3,300
Stockfish 8 (2017): ~3,400
AlphaZero (2017): ~3,500+
Stockfish 12 NNUE (2020): ~3,500
Stockfish 15 (2022): ~3,550
Stockfish 16 (2023): ~3,600
Stockfish 17 (2024): ~3,650
Stockfish 18 (2026): ~3,700
Lc0 (2025, CCRL): ~3,713

Note: Elo values are approximate and vary by rating list (CCRL, CEGT, TCEC, etc.) and time control. Early engine ratings use the USCF scale; modern ratings are from CCRL 40/15. Direct comparisons across eras are imprecise due to differing hardware and opponents.
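
To turn these rating gaps into expected results: under the standard Elo model, a rating difference of D points gives the stronger side an expected score of

\[ E = \frac{1}{1 + 10^{-D/400}} \]

For example, Stockfish 18's +46 Elo over Stockfish 17 corresponds to E = 1/(1 + 10^(-46/400)) ≈ 0.566, about 57% of the available points — which is why even substantial engine gaps show up mostly as extra draws and narrow match wins at this level.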

The AlphaZero Revolution

In December 2017, DeepMind changed everything. A program that knew nothing about chess — not even the value of a queen — taught itself to play at a superhuman level in four hours.

"It's like chess from another dimension."
Demis Hassabis, CEO of DeepMind
Training time: 4 hours (zero to superhuman)
Self-play games: 44M (during training)
Positions/sec: 80K (vs Stockfish's 70M)
Result vs Stockfish 8: +28 -0 =72 (no losses in 100 games)
First-generation TPUs: 5,000 (for self-play generation)
Second-generation TPUs: 64 (for neural net training)

The NNUE Revolution

How a neural network architecture from Japanese chess (shogi) transformed Stockfish overnight and created the dominant hybrid approach.

"NNUE gave Stockfish the equivalent of two years of improvement in a single month."
Stockfish Development Team, September 2020

Technical Comparison

Three paradigms, one game. How the major approaches to chess engine design differ in philosophy and implementation.

Alpha-Beta + NNUE (Stockfish, Komodo Dragon)
Search: alpha-beta
Evaluation: NNUE (neural)
Positions/sec: 50-100M
Hardware: CPU (multi-threaded)
Search depth: 30-50+ plies
Style: tactical precision
Learning: supervised + RL

MCTS + Neural Net (AlphaZero, Leela Chess Zero)
Search: MCTS
Evaluation: deep NN (value head)
Positions/sec: 40-80K
Hardware: GPU / TPU
Search depth: variable (selective)
Style: positional / creative
Learning: pure self-play RL

Classical, Historical (Deep Blue, pre-NNUE Stockfish)
Search: alpha-beta
Evaluation: hand-crafted heuristics
Positions/sec: 10-200M
Hardware: CPU / custom ASIC
Search depth: 6-20+ plies
Style: materialistic / tactical
Learning: human-engineered
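
To make the first and third columns concrete, here is the search skeleton they share. A minimal negamax sketch with alpha-beta cutoffs over a stand-in toy "game" (Position, moves, make, and evaluate are placeholders; in a real engine, evaluate is the hand-crafted heuristic of the classical era or today's NNUE forward pass):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy stand-ins: a position is just an int and every move adds a delta.
// Real engines generate legal moves and evaluate from the side to move.
using Position = int;
std::vector<int> moves(Position) { return {-1, 1, 2}; }
Position make(Position p, int m) { return p + m; }
int evaluate(Position p) { return p; }  // placeholder static evaluation

// Negamax with alpha-beta pruning: stop searching a move the moment it is
// proven worse than an alternative the opponent already has (beta cutoff).
int search(Position pos, int depth, int alpha, int beta) {
    if (depth == 0) return evaluate(pos);
    for (int m : moves(pos)) {
        int score = -search(make(pos, m), depth - 1, -beta, -alpha);
        if (score >= beta) return beta;   // cutoff: refutation found
        alpha = std::max(alpha, score);   // raise the best guaranteed score
    }
    return alpha;
}

int main() {
    std::printf("best score: %d\n", search(0, 4, -100000, 100000));
}
```

Iterative deepening, move ordering, transposition tables, and pruning heuristics are what turn this dozen-line core into a 3,700-Elo engine, but every alpha-beta entry in the table above runs some descendant of this loop.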

The Arena: TCEC & CCC

The Top Chess Engine Championship (TCEC) and Chess.com Computer Chess Championship (CCC) are the premier proving grounds where engines battle under controlled conditions.

TCEC Superfinal Winners (Recent Seasons)

Season 14 (2018): Stockfish (def. Komodo)
Season 15 (2019): Leela Chess Zero (53.5-46.5 vs Stockfish)
Season 16 (2019): Stockfish (def. AllieStein)
Season 17 (2020): Leela Chess Zero (52.5-47.5 vs Stockfish)
Season 18 (2020): Stockfish (53.5-46.5 vs Lc0)
Season 19 (2020): Stockfish NNUE (54.5-45.5 vs Lc0)
Season 20 (2021): Stockfish (+14 -8 =78 vs Lc0)
Season 21 (2021): Stockfish (+19 -7 =74 vs Lc0)
Season 23 (2022): Stockfish (+27 -10 =63 vs Lc0)
Season 25 (2024): Stockfish (+27 -23 =50 vs Lc0)
Season 26 (2024): Stockfish (+31 -17 =52 vs Lc0)
Season 28 (2026): Stockfish (+36 -21 =43 vs Lc0)

After Lc0's victories in Seasons 15 and 17, Stockfish has won every subsequent TCEC Superfinal — a streak of more than ten consecutive championships. The NNUE integration in Season 19 marked the turning point: Stockfish absorbed the neural network revolution into its own framework and came back stronger than ever. The competitive field is widening, though, with engines like Obsidian (~3,686 Elo), Caissa (~3,641), Komodo Dragon (~3,634), and Berserk (~3,615) all reaching top-tier strength.

The Fat Fritz Controversy

Open source vs. commercial interests: a cautionary tale about what happens when companies try to monetize community work.

Hardware: The Compute Question

From room-sized supercomputers to smartphone apps. How much compute does a top chess engine actually need?

CPU Engines

Stockfish, Komodo Dragon

Optimal hardware: High-core-count x86 CPUs with AVX-512 or AVX-VNNI support. Stockfish 18 removes the 1024-thread limit.

Consumer level: A modern 16-core AMD Ryzen or Intel Core i9 runs Stockfish at full strength (~3,600+ Elo). Even a 4-core laptop plays at a superhuman level.

Competition level: TCEC uses high-end server hardware (128+ threads). Fishtest volunteers collectively provide thousands of CPU cores.

NNUE efficiency: Uses int8/int16 SIMD instructions, requiring no GPU. The network weights are small enough to share between processes via Stockfish 18's "Shared Memory" feature (a sketch of the integer arithmetic follows below).
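
As a rough illustration of that arithmetic (the layer sizes below are made up, and Stockfish itself uses hand-tuned SIMD intrinsics rather than plain loops): NNUE's later layers reduce to small integer dot products, which compilers auto-vectorize to the AVX instructions mentioned above.

```cpp
#include <cstdint>

// A small integer fully connected layer, shaped like NNUE's later layers.
// IN/OUT are illustrative sizes, not Stockfish's actual dimensions.
constexpr int IN = 32, OUT = 32;

// int8 weights/activations with int32 accumulation: cheap enough that a
// CPU evaluates millions of positions per second, no GPU required.
void linear(const int8_t in[IN], const int8_t w[OUT][IN],
            const int32_t bias[OUT], int32_t out[OUT]) {
    for (int o = 0; o < OUT; ++o) {
        int32_t sum = bias[o];
        for (int i = 0; i < IN; ++i) sum += int32_t(w[o][i]) * in[i];
        out[o] = sum;  // a real net clips/quantizes before the next layer
    }
}

int main() {
    int8_t in[IN] = {}, w[OUT][IN] = {};
    int32_t bias[OUT] = {}, out[OUT];
    linear(in, w, bias, out);
}
```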


GPU Engines

Leela Chess Zero

Optimal hardware: High-end NVIDIA GPUs (RTX 4090, A100, H100) with CUDA/cuDNN support. Performance scales roughly linearly with GPU compute.

Consumer level: An RTX 3080 or better provides competitive play (~3,500+ Elo). Older or smaller GPUs still work, but at significantly reduced strength.

Competition level: TCEC provides Lc0 with an NVIDIA A100 80GB. Google's AlphaZero used 4 TPUs for inference (5,000 TPUs for training).

Transformer networks: Lc0's BT4 transformer model benefits from larger GPU memory and higher throughput, scaling better than older convolutional architectures.

Is Chess Solved?

With engines rated 3,700+ Elo, are we approaching perfect play? The answer is more complicated than you might think.

Shannon number: 10^120 (possible game variations)
Legal positions: ~10^50 (estimated by Allis)
Atoms in the observable universe: ~10^80 (for comparison)
Endgame pieces solved: 7 (via endgame tablebases)
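
Where the Shannon number comes from: Shannon's 1950 estimate assumes roughly 30 reasonable moves per side, or about 10^3 continuations per full move pair, over a typical game of about 40 move pairs:

\[ \left(10^{3}\right)^{40} = 10^{120} \]

That dwarfs the ~10^80 atoms listed above, which is why exhaustively enumerating every game was never an option; any solution of chess would have to come from smarter mathematics, not bigger computers.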

The Current Landscape

As of early 2026, the top engines and their approximate CCRL 40/15 ratings.

Rank  Engine            Type             Elo (CCRL)
1     Stockfish 18      AB + NNUE        ~3,759
2     Leela Chess Zero  MCTS + NN        ~3,713
3     Obsidian          AB + NNUE        ~3,686
4     Caissa            AB + NNUE        ~3,641
5     Komodo Dragon     AB/MCTS + NNUE   ~3,634
6     PlentyChess       AB + NNUE        ~3,623
7     Berserk           AB + NNUE        ~3,615

Ratings from the CCRL 40/15 rating list (Nov 2025). Exact values vary by time control and rating list. Stockfish has won every TCEC and CCC main event since mid-2020.

What It All Means

1
Algorithms Beat Hardware

Deep Blue needed a room-sized supercomputer to play at ~2,800 Elo. Today's Stockfish reaches ~3,700+ on a laptop. That gain came primarily from algorithmic advances (alpha-beta refinements, NNUE evaluation, search heuristics), not raw compute.

2
The Hybrid Won

Neither pure brute-force search nor pure neural networks won the war. Stockfish's dominance comes from combining the best of both: alpha-beta's tactical depth with NNUE's positional intuition. Even Lc0's self-play data now feeds Stockfish's training. The paradigms have merged.

3
Open Source Wins

Stockfish (open-source, community-developed) surpassed every commercial engine. 900+ contributors, 19,900+ years of donated CPU time, and a rigorous distributed testing framework proved that decentralized development can beat corporate R&D — even IBM and Google.

4
AlphaZero Changed the Conversation

Even though AlphaZero itself was never publicly released and Stockfish has since surpassed it, its impact was permanent. It proved that reinforcement learning from scratch could master chess. It forced the entire field to adopt neural networks. And its "alien" games showed that chess still has undiscovered depths.
