♔ ♦ ♚

The Grandmaster's Twilight

On what it means that a machine plays our greatest thinking game better than any human who has ever lived — and what that silence tells us about intelligence, meaning, and ourselves

Research Compilation — March 2026


“Deep Blue was intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better.”

— Garry Kasparov, Deep Thinking (2017)
Section I

The Canary in the Coal Mine

For nearly half a millennium, chess has served as civilization's shorthand for supreme intelligence. When we say someone “plays chess” with their rivals, we mean they think several moves ahead. When we call someone a “grandmaster” of their domain, we invoke the highest form of intellectual mastery. Opponents reportedly compared losing to Einstein at the board to “playing against the theory of relativity itself.” Bobby Fischer declared: “I consider myself to be a genius who just happens to play chess.”

Since the mid-1960s, AI researchers have referred to chess as the “drosophila” of artificial intelligence—the fruit fly of the mind. As historian Nathan Ensmenger documented, chess was chosen because it was “well-known and popular, and was also generally recognized as a complex, creative game that required strategy and planning; thus, the ability to play good chess was widely considered to be a strong indicator of more general intelligence.”

Herbert Simon, co-creator of the General Problem Solver, predicted in 1957 that within ten years a computer would be world chess champion. He and Allen Newell believed that if machines could master chess, they could eventually handle “the range to which the human mind has been applied.” Simon's ten years turned out to be forty. But the prediction held a deeper truth: chess would indeed become the proving ground where humanity first confronted its intellectual replaceability.

The question was never really about chess. It was about what chess represented—a proxy war for the nature of thought itself.

Core Tension

Chess was chosen as the benchmark for artificial intelligence precisely because it embodied what humans valued most about their own minds: strategic depth, creativity, pattern recognition, and the ability to think ahead. The irony is that when machines finally conquered the game, they did so through brute computation—revealing that what we thought of as the pinnacle of human cognition could be replicated, then surpassed, by a fundamentally alien process.

The Drosophila Problem: What Chess Didn't Teach AI

Ensmenger's research revealed an uncomfortable irony: “unlike drosophila, and despite its apparent productivity as an experimental technology, computer chess ultimately produced little in terms of fundamental theoretical insights.” The AI community chose chess as its model organism for intelligence, but the methods that eventually conquered the game—raw computational power and evaluation functions—told us almost nothing about how human minds actually work. Chess was not the drosophila of AI; it was more like a hall of mirrors, reflecting back our assumptions about intelligence rather than illuminating its actual nature.

Section II

The Grandmaster's Defeat

On May 11, 1997, Garry Kasparov—widely considered the greatest chess player in human history—sat hunched over a chessboard in New York City, visibly frustrated, fidgeting and shaking his head in disbelief. Across from him, IBM’s Deep Blue completed its victory in game six of their rematch. The machine had won. Newsweek had billed the match as “The Brain’s Last Stand.”

The framing was apocalyptic. Steven Levy’s Newsweek cover story suggested that “the outcome could be seen as an early indication of how well our species might maintain its identity, let alone its superiority, in the years and centuries to come.” Cable news pundits cast it as “humanity versus AI.” Kasparov himself described the weight of the moment: “It was very emotional. You are defending humanity.”

I knew it was over and I wanted to finish as soon as possible.

— Garry Kasparov, on game six of the 1997 match

The psychological warfare was uniquely devastating. Kasparov had spent his career reading opponents—studying their body language, exploiting their fears, sensing when they were uncertain. Deep Blue offered nothing to read. As one reviewer noted, the machine’s “implacable blankness” prevented Kasparov from employing the psychological dimension that had always been central to his dominance. He was not just outplayed; he was psychologically disoriented by an entity that had no psychology to disrupt.

The Arc of Acceptance

1997

The Defeat

Kasparov is bitter. He accuses IBM of cheating, demands a rematch, questions whether human intervention influenced critical moves. The loss is “physically painful.”

1998

Advanced Chess

Kasparov invents “advanced chess”—human-computer teams competing against each other. A radical pivot from resentment toward collaboration.

2010

The New York Review of Books

Kasparov publishes a landmark essay reflecting on the centaur chess phenomenon, observing that “humans today are starting to play chess more like computers.”

2017

Deep Thinking

Kasparov publishes his philosophical reckoning. The man who once raged against the machine becomes one of AI’s most articulate advocates.

2021

Harvard Business Review

Kasparov writes that AI should “augment human intelligence, not replace it,” advocating for “Augmented Intelligence.”

I think it’s more like a blessing. I was part of something unique.

— Garry Kasparov, 2016 interview, on his match with Deep Blue

Kasparov’s psychological evolution—from bitter opponent to eloquent advocate—may be the single most instructive human response to AI displacement in recorded history. He did not merely accept the machine’s superiority; he transmuted that acceptance into a philosophy of human-machine partnership that now informs the global AI discourse.

Section III

Redefining Intelligence

For centuries, chess served as the operational definition of intelligence. If you could play chess well, you were smart. If you could play at grandmaster level, you were among the most intelligent humans alive. This equation was so deeply embedded in culture that “grandmaster” became a generic term for supreme expertise in any field.

Then a machine that evaluated 200 million positions per second—using what Claude Shannon had called the “Type A” brute-force approach—proved that the outputs of intelligence could be replicated without any of its internal character. Deep Blue did not think. It did not intuit. It did not feel the aesthetic pull of a beautiful combination. It simply counted.
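Shannon’s Type A strategy is simple enough to sketch in a few lines: enumerate every legal move, recurse to a fixed depth, and score the resulting positions with an evaluation function. The sketch below is illustrative only—it substitutes a toy subtraction game for chess (remove 1–3 stones; whoever takes the last stone wins), since Deep Blue’s real evaluator weighed thousands of chess-specific features in custom hardware.

```python
# Minimal sketch of Shannon's "Type A" approach: exhaustive fixed-depth
# minimax plus an evaluation function. The game is a toy stand-in
# (take 1-3 stones; taking the last stone wins), not chess.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def evaluate(pile, maximizing):
    # Terminal position: the side to move has nothing left to take and loses.
    # Non-terminal positions get a neutral score here; Deep Blue's evaluator
    # instead weighed material, king safety, mobility, and much else.
    if pile == 0:
        return -1 if maximizing else 1
    return 0

def minimax(pile, depth, maximizing):
    if pile == 0 or depth == 0:
        return evaluate(pile, maximizing)
    scores = [minimax(pile - m, depth - 1, not maximizing)
              for m in legal_moves(pile)]
    return max(scores) if maximizing else min(scores)

def best_move(pile, depth=8):
    # Choose the move whose resulting position scores best for us.
    return max(legal_moves(pile),
               key=lambda m: minimax(pile - m, depth - 1, False))
```

From a pile of six stones the search finds the winning reply (take two, leaving a multiple of four). Scale the position count up by eight orders of magnitude and this brute enumeration is, in essence, what 200 million evaluations per second bought.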

Deep Blue was as intelligent as your alarm clock. It was not about intelligence, it was massive brute force.

— Garry Kasparov

This forced a painful philosophical reckoning. If the outputs of intelligence can be separated from the experience of intelligence, then what exactly had we been measuring when we called chess players intelligent? The Los Angeles Review of Books captured this dissonance in a review of Kasparov’s Deep Thinking: the early AI researchers operated under a “flawed circular logic: if brains compute thoughts like computers, then programming computer codes could replicate cognition.” They were wrong about the mechanism, but the outputs matched anyway.

Intelligence was never about chess specifically—it was about the general capabilities that chess demanded: planning, pattern recognition, strategic thinking, creativity under constraint. That a machine can replicate the outputs using a completely different process doesn’t diminish human intelligence; it reveals that we were measuring the wrong thing. Intelligence is broader than any single benchmark.

If every “benchmark” of intelligence falls to computation one by one—chess, Go, protein folding, mathematical proof, creative writing—at what point does the redefinition become an act of defensive goalpost-moving? We keep redefining intelligence as “whatever machines can’t do yet,” but the list grows shorter every year.

Kasparov himself grappled with this tension. In Deep Thinking, he writes: “The benefits of computer processing are easy to measure in speed and output, while benefits of human thought are often immeasurable.” And here lies the existential risk: “Given contemporary society’s worship of the measurable and suspicion of the ineffable, our own intelligence would seem to be at a disadvantage.”

Chess Mastery as Embodied Intelligence

Neurological research has revealed something surprising about how grandmasters actually think. Rather than the methodical, logical analysis that the “chess as pure intellect” stereotype suggests, move generation in chess masters involves “more visuospatial brain activity than calculation.” Grandmasters read positions the way a native speaker reads a sentence—holistically, intuitively, with a fluency that defies step-by-step analysis. Kasparov challenges the stereotype of chess as pure cerebral calculation, noting that the game demands “stamina, resilience, and an aptitude for psychological warfare”—embodied qualities that no machine possesses.

This suggests that chess mastery was always a more human, more physical, more intuitive achievement than the “chess as logic” metaphor allowed. The machine didn’t replicate human chess intelligence; it achieved similar outputs through an entirely different architecture. It’s as if we discovered flight by building rocket engines rather than wings.

Section IV

The Philosophers Respond

The conquest of chess by machines forced some of the twentieth century’s most important thinkers to revise their understanding of mind, intelligence, and the prospects of artificial thought.

Hubert Dreyfus
The Heidegger-wielding critic of AI who insisted machines could never play decent chess—lost to a chess program himself, then watched computers conquer the game entirely. Yet his deeper critique of GOFAI proved prescient.
Daniel Dennett
Proposed the “intentional stance”—that we can meaningfully say a chess computer “believes” a queen capture provides advantage, and this interpretation is valid regardless of mechanism.
John Searle
His Chinese Room argument insists that Stockfish, like the man in the room with Chinese symbols, manipulates syntax without semantics—it produces chess moves without understanding chess.
Douglas Hofstadter
In GEB, speculated that no computer would beat any human chess champion. Later admitted he was “dead wrong.” Now says he is “very frightened” by AI and that humanity is “collectively playing with fire.”

Dreyfus: Right and Wrong

Hubert Dreyfus spent decades arguing from Heidegger and Merleau-Ponty that GOFAI—Good Old-Fashioned AI based on symbolic reasoning and rule-following—would inevitably fail. His 1965 RAND paper “Alchemy and Artificial Intelligence” was a philosophical hand grenade thrown into the MIT AI Lab. Seymour Papert arranged a chess match between Dreyfus and the program Mac Hack; Dreyfus lost, to the AI community’s delight.

But here is the irony that history has not fully appreciated: Dreyfus’s specific prediction was wrong—machines did play excellent chess—but his deeper critique was vindicated. GOFAI did fail. Deep Blue conquered chess not through the symbolic reasoning Dreyfus attacked, but through brute computation. And the subsequent revolution in AI—neural networks, deep learning, statistical models—represents exactly the kind of approach Dreyfus was gesturing toward when he invoked embodied, non-representational cognition. As he put it in 2007: “I figure I won and it’s over—they’ve given up.”

Dennett: The Intentional Stance

Daniel Dennett took a radically different approach. For Dennett, the question “does a chess computer really think?” is less important than whether treating it as if it thinks produces useful predictions. His concept of the “intentional stance” treats a chess-playing computer as an intentional system—one that “knows” the rules, has “beliefs” about positions, and “wants” to win.

If something successfully performs a task like chess, objections about different implementation methods are irrelevant to whether the performance constitutes genuine execution of that task.

— Dennett’s functionalist position, paraphrased

Dennett’s view is pragmatic and deflationary: computers don’t need to “have minds” in the sense of harboring discrete inner mental states. They need only be such that it is apt for us to understand them by taking the intentional stance. The chess computer is a “real pattern”—its intentional description compresses information about its behavior more efficiently than any mechanical description could.

Hofstadter: The Terrified Visionary

Douglas Hofstadter’s journey may be the most emotionally honest of any major thinker. Near the end of Gödel, Escher, Bach, he listed “10 Questions and Speculations” about AI, including “Will there be chess programs that can beat anyone?” His answer: “No.”

At a 2014 Google meeting, he admitted he had been “dead wrong.” But chess was only the beginning of his disillusionment. After hearing computer-generated music in the style of Chopin, Hofstadter reported feeling “deeply troubled” about what he called “one of the most important parts of GEB.” By 2025, he described himself as “very frightened” by AI, comparing humanity to a driver accelerating into increasingly dense fog—it’s “fun” until “the fatal crash takes place.”

Hofstadter’s Shifting Timeline for the Singularity

In 1993, Hofstadter speculated about whether our “evolutionary successors” would count as “us”—placing the question in the year 2493. He later revised this to 2093. By 2025, he was asking whether the answer might be “2033?” Each revision compresses the timeline by an order of magnitude, reflecting not just technological acceleration but the psychological erosion of a thinker who built his life’s work on the premise that consciousness was too strange, too loopy, too human to be replicated by machines—and who now watches that premise crumble.

Section V

Does the Machine Understand?

John Searle’s Chinese Room thought experiment, first published in 1980, remains the sharpest challenge to claims of machine intelligence. Imagine a person locked in a room with a rulebook for manipulating Chinese characters. People outside slip Chinese questions under the door; the person inside follows the rules, produces Chinese answers, and slides them back. To the outside world, the room “speaks Chinese.” But the person inside understands nothing.

Apply this to chess. Stockfish evaluates positions using an evaluation function and alpha-beta search. It produces moves that would be described as “brilliant,” “creative,” or “deeply strategic” if a human made them. But Searle’s argument insists that the computer “merely uses syntactic rules to manipulate symbol strings, but has no understanding of meaning or semantics.” Stockfish does not understand that a bishop controls a diagonal. It has no concept of attack, defense, beauty, or sacrifice. It follows rules and produces outputs.
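The alpha-beta idea the text names fits in a few lines: it is plain minimax plus the observation that once a move is refuted, its remaining replies need not be examined. The sketch below runs over a hand-built game tree rather than chess positions, and shows only the bare mechanism—Stockfish layers move ordering, transposition tables, and many further pruning heuristics on top of it.

```python
# Alpha-beta search over an explicit game tree: a leaf is an int score,
# an internal node is a list of child nodes. `seen` records which
# leaves were actually evaluated, making the pruning visible.

INF = float("inf")

def alphabeta(node, alpha=-INF, beta=INF, maximizing=True, seen=None):
    if isinstance(node, int):          # leaf: return its static score
        if seen is not None:
            seen.append(node)
        return node
    if maximizing:
        value = -INF
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, seen))
            alpha = max(alpha, value)
            if alpha >= beta:          # opponent has a refutation: prune
                break
        return value
    value = INF
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, seen))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value
```

On the textbook tree `[[3, 12, 8], [2, 4, 6], [14, 5, 2]]` the search returns the minimax value 3 after evaluating only seven of the nine leaves. That saving compounds exponentially with depth, which is what lets real engines look dozens of plies ahead.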

For Searle, playing chess “intelligently,” making a “clever move,” or “understanding” a position are claims of Strong AI—that the computer has genuine mental states. The Chinese Room is meant to show that computation alone can never produce them: the chess engine is a sophisticated abacus, not a mind.

The classic rejoinder, the “systems reply,” holds that perhaps the individual components don’t understand, but the system as a whole does. When you ask whether a human “understands” chess, you’re asking about the whole person—neurons, training, experience, culture—not any individual neuron. Why shouldn’t the same apply to a sufficiently complex system of silicon and code?

Dennett offers a resolution through the intentional stance: the question “does it really understand?” may be the wrong question. If treating a chess engine as an intentional system that “believes” and “wants” produces reliable predictions of its behavior, then this description captures something real about the system. Not consciousness—but a “real pattern” that exists in the world.

The human mind isn’t a computer; it cannot progress in an orderly fashion down a list of candidate moves. Even at the highest level, our “wanderings” in analysis sometimes produce inspiration and paradoxical moves beyond mechanical evaluation.

— Garry Kasparov, Deep Thinking

Perhaps the most honest conclusion is that we are confronting a new category of entity—something that is neither conscious in the way we are, nor merely mechanical in the way a clock is. The chess engine occupies an uncomfortable middle ground: too competent to dismiss, too alien to embrace.

Section VI

The Degradation of Expertise

When Magnus Carlsen, the highest-rated chess player in history, admits he cannot beat the chess engine on his smartphone, something shifts in the meaning of mastery itself. Today, any free Android app running Stockfish will crush any grandmaster alive. The greatest human achievement in the game’s 1,500-year history is objectively inferior to software running on a device that costs less than dinner.

This creates what researchers have called the “AI deskilling paradox.” Studies show that “even well-trained experts may gradually lose their task-based cognitive skills when AI routinely aids performance at a high level.” The more the tool does, the less capable the human becomes without it. A 2025 study in The Lancet Gastroenterology & Hepatology found that endoscopists who routinely used AI assistance performed measurably worse when the technology was removed—their adenoma detection rates dropped from 28.4% to 22.4%.

Knowledge work has been the engine of self-worth and social mobility—where intelligence found validation and contribution met compensation. To lose that to a machine is not just to lose a job but a way of being in the world.

— Daniel Miessler, on AI and knowledge work

Chess illustrates this degradation precisely. Carlsen has described how AI-assisted opening preparation has fundamentally changed elite competition, creating “a kind of trench warfare” where the first fifteen moves may be prepared at home with computer assistance. He declined to defend his World Championship title partly due to “a lack of motivation to play classical chess, because of the dominance of opening preparation.” Sometimes he deliberately plays a second-best move simply to force opponents out of their computer-aided preparation—a grandmaster choosing suboptimal play to restore the human element.

The Paradox of Help and Harm

Research from the ACM describes the “AI Deskilling Paradox”: AI makes task completion easier and faster, but creates a paradox of help and harm. “Because AI tools are likely to enhance performance and make the task feel easier, learners may be less able to judge the true status of their skills and experts may be unaware of their deteriorating skills.” People with little knowledge or training can perform the same work with AI, driving down wages and eroding the economic value of expertise. The machine that makes everyone capable makes no one irreplaceable.

But chess also demonstrates that expertise can find new forms of value. Grandmasters have repositioned from “being the best calculator” to “being the most interesting decision-maker under constraints”—becoming curators, explainers, performers, coaches, and content creators. The expertise didn’t vanish; it migrated.

Section VII

The Centaur's Bargain

In 2005, a “freestyle” chess tournament on Playchess.com allowed any combination of humans and computers to compete. Grandmasters entered with powerful engines. Dedicated computer systems entered alone. The winners were neither. Two amateur American chess players, using three relatively modest computers, defeated both grandmaster-computer teams and standalone supercomputers.

Kasparov drew a conclusion that has echoed through every subsequent discussion of human-AI collaboration:

Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

— Garry Kasparov, “The Chess Master and the Computer,” NYRB (2010)

Tyler Cowen seized on this insight as the central metaphor of his book Average Is Over (2013). A “centaur” chess player marries human intuition, creativity, and empathy with a computer’s brute-force calculation. Cowen argues that the skills that matter in this new world are “the ability to trust the machine, to absorb large amounts of information, and to quickly evaluate the various judgements from different chess programs.” This becomes his prediction for the entire economy: only the 10 to 15 percent of workers whose skills complement intelligent machines will find a happy niche in a polarized labor market.

The Centaur Paradox

But the centaur model carries a hidden inversion. As The New Atlantis observed, in advanced chess “the humans are trying to follow the thought process of the computers.” What begins as human-machine collaboration can become human subordination—the human providing not guidance but permission, not creativity but oversight. The centaur may be half-human, but which half holds the reins?

AI-based machines are fast, more accurate, and consistently rational, but they aren’t intuitive, emotional, or culturally sensitive. Humans possess authentic intelligence—the ability to imagine, anticipate, feel, and judge changing situations.

— Garry Kasparov, Harvard Business Review (2021)

Kasparov now advocates for what he calls “Augmented Intelligence”—combining artificial intelligence’s computational power with authentic intelligence’s creativity and judgment. The chess lesson, he argues, is not that machines will replace us, but that the process of collaboration matters more than the raw power of either partner.

Section VIII

The Popularity Paradox

When Deep Blue defeated Kasparov, experts predicted the death of chess. If a machine could play the game better than any human, why would anyone watch humans play? Why would anyone bother to learn? The game, they said, was solved. It was over.

They could not have been more wrong.

Today, approximately 605 million people play chess regularly. Chess.com registers over 30 million games per day—up from roughly one million in 2015. During the pandemic boom, the platform’s membership surged past 100 million. Magnus Carlsen streams to 400,000 viewers. The game generates 550 million hours of Twitch viewing annually. Netflix’s The Queen’s Gambit reached 62 million households in its first month in 2020.

Chess did not die of perfection. It went viral.

The Numbers

Chess is more popular now than at any point in its 1,500-year history—and this explosion occurred after machines rendered human play objectively inferior. The phone in every player’s pocket is stronger than the Deep Blue that defeated Kasparov. And yet they play.

The paradox demands explanation. Several forces converge:

Democratization Through Technology

The same technology that surpassed human play also made the game radically accessible. Free chess engines provide instant analysis. Online platforms allow instant games against opponents worldwide. AI-powered tutorials teach pattern recognition that once required years of study with a human coach. As one study put it, technology helped to “democratize chess without compromising its inherent intellectual attractiveness,” while preserving “the social qualities associated with the practice of the ancient game.”

The Spectacle of Human Struggle

Modern chess audiences care less about optimal play than about the dramatization of human effort. The real-time awareness of the difference between the “correct” move (as judged by engines) and the actual move chosen by a human player creates a new kind of drama—the spectacle of human limitation in an age of machine perfection. It’s not chess expertise that draws viewers; it’s human personality, narrative, and the inherent tension of watching someone struggle against problems that have, in some cosmic sense, already been solved.

The Streaming Revolution

Quirky grandmasters on Twitch and YouTube have made chess entertainment in a way that traditional tournament chess never was. The game’s community dimension—friendship, rivalry, banter, shared struggle—matters more to most players than the theoretical ceiling of human performance. People play chess for the same reasons they play pickup basketball despite the existence of the NBA: the joy is in the playing, not in being the best.

The lesson here is counterintuitive and potentially consoling for every knowledge worker facing AI displacement: being surpassed does not mean being abandoned. Knowing that machines do your job “better than you could ever hope for” is not the end of the world. The market for human chess didn’t contract when machines got better; it exploded. What contracted was the illusion that chess’s value depended on humans being the best at it.

• • •
Section IX

Beauty After Supremacy

José Raúl Capablanca, the third World Chess Champion, articulated a vision for the game that now reads like a prophecy and a warning: “Chess can never reach its height by following in the path of science. Let us, therefore, make a new effort and with the help of our imagination turn the struggle of technique into a battle of ideas.”

This “battle of ideas” is precisely what AI threatens. As The New Atlantis argued, Capablanca’s vision “fundamentally depends on the possibility of human error. When we play chess, we hatch grand plans, take risks, fall into traps, succumb to pressure, psych out opponents, and make bold sacrifices—all without knowing whether any of it will pay off.”

Computers have flattened chess, increasing pure understanding of the game at the expense of creativity, mystery, and dynamism.

— “Can Chess Survive Artificial Intelligence?,” The New Atlantis

The aesthetic argument is perhaps the most human response to machine supremacy. It concedes the contest of power but refuses the contest of meaning. Humans play chess not to compute the objectively best move, but to experience the beauty of a well-conceived plan, the agony of a blunder, the electric thrill of a sacrifice that may or may not work. None of these experiences require being the best. They require only being present, being fallible, being human.

Judit Polgár, the strongest female chess player in history, captured this tension precisely: “With the computer, many times it points out that maybe I have creative ideas, but not necessarily good ones.” Mechanical correctness conflicts with human imagination. The question is whether we can value the imagination even when the machine proves it wrong.

Error as Essence

Here is the deepest insight from the chess-AI encounter: human error is not a flaw to be eliminated but a constitutive element of what makes human endeavor meaningful. Strip away the possibility of failure, and you strip away the drama, the courage, the beauty. A chess game between two perfect engines is theoretically interesting but aesthetically dead—a proof, not a story. Human chess endures because it is a story, and stories require characters who can lose.

Section X

The Alien at the Board

In December 2017, DeepMind’s AlphaZero taught itself chess from scratch—given nothing but the rules—and within four hours achieved superhuman performance. It then defeated Stockfish, the world’s strongest traditional chess engine, in a 100-game match without losing a single game.
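Concretely, “taught itself from scratch” means self-play reinforcement learning: a single neural network fθ(s) = (p, v) maps a position to a vector of move probabilities and an expected outcome, search is guided by Monte Carlo tree search, and the network trains on the results of its own games. The loss function, as given in the AlphaZero paper:

```latex
l = (z - v)^2 \;-\; \boldsymbol{\pi}^{\top} \log \mathbf{p} \;+\; c \lVert \theta \rVert^2
```

where z is the eventual game result from the current player’s perspective, π is the move distribution produced by the tree search, and c weights the regularization term. No opening books, no handcrafted evaluation—only the rules and this loop, iterated over millions of self-play games.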

What stunned the chess world was not the result but the style.

It doesn’t play like a human, and it doesn’t play like a program. It plays in a third, almost alien, way.

— Demis Hassabis, CEO of DeepMind

I always wondered how it would be if a superior species landed on Earth and showed us how they play chess. I feel now I know.

— Peter Heine Nielsen, Grandmaster and coach of two World Champions

AlphaZero offered counterintuitive sacrifices—giving up queens and bishops to exploit subtle positional advantages that no human or traditional engine would consider. It played with what observers could only describe as creativity, intuition, and beauty. Hassabis called it “chess from another dimension.”

This created a philosophical vertigo that Deep Blue never had. Deep Blue won through brute force—it was easy to dismiss as an expensive alarm clock. AlphaZero won through something that looked like understanding. Its neural network had, through self-play, developed what appeared to be chess wisdom—an emergent form of intelligence that was neither human nor mechanical but genuinely novel.

If Deep Blue posed the question “can a machine beat a human at chess?”, AlphaZero posed the far more unsettling question: “can a machine play chess more beautifully than a human?”

AlphaZero’s Influence on Human Chess

Rather than replacing human creativity, AlphaZero’s alien style has actually influenced human grandmasters. Players now study AlphaZero’s games to discover positional ideas that centuries of human chess theory had overlooked. In this sense, the machine serves as the telescope Kasparov hoped for—not a replacement for the human eye, but an instrument that reveals what the eye alone could never see. As Kasparov noted with characteristic precision: “If you program a machine, you know what it’s capable of. If the machine is programming itself, who knows what it might do?”

Section XI

What Are Humans For?

If humans cannot beat machines at the game that defined human intelligence for half a millennium, then what are humans for? This is the existential question that chess poses—not as a game, but as a parable for the age of AI.

The existentialist tradition offers one answer. Sartre argued that humans are “condemned to be free”—that meaning is not found but created, not given but chosen. AI does not replace the human condition; it magnifies it. As one philosopher noted: “The machine may think, but only the human can question the meaning of thought.”

The human ability to ask new questions, to generate hypotheses, and to identify and find novelty is unique and not programmable. No statistical procedure allows one to somehow see a mundane, taken-for-granted observation in a radically different and new way.

— Dr. Teppo Felin, Utah State University

Kasparov himself offers a more practical optimism. In Deep Thinking, he writes that we must be optimistic “because it’s self-fulfilling: if we are creative and ambitious, intelligent machines will liberate us and be as profound a boon as electricity. But if we are fearful, we could be overwhelmed by automation and inequality.”

AI is a telescope for the mind. The telescope did not replace astronomers—it made their work infinitely richer. The printing press did not replace authors—it amplified authorship. AI tools function as cognitive amplifiers, and amplifiers magnify the quality of their input. Expertise becomes more valuable, not less, because AI makes experts capable of more and better work.

Every previous technology augmented human physical capability while leaving cognitive sovereignty intact. Tractors replaced muscles; AI replaces judgment. When the tool operates in the same domain as the mind itself, the augmentation-replacement distinction collapses. There is a critical difference between a telescope (which extends vision) and an astronomer-bot (which renders the astronomer optional).

Chess offers an empirical answer to the theoretical debate. The game did not die when machines surpassed humans. Grandmasters did not become irrelevant. Instead, they found new roles: teacher, entertainer, narrator, curator of beauty. The expertise migrated from “best player” to “most interesting player.” Humans repositioned from “being the best calculator” to “being the most interesting decision-maker under constraints.”

Perhaps this is the answer. Humans are not for computing. Humans are for meaning. And meaning—the act of caring, of choosing, of struggling toward something that matters—is not a computational problem. It is the one domain where the question of machine supremacy does not even apply.

Section XII

The Wisdom of Process

Both Buddhist and Stoic traditions counsel a radical reorientation: from fixation on outcomes to immersion in process. Rather than measuring worth by results, they teach us “to focus on effort, refining our intentions and actions while remaining detached from external outcomes.” This ancient wisdom may be the most relevant philosophical framework for an age of AI supremacy.

Consider: a chess player who measures herself against Stockfish will always lose. But a chess player who measures herself against yesterday’s version of herself—who finds meaning in the quality of her attention, the depth of her thought, the beauty of her combinations—is playing a game the machine cannot even enter.

Focusing on process over results, especially with the smallest of tasks, can amount to greater outcomes when we are not misguided by obsession over those outcomes.

— On Stoic-Buddhist approaches to process

The Stoic concept of the “dichotomy of control” is directly applicable. Marcus Aurelius would not have been troubled by Stockfish. The machine’s superiority is among the things not in our control; the quality of our engagement with the game is entirely in our control. A chess player can play a beautiful game and lose—to a human or a machine. The beauty is not diminished by the loss. The beauty was never about winning.

This is the deepest lesson chess offers to an anxious world: you do not have to be the best to be meaningful. The value of human chess does not depend on human chess being objectively superior. It depends on human chess being genuinely human—filled with error, insight, emotion, creativity, and the courage to sit down across from a problem you know you cannot perfectly solve, and play anyway.

The Final Position

Chess survived AI supremacy. Not because humans found a way to beat the machines, but because they found something the machines could not take from them: the experience of playing. The game endures not despite human limitation but because of it. Every blunder is proof that a person is sitting at the board. Every sacrifice is an act of creative courage. Every resignation is an acknowledgment that the struggle mattered—not the outcome, but the struggle itself. In this, chess becomes not a cautionary tale about human obsolescence, but a parable about human resilience. We play on.

• • •

“To become good at anything you have to know how to apply basic principles. To become great at it, you have to know when to violate those principles.”

— Garry Kasparov, Deep Thinking
