The Trouble with Losing at Chess: An Essay

In the second installment of his essay, Tom Chatfield asks: does being human mean being conditioned to lose? Miss the first installment? Read it here.

To talk about machines fooling humans isn’t quite accurate, of course. If we are deceived, it is because other people have built machines intended to deceive us. If we endorse an illusion, it is because we have fooled ourselves into seeing it as truth. And if, eventually, Turing’s test is passed, the supposed divide between illusion and truth collapses—leaving us with the question of whether we call ourselves magic or mechanism. As they begin to replicate more and more human achievements, will our creations reveal our minds to be reproducible in software? Will they gesture beyond us to new kinds of mind—to a world in which we must abandon old conceptions of self?

For a vision of the second of these possibilities, you need look no further than the contemporary cult of the Singularity. Named after the event horizon surrounding the singularity of a black hole—that threshold beyond which not even light can escape—the term was first used by author Vernor Vinge in the 1980s to describe how self-improving artificial intelligence might accelerate beyond humans, past a historical point of no return.

The Singularity offers a strange inversion of Turing’s game: a point at which time and technology dissolve into miracles. Two entities are at play. One is a shadow, a simulacrum, trying to convince its master to treat it as an equal. In the world of the Singularity, humanity is the shadow—trying to show its superiors that it still deserves some measure of consideration. After the Singularity, all old rules cease to apply.

I don’t believe the Singularity is coming, but I do take seriously its vision of technological apotheosis, not least because it draws upon the same fascination that Kempelen’s illusion harnessed: a vision of the future conditioned by games in which there are winners and losers, skill is measured on a single scale, and computation is synonymous with intellect.

What does it mean to play a computer at a game like chess? These days, it means losing. In 1997, humanity’s greatest chess champion, Garry Kasparov, was beaten before the eyes of the watching world by IBM’s Deep Blue. In 2016, Google’s AlphaGo did the same to Go champion Lee Sedol, besting humanity at a game orders of magnitude more complex than chess. In early 2017, an AI called Libratus vanquished some of the world’s best players at no-limit Texas Hold ’Em, a game of bluff and imperfect information that some had hoped would remain dominated by humans.

How can we hope for anything other than obsolescence?

This progression points to a fundamental divide between people and machines. Much like athletes pushing up against the boundaries of biology, the increments of human improvement have hard limits. We advance towards a certain threshold in slowing steps. Across rapid generations of software and hardware, meanwhile, machines advance faster and faster. Since 1997, the world’s best human chess players have got perhaps a little better, helped by computers. Meanwhile, the speed at which Deep Blue calculated—around 11.4 gigaflops—has fallen more than an order of magnitude behind the 275 gigaflops powering Samsung’s Galaxy S8 smartphone, a device you can fit in your pocket. Modern supercomputers are many thousands of times faster than those built in 1997, and this trend as yet shows no sign of stopping. The Deep Blue of 1997 would stand about as much chance against today’s supercomputers as a two-year-old would against Kasparov.

Singularity theorist Ray Kurzweil coined the phrase “the second half of the chessboard” to help people conceptualize the staggering properties of this increase. The phrase refers to a mathematical parable, in which a scholar is told by a king that he can name any price as his reward for performing a great service. What I wish for, the scholar replies, is that you place one grain of wheat upon the first square of a chessboard, two upon the second, four upon the third, eight upon the fourth, and so on, until the chessboard is covered.

The king protests that this is too small a prize, but the scholar demurs. By the end of the first row of eight squares, he has 255 grains of wheat. By the time the first half of the chessboard is covered, he has 4,294,967,295 grains—around 280 tons. After this, the first square on the second half of the chessboard will contain as much wheat as the entire first half, and so on, until the wheat required becomes hundreds of times more than exists in the whole world. Once you reach a certain threshold, Kurzweil explains, any ongoing exponential increase demolishes old frames of reference: its sheer scale brings wholly new phenomena, and demands new ways of thinking.
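(For readers who want the arithmetic spelled out: the totals follow from summing a doubling series, and the tonnage below assumes a typical grain of wheat weighs roughly 0.065 grams.)

\[
\sum_{k=0}^{7} 2^k = 2^8 - 1 = 255,
\qquad
\sum_{k=0}^{31} 2^k = 2^{32} - 1 = 4{,}294{,}967{,}295
\]

\[
4{,}294{,}967{,}295 \times 0.065\,\mathrm{g} \approx 2.8 \times 10^{8}\,\mathrm{g} \approx 280\ \text{tonnes},
\qquad
\sum_{k=0}^{63} 2^k = 2^{64} - 1 \approx 1.8 \times 10^{19}\ \text{grains}
\]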

By picking games like chess, humans have defined a terrain in which they are not only destined to lose but are also the architects of their own irrelevance.

How can we hope for anything other than obsolescence in the face of this exponential curve, lashing itself towards infinity? Within the bounds of game-worlds like chess and Go, the Singularity has come and gone. Never again in history will the world’s greatest player be an unaided human. Yet the game is not what it seems. By picking games like chess as both emblems of our rivalry and the ultimate arenas for training machine minds, humans have defined a terrain in which they are not only destined eventually to lose, but are also the architects of their own irrelevance—the creators of rule-bounded spaces within which any suitably defined victory can be won by automation. Beyond this realm, however, the question of supremacy is not even the right one to ask.

 

Behind accounts of our near future such as Kurzweil’s lies a way of thinking called technological determinism. Determinism offers an account of the world in which the old is driven out by the new, sometimes violently, via mechanisms that nobody needs to have chosen. In warfare, guns beat spears—and people who choose to keep on fighting with spears will sooner or later find themselves on the wrong side of history. In business, advanced autonomous systems beat old-fashioned labour—and corporations who sentimentally refuse to replace their workforce with robots will sooner or later find themselves overtaken. Determinism’s logic is one of ceaseless winner-takes-all games between old and new—a context within which humanity itself is easily viewed as yesterday’s news.

Is this true? Evolution certainly requires no intentions in order to unfold. Yet inevitability, I would argue, exists only in retrospect—in a pattern that our minds project onto the world. We see a Turk seated at a chessboard, and so we also see intent and elemental opposition—a struggle for supremacy complete with winners and losers, old and new. Yet, while a machine may today beat us at chess, it is not actually playing in any recognizable human sense. Somewhere inside its circuits, human players remain hidden—programmers, designers, past grandmasters, creators of a history that has been uncomprehendingly digested and optimized. Our loss is also our victory. The only rules and measures we can lose by are those we have ourselves created.

What isn’t captured by such metrics? As the philosopher Bernard Suits put it in his 1978 book The Grasshopper, a game can also be thought of as “the voluntary attempt to overcome unnecessary obstacles.” We may play a game such as chess in the hope of winning. But this doesn’t make winning its purpose, any more than the purpose of listening to a symphony is getting to the end as fast as possible. Rather, the possibility of victory (and of losing, and of drawing) exists in order to create meaningful play: the exercise of skill and tactics, a pursuit undertaken for its own satisfaction. Constraint creates the possibility of play, but it does not constrain the experiences play enables.

Strange as it may seem to say it, this is also true of warfare, economics, and the other arenas in which we compete on a daily basis. Victory is the means to ends—power and influence, wealth and glory—that cannot remain meaningful if victory is the only value that matters. Economic rivalry may mean outdoing your rivals, but it also demands common aspirations if there is to be any economy worth succeeding within. The exponential logic of victory after victory does not hold for the human world—not when constraint and common ground are the places where any ultimate purpose resides. It’s our confusion of purpose with conquest, not any inherent property of machines, that’s most likely to destroy us.

Our loss is also our victory. The only rules and measures we can lose by are those we have ourselves created.

In his 1986 book Finite and Infinite Games, the religious scholar James Carse makes the case that “finite” games played for the purpose of winning are secondary to “infinite” games, played for the purpose of continuing further play. In Carse’s account, finite games are preoccupied by the kind of conflicts that determinism puts at the heart of history: power clashes in which faster, harder, bigger and better tools perpetually supplant weaker ones. Infinite games, however, take an interest in the process of play itself—and are alive above all to the possibility of surprises, transforming the trajectory of all that has come before. “To be prepared against surprise is to be trained,” writes Carse. “To be prepared for surprise is to be educated.”

How should we think about the games we play with and through our tools? The philosopher of technology Luciano Floridi uses a very different parable to that of the chessboard to describe our interactions with technology. Imagine a relationship, he writes in his book The Fourth Revolution, in which one partner is accommodating and adaptable, and the other is extraordinarily inflexible. Over time, if the relationship persists and neither partner’s personality changes, they will end up doing more and more things in the way that the less flexible partner insists upon—because their choice is either to do things this way, or not do them at all.

Even the most adaptable machine is orders of magnitude more inflexible than the most rigid human. Once design decisions have been made—once the boundaries of the game together with its incentives have been defined—our creations will be able to maximize their outcomes with ever-greater efficiency. The question is not whether this automatically makes us redundant, but rather whether we have meaningfully debated which incentives we do and do not wish to see relentlessly pursued on our behalf—a debate that can only exist between humans, and that has significance only in our interplay.

Few people learn chess because they wish to be the best in the world; fewer still because they wish to bring the history of chess-playing to a close. Play and learning are themselves the point—the spaces within which value resides. Similarly, when it comes to humanity and history, neither our velocity nor our theoretical destination is the metric that matters most. In constraint, in life and in the playing of games, what counts is the experiences we create—and the possibilities we leave behind. ♦

(Image credit: Courtesy of Eureka Entertainment via Flickr.)

The Trouble with Losing at Chess: An Essay

In the first of a two-part essay, writer and tech philosopher Tom Chatfield looks at an early progenitor of artificial intelligence.

 

In 1770, the inventor Wolfgang von Kempelen displayed a mechanical marvel to the Viennese court. Watched by the Archduchess Maria Theresa and her entourage, he opened the doors of a wooden cabinet four feet long, three feet high and just over two feet deep, illuminating its interior by candlelight to display glistening cogs and gears. Seated at the cabinet was a life-size model of a man in Turkish dress—a turban and fur-trimmed robe. In front of the Turk, on top of the cabinet, was a chessboard.

Kempelen closed his cabinet and asked for a volunteer to play a game of chess against the Turk. It was an astonishing request. Finely crafted automata had been entertaining royalty for centuries, but the idea that one might undertake an intellectual task such as chess was inconceivable—something for the realm of magic rather than engineering. This was precisely the point. Six months earlier, Kempelen had claimed to the Archduchess that he was utterly unimpressed by magic shows, and could build something far more marvelous himself. The Turk was his proof.

Count Ludwig von Cobenzl, the first volunteer, approached the table and received his instructions: the machine would play white and go first; he must ensure he placed his pieces on the centre of each square. The count agreed, Kempelen produced a key and wound up his clockwork champion, and with a grinding of gears the match began. To its audience’s astonishment, the machine did indeed play, twitching its head in apparent thought before reaching out to move piece after piece. Within an hour, the Count had been defeated, as were almost all the Turk’s opponents during its first years of growing renown in Vienna.

Humanity, for so long self-defined as the pinnacle of nature, had begun to feel less than mighty in the face of its own creations.

A decade later, Maria Theresa’s son, the emperor Joseph II, asked Kempelen to bring his creation to a wider public. The Turk visited Paris, London and Germany, inviting fervent speculation wherever it went. Among its losing opponents were Benjamin Franklin, visiting Paris in 1783, and—under its second owner after Kempelen’s death—the emperor Napoleon in 1809. Napoleon tested the machine with illegal moves, only to see the Turk sweep the pieces off the board in apparent protest.

It was, of course, a fraud—a magic trick masquerading as a mechanism. Behind the cogs and gears lay a secret compartment, from within which a lithe grandmaster could follow the game via magnets attached to the underside of the board, moving the Turk’s arm through a system of levers. In his book The Turk, the British author Tom Standage tells the story in captivating detail—noting that even the unmasking of its workings in the 1820s scarcely diminished the age’s fascination with the Turk. The image of man and machine locked in combat across the chessboard was simply too perfect—and too perfectly matched to a growing unease around technology’s usurpation of human terrain.

Humanity, for so long self-defined as the pinnacle of nature, had begun to feel less than mighty in the face of its own creations. The Industrial Revolution brought fire and steam as well as clockwork into the public imagination, together with anxieties that have echoed across society since: of human redundancy in the face of automation, and human seduction by new kinds of power.

Kempelen’s desire to make not simply a machine but also a kind of magic trick was no accident. The Turk set out to inspire belief, and had picked the perfect arena for persuasion: a bounded zone within which complex questions of ability and intellect were reduced to a single dimension. Sitting down opposite a modern reconstruction of the Turk in Los Angeles, Standage found himself surprised by how “remarkably compelling” the illusion remained, speaking to “its spectators’ deep-seated desire to be deceived.” Tools that can master a task are one thing—but it’s when they are also able to engage and enthrall that enchantment begins.

 

Long before digital computers had gained a genuine mastery of chess, one man devised a twentieth-century game with some remarkable similarities to Kempelen’s scenario. “I propose to consider the question, ‘Can machines think?’” wrote Alan Turing in his 1950 paper “Computing Machinery and Intelligence.” The trouble with such a question, he observed, was that answering it was likely to involve splitting hairs over the meaning of the words “machine” and “think.” Thus, he continued, “I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the ‘imitation game.’”

Turing’s imitation game entailed a conversation between a human tester and two hidden parties, each communicating with the tester via typed messages. One hidden party would be human, the other a machine. If a machine could communicate in this way, such that the human tester could not tell which of their interlocutors was machine or human, then the machine would have triumphed. “The game may perhaps be criticized,” Turing noted, “on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic.” Pretending to be human meant embracing limitations as much as showing strengths.

The answer was likely to involve splitting hairs over the meaning of the words “machine” and “think.”

Turing’s thought experiment suggested that, if the impression created were robust enough, the means of its achievement became irrelevant. If everyone could be fooled all the time, whatever was “really” going on inside the box ceased to matter. A game was the perfect test of intelligence precisely because it abandoned definitions in favor of a challenge that could be passed or failed, as well as endlessly restaged. By excluding the world in favor of a staged performance, it made the ineffable conceivable.

Turing’s game is today played for real during the annual Loebner Prize, which since 1991 has promised $25,000 for the first AI that judges cannot distinguish from an actual human—and that convinces its judges that their other, human conversation partner must be a machine. No machine has got close to winning this award, but competing AIs are ranked in order of achievement. To the frustration of many AI specialists, the most successful chatbots tend to use tricks based on stock responses and emotional impact rather than understanding. Much like the grandly dressed Turk two centuries ago, the setup rewards the use of distractions and deceits: bogus bios, pre-programmed typing errors, hesitations, colloquialisms and insults. To win an imitation game, in certain circumstances at least, is not so much about perfect reproduction as targeted mimicry.

This is echoed in the world at large. If and when people are fooled by modern AIs, something more like stage magic than engineering is going on—a fact emphasized by the importance that companies like Apple, Amazon, and Google attach to quirky features which make their creations more appealing. Ask Amazon’s digital personal assistant Alexa whether “she” can pass the Turing test and you’ll get the reply, “I don’t need to pass that. I am not pretending to be human.” Ask Apple’s Siri if “she” believes in God and you’ll be told, “I would ask that you address your spiritual questions to someone more qualified to comment. Ideally, a human.” The responses have been pre-scripted both to amuse and to disarm. They’re meant to fool us into perceiving not intelligence but innocence—products too charming and too useful to provoke any deeper anxiety. The game is not what it claims to be.

Check out part two of Tom Chatfield’s essay here.
