Karthik Tadepalli

Economics PhD student at UC Berkeley

Chess, AI Safety, and Sally's Marble

Published March 22, 2019


I’ve followed the development of chess engines and chess AI for a while now, and right now it appears that interest has plateaued. The general attitude seems to be that AlphaZero conquered Everest, the show is over, and we can all pack up and move on to the next game. This is in no small part caused by DeepMind’s branding of AlphaZero as a generalist game AI that can learn and excel at any game.

But chess is more than it was in the 1990s. Horde Chess, Crazyhouse, Bughouse, three-player chess, four-player chess, Chess960 and many more variants have evolved and grown more popular - partly because better online interfaces enable them, and partly as a response to the rise of computer chess.

Many of these, while trickier and more creative than standard chess, are still essentially standard chess from a computer's perspective. They involve dramatically more moves to consider, but the computational problem - choose the move that optimizes the game state given the available strategies - is the same as in standard chess. These variants are rightly considered uninteresting targets for further development.

But one of these variants is not like all the others, and it might even be critical for the development of AI systems that are going to affect society.


Here are three questions. Two of them are well-studied, and the third is not.

  1. How does an agent learn from its environment? This is well-studied in artificial intelligence and machine learning.
  2. How do agents strategically interact? This is well-studied in game theory.
  3. How do agents learn in environments with other strategic agents? This is a simple combination of the first two questions, but it produces a much deeper question that neither game theory nor AI have good answers to yet.

Modibo Camara’s excellent blog post summarizes the issue nicely. The basic concern is that AI systems which are largely trained in isolation, or in natural environments, may react unpredictably to interaction with other AI systems. A funny example is the interacting pricing algorithms that created Amazon’s $23 million book. But tomorrow this could be AI traders responding to other AI traders in financial markets, or a self-driving car on a road with twenty other self-driving cars.


Bughouse is a tag-team chess variant: when you capture a piece on your board, you pass it to your partner, who can drop it anywhere they wish on their own board. The first team to win on either board wins the game.

The “drop a piece on the board” element of Bughouse makes it strategically interesting, and is partly why Bughouse strategies include devious moves like stalling play on one board while winning on the other so as to starve the opponent of pieces. But the real gold in Bughouse is that you have to play with a partner.

It’s worth emphasizing that again. So far, most prominent game AI development has been monolithic. An AI can control a system? Very cool, have some press coverage. But the most significant element of Bughouse is that a player has to account for the play of a partner they cannot control. This is categorically different from accounting for the play of an adversary, which simply requires calculating the optimal move: if your adversary makes a suboptimal move, you laugh your way to an advantage. But if your partner makes a suboptimal move, you have to adapt.

Playing Bughouse does not inherently require a departure from how chess AIs currently play: it is possible to best-respond to the boards in front of you without really accounting for your partner’s future play. But that is almost certainly suboptimal: predicting your partner’s play effectively is a key part of Bughouse strategy.
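The distinction between pure best-response and partner-aware play can be made concrete with a toy sketch. Everything here is invented for illustration (the "attack"/"defend" moves and the payoff numbers are hypothetical stand-ins for real Bughouse evaluation), but it shows why modeling a fallible partner changes the optimal choice:

```python
# Toy payoffs for joint (my_move, partner_move) outcomes; a real Bughouse
# engine would score actual board states instead.
PAYOFF = {
    ("attack", "attack"): 3,   # coordinated attack pays off
    ("attack", "defend"): -1,  # attacking alone backfires
    ("defend", "attack"): 1,
    ("defend", "defend"): 2,
}

def best_response_move(assumed_partner_move):
    """Pick my move assuming the partner plays a fixed 'optimal' move."""
    return max(("attack", "defend"),
               key=lambda m: PAYOFF[(m, assumed_partner_move)])

def partner_aware_move(partner_model):
    """Pick my move by averaging over a predicted distribution of
    partner moves - the mentalizing step."""
    def expected_value(m):
        return sum(p * PAYOFF[(m, pm)] for pm, p in partner_model.items())
    return max(("attack", "defend"), key=expected_value)

# Against a perfect partner, attacking together is best:
print(best_response_move("attack"))   # attack
# But a model of an error-prone partner changes the answer:
print(partner_aware_move({"attack": 0.3, "defend": 0.7}))   # defend
```

The same payoffs yield different moves depending on whether the partner is assumed optimal or modeled probabilistically, which is exactly the gap between best-responding and mentalizing.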


In a way, this framework is a nod to a classical conception of “intelligence”. Before hegemonic computer scientists declared that intelligence was computational intelligence, cognitive scientists modelled intelligence after the human brain and its functions. One of the most fundamental functions of the human brain is mentalizing: understanding what other people think. Mentalizing other people’s beliefs about the state of the world is very distinct from understanding the state of the world itself. The Sally-Anne test, a psychological test of whether a child can mentalize successfully, demonstrates this:

Sally takes a marble and hides it in her basket. She then leaves the room and goes for a walk. While she is away, Anne takes the marble out of Sally’s basket and puts it in her own box. Sally then returns. Where will Sally look for her marble?

We know that Sally will look in her basket, even though the marble is not in her basket. We know this because in addition to knowing the true state, we know Sally’s beliefs about the true state. Naively discovering the true state is not enough to answer the question correctly.
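The structure of the test is simple enough to sketch in a few lines: track the world state and each agent's belief separately, and update an agent's belief only for events that agent observes. This is a minimal illustration, not a claim about how any real system implements theory of mind:

```python
def sally_anne():
    """First-order belief tracking for the Sally-Anne test."""
    world = {"marble": "basket"}          # true state
    sally_belief = {"marble": "basket"}   # Sally saw herself hide it

    # Sally leaves; Anne moves the marble. The world changes,
    # but Sally's belief does not, because she is absent.
    world["marble"] = "box"

    # Where will Sally look? Answer from her belief, not from the world.
    return sally_belief["marble"], world["marble"]

print(sally_anne())  # ('basket', 'box')
```

An agent that maintains only `world` gets the question wrong; answering correctly requires the second, separately-updated dictionary.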

The widespread application of artificial intelligence research to statistical and optimization problems has been a boon in many respects, but in some ways it has only kicked the can down the road: to reap the benefits of optimization in social systems, an AI system has to be able to mentalize other agents in a social system. A car cannot drive optimally without modelling the beliefs and expectations of all other cars on the road, and it cannot do that if it does not know where Sally will look for her marble.


Virtually no prominent game AI development focuses on mentalizing. Chess, Go, shogi, and Jeopardy are all games that don’t require an AI to mentalize other players.

A notable and underrated example of a game AI that does mentalize other players is the Ginsberg Intelligent Bridgeplayer (GIB), an AI that plays the card game of bridge. Bridge is played with a partner against another team of two, analogous to Bughouse.

If you play bridge, then GIB’s documentation is quite fascinating to read. The short version: GIB relies primarily on popular bridge conventions. Since bridge is a game of incomplete information, many plays are conventional signals of information to one’s partner. Very roughly speaking, GIB’s approach is to adopt a convention and assume that the partner and opponents are following the same convention.
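Convention-based inference of this kind can be sketched abstractly. The "convention" below is invented for illustration (a player signals "strong" only with 15+ points), and the point-count candidates are hypothetical; the idea is just that observing a conventional signal lets you filter the hypotheses about your partner's hidden hand:

```python
def consistent_hands(bid, candidate_hands, convention):
    """Keep only hands that would produce the observed bid
    under the shared convention."""
    return [h for h in candidate_hands if convention(h) == bid]

def toy_convention(points):
    """Hypothetical convention: bid 'strong' with 15+ points."""
    return "strong" if points >= 15 else "weak"

# Hypothetical candidate point counts for the partner's unseen hand:
candidates = [8, 12, 16, 19]
print(consistent_hands("strong", candidates, toy_convention))  # [16, 19]
```

The filtering step is the mentalizing move: the signal is informative only because both players are assumed to follow the same convention.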

GIB is certainly no world champion, though: in simulated tournaments on Bridge Base Online, GIB usually scored solidly above average, and my mother frequently complains about the frustrations of partnering with GIB. Furthermore, Bughouse is far more dynamic than bridge, where convention governs most of play.

But the development of a skilled Bughouse AI could still be grounded in the success of GIB. The kernel of a good research agenda is there, and it would be quite interesting to see what an optimal Bughouse AI could teach self-driving cars and AI traders about not breaking everything.