(Updated January 2006)
Wythoff's game is an impartial game played with two piles of tokens. On each turn, a player removes either an arbitrary number of tokens from one pile (between one token and the entire pile), or the same number of tokens from both piles. The game ends when both piles become empty. The last player to move is the winner.
Wythoff's game can be represented graphically with a quarter-infinite chessboard, extending to infinity upwards and to the right. We number the rows and columns sequentially 0, 1, 2, .... A chess queen is placed in some cell of the board. On each turn, a player moves the queen to some other cell, except that the queen can only move left, down, or diagonally down-left. The player who takes the queen to the corner wins.
This article is based on my M.Sc. thesis. Look in there for the full details.
The P- and N-positions of Wythoff's game
Willem A. Wythoff showed in 1907 that the P-positions of this game have a very elegant formula. Let φ = (1+√5)/2 be the golden ratio, and let [x] denote the floor function. Then the P-positions are exactly those of the form
([φn], [φ²n]) or ([φ²n], [φn]),
for n = 0, 1, 2, ... . All other positions are N-positions.
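Wythoff's formula can be turned directly into a P-position test. The sketch below is ours, not from the thesis; it uses the identity [φ²n] − [φn] = n (which follows from φ² = φ + 1) and floating-point arithmetic, which is accurate enough for small pile sizes.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def is_p_position(x, y):
    """Test whether (x, y) is a P-position of Wythoff's game.
    A P-position has the form ([phi n], [phi^2 n]) in some order,
    and then the difference of the coordinates equals n."""
    a, b = sorted((x, y))
    n = b - a                      # for a P-position, [phi^2 n] - [phi n] = n
    return a == math.floor(PHI * n)
```

For instance, (1, 2) and (3, 5) pass the test, while (1, 1) and (2, 3) do not.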
The colored entries illustrate how this matrix is calculated. Each entry G(x, y) equals the smallest nonnegative integer that does not appear among the entries directly reachable from (x, y) according to the game's rules. So the red entry 7 is the smallest nonnegative integer that does not appear among the green entries 0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12.
Note that the zeros of the Grundy function are the P-positions of the game, which we have seen already.
Throughout this article we use the convention that cell (x, y) denotes the cell at row x and column y. (This is the opposite of the conventional order of coordinates. We apologize...)
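The mex rule just described translates directly into code. Here is a minimal memoized sketch (our own illustration; the recursion limits it to small coordinates):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(x, y):
    """Grundy value G(x, y) of Wythoff's game: the smallest nonnegative
    integer that does not appear among the positions reachable from (x, y)."""
    reachable = set()
    for k in range(1, x + 1):          # remove k tokens from the first pile
        reachable.add(grundy(x - k, y))
    for k in range(1, y + 1):          # remove k tokens from the second pile
        reachable.add(grundy(x, y - k))
    for k in range(1, min(x, y) + 1):  # remove k tokens from both piles
        reachable.add(grundy(x - k, y - k))
    g = 0
    while g in reachable:              # mex of the reachable values
        g += 1
    return g
```

For example, grundy(1, 2) = 0, matching the P-position (1, 2), while grundy(1, 1) = 2.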
Some facts about G
At first glance the function G appears quite chaotic. In fact, nobody knows of a simple, closed formula that gives G(x, y) in terms of x and y. Nevertheless, we do know several nice properties of G. We already saw one of them: Wythoff's formula for the zeros of G.
Here are some further properties of G:
Every row (and column) contains each nonnegative integer exactly once.
The same is true of the diagonals parallel to the main diagonal: Every such diagonal contains every nonnegative integer exactly once.
Every row (and column) is additively periodic. Meaning, for every row x there exists a pre-period n and period p such that G(x, y + p) = G(x, y) + p for all y ≥ n. For example, here is the additive periodicity of row 3 illustrated:
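This property is easy to probe numerically. The sketch below (our own illustration) computes a small table of Grundy values by brute force and checks that row 1 satisfies G(1, y + 3) = G(1, y) + 3 throughout the computed range, i.e. row 1 appears to have period 3 and no pre-period. This is an empirical check, not a proof.

```python
def grundy_table(n):
    """n-by-n table of Grundy values of Wythoff's game, filled in
    row by row directly from the mex definition (brute force)."""
    G = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            seen = ({G[x - k][y] for k in range(1, x + 1)} |              # same column
                    {G[x][y - k] for k in range(1, y + 1)} |              # same row
                    {G[x - k][y - k] for k in range(1, min(x, y) + 1)})   # same diagonal
            g = 0
            while g in seen:
                g += 1
            G[x][y] = g
    return G

G = grundy_table(30)
# Row 1 appears to be additively periodic with period 3 and no pre-period:
assert all(G[1][y + 3] == G[1][y] + 3 for y in range(27))
```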
The sequences of g-values
We will now talk about another property of G.
Let g be a nonnegative integer. Let us take all the g-values in the matrix G, and discard all those that appear above the main diagonal, retaining only those that are located on or right of the main diagonal.
Arrange these g-values according to increasing row. For example, here are the first 4-values:
We denote the n-th g-value in the sequence by p^g_n = (a^g_n, b^g_n). We start our sequences at n = 0. Thus, from the above picture we see:
p^4_0 = (a^4_0, b^4_0) = (0, 4)
p^4_1 = (a^4_1, b^4_1) = (1, 3)
p^4_2 = (a^4_2, b^4_2) = (2, 5)
p^4_3 = (a^4_3, b^4_3) = (6, 7)
p^4_4 = (a^4_4, b^4_4) = (8, 8)
We already saw that the 0-values are given by p^0_n = (a^0_n, b^0_n) = ([φn], [φ²n]).
We call the entries above the main diagonal (gray in the above picture) reflected g-values.
Here are the first 20-values (filled squares) shown alongside the first 0-values (framed squares):
A pattern is immediately evident: The 20-values seem to lie within a strip of constant width around the 0-values. In fact, more than this is true: The n-th 20-value in the sequence lies within a constant distance of the n-th 0-value in the sequence! This holds in general:
Theorem 1: For every g there exists a constant k_g such that for all n, the n-th g-value is within Manhattan distance k_g of the n-th 0-value.
We now sketch the proof of this theorem.
The greedy algorithm for finding the g-values
Given an integer g, we can find all the 0-, 1-, 2-, ..., g-values of G without calculating any larger Grundy values. The basic idea is to use a greedy algorithm, in which we place each Grundy value in the first legal place available, while going through the values 0 through g in increasing order.
The algorithm is as follows. We work row by row, finding for each row all its h-values, 0 ≤ h ≤ g, which by definition lie on or right of the main diagonal. Whenever we find an h-value, we reflect it with respect to the main diagonal, producing a reflected h-value in a higher row, left of the main diagonal.
Suppose we have already found all the h-values in rows 0 through r – 1, and we now have to calculate row r. We do the following successively for h = 0, 1, 2, ..., g:
If row r already contains an h-value left of the main diagonal, then it is clearly illegal to place an h-value right of the main diagonal, so we skip this value of h for this row:
Otherwise, we place an h-value on or right of the main diagonal, in the leftmost legal cell we find. A given cell C could be illegal for one of three reasons:
There is already an h-value down on C's diagonal.
There is already an h-value down on C's column.
C itself is already occupied by a k-value, k < h.
In the above picture, the first cell starting from the main diagonal is illegal for the first reason. The second cell is illegal for the second reason, and the third cell for the third reason. The fourth cell, however, is legal, so we place in it an h-value.
Once we have placed an h-value, we reflect it with respect to the main diagonal into a higher row, as shown.
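The greedy procedure above can be written out directly. The sketch below follows the skip rule and the three legality checks verbatim; the names and data structures are our own. It returns, for each h ≤ g, the list of h-values (on or right of the main diagonal) in row order.

```python
def greedy_g_values(g, num_rows):
    """Find all h-values (0 <= h <= g) in rows 0..num_rows-1 by the
    greedy row-by-row algorithm described above (an illustrative sketch)."""
    values = {h: [] for h in range(g + 1)}
    refl = {h: set() for h in range(g + 1)}   # rows holding a reflected h-value
    cols = {h: set() for h in range(g + 1)}   # columns already holding an h-value
    diags = {h: set() for h in range(g + 1)}  # diagonals y - x already holding one
    for x in range(num_rows):
        occupied = {}                         # column -> h placed in this row
        for h in range(g + 1):
            if x in refl[h]:
                continue  # the row already has an h-value left of the diagonal
            y = x
            # move right past cells that are illegal for one of the three reasons
            while (y - x in diags[h] or y in cols[h] or
                   occupied.get(y, h) < h):
                y += 1
            values[h].append((x, y))
            occupied[y] = h
            cols[h].add(y)
            diags[h].add(y - x)
            if y != x:
                refl[h].add(y)  # the reflected h-value lands in row y
    return values
```

Running greedy_g_values(4, 9) reproduces the 4-values listed earlier: (0, 4), (1, 3), (2, 5), (6, 7), (8, 8).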
Occupying the diagonals
Let us number the diagonals parallel to the main diagonal 0, 1, 2, ..., starting from the main diagonal, as follows:
Let d^g_n = b^g_n − a^g_n denote the diagonal occupied by the n-th g-value, and let us look at how the g-values, for a given g, occupy the diagonals. For example, here is how the 4-values occupy the diagonals:
As we can see, the 4-values seem to occupy the diagonals roughly in sequential order. This turns out to be true for all g:
Theorem 2: For every g the g-values occupy the diagonals roughly in sequential order. Namely, for every g there exists a constant δ_g such that for all n,
|d^g_n − n| ≤ δ_g.
For comparison, the 0-values occupy the diagonals exactly in sequential order:
Our first step in proving Theorem 1 is to prove Theorem 2.
We say that g-value p_m skips diagonal e if p_m lies right of e, while e has not yet been occupied by any g-value preceding p_m:
By the greedy algorithm described above, p_m must have skipped diagonal e for one of two reasons: Either the intersection of p_m's row with diagonal e was already occupied by an h-value, h < g, or there is already a g-value vertically below this intersection.
To prove Theorem 2, it suffices to show
an upper bound on the number of diagonals a given g-value can skip, and
an upper bound on the number of g-values that can skip a given diagonal, before the diagonal is finally occupied.
We omit the details, but it can be shown that:
Lemma: No g-value can skip more than 2g diagonals.
Lemma: No diagonal can be skipped by more than 16g g-values.
The latter lemma is proved using the following result:
Lemma: An interval of k consecutive rows must contain at least (k/3 – 2g) g-values.
Therefore, we get the following numerical bounds for Theorem 2:
−16g ≤ d^g_n − n ≤ 2g.
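For small n these bounds can be probed numerically by collecting the g-values from a brute-force Grundy table. The following sketch (our own illustration) checks them for g = 1 and g = 2, and also confirms that the 0-values occupy the diagonals exactly in order.

```python
def grundy_table(n):
    """n-by-n table of Grundy values of Wythoff's game (brute force, mex rule)."""
    G = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            seen = ({G[x - k][y] for k in range(1, x + 1)} |
                    {G[x][y - k] for k in range(1, y + 1)} |
                    {G[x - k][y - k] for k in range(1, min(x, y) + 1)})
            g = 0
            while g in seen:
                g += 1
            G[x][y] = g
    return G

G = grundy_table(80)

def diagonals_occupied(G, g, count):
    """Diagonals d_n = b_n - a_n of the first `count` g-values (on or right of
    the main diagonal), in row order; the table is taken large enough that
    none of these values are cut off at the right edge."""
    d = [y - x for x in range(len(G)) for y in range(x, len(G)) if G[x][y] == g]
    return d[:count]

assert diagonals_occupied(G, 0, 25) == list(range(25))  # 0-values: exactly in order
for g in (1, 2):
    d = diagonals_occupied(G, g, 25)
    assert all(-16 * g <= d[n] - n <= 2 * g for n in range(25))
```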
A somewhat "general" theorem
Let us now take a look at the sequences of first and second coordinates of the g-values:
A = (a^g_0, a^g_1, a^g_2, ...),
B = (b^g_0, b^g_1, b^g_2, ...).
Putting together everything we know, we get that A and B satisfy the following properties:
A is a strictly increasing sequence.
B has no repeated elements.
A and B together cover all the nonnegative integers.
There is exactly one nonnegative integer that appears in both sequences. (This corresponds to the g-value occupying the main diagonal.)
The expression |b^g_n − a^g_n − n| is bounded for all n.
This last property follows from Theorem 2.
It turns out that these properties are enough to imply Theorem 1. We state this in the form of a somewhat "general" theorem, for which we skip the proof:
Theorem 3: Let A = (a_0 < a_1 < a_2 < ...) be an increasing sequence of nonnegative integers, and let B = (b_0, b_1, b_2, ...) be a sequence of distinct nonnegative integers. Suppose the following conditions hold:
The intersection of A and B is finite.
The union of A and B is the set of all nonnegative integers.
The expression |b_n − a_n − n| is bounded for all n.
Then the expressions |a_n − φn| and |b_n − φ²n| are bounded for all n, where φ is the golden ratio.
Theorem 3 clearly implies Theorem 1.
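In the one case where the sequences are known exactly, namely g = 0 with a_n = [φn] and b_n = [φ²n], the hypotheses and the conclusion of Theorem 3 can be checked directly (a numerical sanity check over a finite range, our own illustration):

```python
import math

PHI = (1 + math.sqrt(5)) / 2
N = 1000
A = [math.floor(PHI * n) for n in range(N)]        # a_n = [phi n]
B = [math.floor(PHI * PHI * n) for n in range(N)]  # b_n = [phi^2 n]

assert all(A[n] < A[n + 1] for n in range(N - 1))  # A strictly increasing
assert len(set(B)) == N                            # B has distinct elements
assert set(A) & set(B) == {0}                      # the intersection is finite
union = set(A) | set(B)
assert all(m in union for m in range(A[-1]))       # the union covers 0, 1, 2, ...
assert all(B[n] - A[n] - n == 0 for n in range(N)) # |b_n - a_n - n| is bounded
assert all(abs(A[n] - PHI * n) < 1 for n in range(N))        # conclusion for A
assert all(abs(B[n] - PHI * PHI * n) < 1 for n in range(N))  # conclusion for B
```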
Calculating far-out g-values
We think that the trillionth g-values for g = 0, 1, ..., 20 are given as follows:
As we said, these should be considered conjectures; we have no proof of them, except in the case g = 0, which can be verified easily.
Now, it is clear that row 10¹² is way beyond the reach of the greedy row-by-row algorithm. So how in the world did we come up with these numbers? And why do we think that they are correct?
The answer lies in a recursive technique, which depends on an unproven convergence property.
A recursive algorithm
Suppose we are given an interval [r1, r2] of rows, and we wish to calculate all the 0-, 1-, ..., g-values that lie within this interval. We do this using the following steps:
We start our calculation a few rows below r1. Namely, for some appropriate constant R_g, we let r0 = r1 − R_g, and we will attempt to calculate the h-values (0 ≤ h ≤ g) that lie within the interval [r0, r2].
We first calculate all the reflected h-values (lying left of the main diagonal) that lie within the interval of rows [r0, r2]. We do this by taking an appropriate range of columns [r′1, r′2] (where roughly r′1 ≈ φ⁻¹r0 and r′2 ≈ φ⁻¹r2) that is guaranteed to contain all these reflected h-values, and making a recursive call on [r′1, r′2]:
(Of course, calculating these reflected h-values is equivalent to calculating their unreflected originals.)
We assume that the recursive call also gives us, for each h, how many reflected h-values lie left of column r′1.
Next we wish to know how many h-values lie below row r0. We calculate this by working out how many reflected h-values lie below row r0, and then complementing to r0 + 1.
How many reflected h-values lie below row r0? All those that lie left of column r′1 must lie below row r0 (by the choice of r′1). And possibly some of those that lie between columns r′1 and r′2 (which we have already calculated) also lie below row r0. (All those lying right of column r′2 must lie above row r2, so they also lie above row r0.)
Denote the number of h-values lying below row r0 by k_h.
We assume without justification that the k_h h-values lying below row r0 occupy exactly the first k_h diagonals, all other diagonals being empty:
We also assume that none of the cells on row r0 lying on empty diagonals have an h-value vertically below.
Starting from this state of row r0, and using the reflected h-values on rows [r0, r2] that we already calculated, we run the greedy algorithm sequentially from row r0 to row r2, knowing exactly which h-values to insert in each row.
We hope that, despite having initialized row r0 to an incorrect state, at some point before row r1 the greedy algorithm will converge to the correct state. (By the state of a row we mean, roughly speaking, which cells are occupied diagonally and which are occupied vertically in that row, for each 0 ≤ h ≤ g.)
Once the greedy algorithm has converged to the correct state at a certain row, the state of all subsequent rows will necessarily be correct (since the state of row r depends only on the state of row r − 1 and on which h-values are inserted in row r − 1).
We output the h-values we obtained for interval [r1, r2].
If convergence did indeed occur by row r1 (and if the recursive calculation of interval [r′1, r′2] was also correct), then our calculation for interval [r1, r2] will necessarily be correct.
This is the recursive algorithm. To find the n-th g-value for a given n, all we have to do is run this algorithm on an appropriate interval around φn.
As we can see, the only weak point of our recursive algorithm is the assumption that, for a suitably chosen R_g, the convergence that we described will always occur within at most R_g rows, no matter what our starting row r0 is.
We can test this assumption experimentally for different values of g. We do this as follows:
Choose some integer r_max, and for each r, 0 ≤ r ≤ r_max, precalculate, using the greedy algorithm, the true state of row r and the set of h-values (0 ≤ h ≤ g) to insert in row r.
Go back and test for each r0, 0 ≤ r0 ≤ r_max, how many rows convergence takes when starting from row r0. Meaning, initialize the state of row r0 as having, for each 0 ≤ h ≤ g, all the occupied diagonals before all the empty diagonals (with the appropriate number of occupied diagonals), and no cells occupied vertically.
Starting from this state of row r0, run the greedy algorithm row by row, inserting in each row the appropriate h-values, until at some higher row r1 the state we are holding equals the correct, precalculated state of r1.
The difference r1 – r0 is the number of rows it takes for convergence to occur when starting from row r0.
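The experiment can be sketched in code. The version below is our own, not the thesis code, and it uses a slightly different (but closely related) criterion: instead of comparing states, it measures the last row at which the restarted run's placements disagree with the true run's placements. All names are our own.

```python
def true_run(g, num_rows):
    """Row-by-row greedy run, recording for each row which h-values are
    skipped, where each h-value is placed, and how many h-values lie below."""
    refl = [set() for _ in range(g + 1)]    # rows holding a reflected h-value
    diags = [set() for _ in range(g + 1)]   # occupied diagonals, per h
    cols = [set() for _ in range(g + 1)]    # occupied columns, per h
    skips, places, counts = [], [], []
    for r in range(num_rows):
        counts.append([len(d) for d in diags])  # h-values strictly below row r
        sk, pl, occ = set(), {}, {}
        for h in range(g + 1):
            if r in refl[h]:
                sk.add(h)
                continue
            y = r
            while (y - r) in diags[h] or y in cols[h] or occ.get(y, h) < h:
                y += 1
            diags[h].add(y - r); cols[h].add(y); occ[y] = h; pl[h] = y
            if y != r:
                refl[h].add(y)
        skips.append(sk); places.append(pl)
    return skips, places, counts

def rows_to_converge(g, r0, skips, places, counts):
    """Restart the greedy at row r0 from the idealized state (the first k_h
    diagonals occupied, nothing occupied vertically) and return how many rows
    pass before its placements agree with the true run from then on."""
    diags = [set(range(k)) for k in counts[r0]]
    cols = [set() for _ in range(g + 1)]
    last_bad = r0 - 1
    for r in range(r0, len(skips)):
        occ = {}
        for h in range(g + 1):
            if h in skips[r]:
                continue
            y = r
            while (y - r) in diags[h] or y in cols[h] or occ.get(y, h) < h:
                y += 1
            diags[h].add(y - r); cols[h].add(y); occ[y] = h
            if y != places[r][h]:
                last_bad = r
    return last_bad - r0 + 1
```

For g = 0 this always returns 0 (immediate convergence), and for g = 1 the values stay small, consistent with the findings below.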
We ran the above test for g = 0, 1, 2, and 3 using r_max = 10⁷. Our findings are as follows:
For g = 0 (not surprisingly) convergence always takes 0 rows, i.e. convergence is immediate.
For g = 1 the maximum time to convergence found was 45 rows. In fact, up to row 10⁷ there are 3019 instances of convergence taking 45 rows.
For g = 2 the maximum found was 72 rows. Up to row 10⁷ there are 91 instances of convergence taking 72 rows.
For g = 3 there is one instance of convergence taking as many as 140 rows.
For higher values of g we only ran the test up to r_max = 10⁶. The maximum numbers of rows to convergence found are as follows:
The Convergence Conjecture is:
Conjecture: For every g there exists an upper bound R_g on the number of rows to convergence, when starting from any row r0.
Of course, no amount of experimental testing of the kind described above will prove this conjecture for any g.