MTD(*f*) is a new minimax search algorithm, simpler and more
efficient than previous algorithms.
In tests with a number of tournament game-playing programs for chess, checkers,
and Othello, it performed better, on average, than NegaScout/PVS (the AlphaBeta
variant used in practically all good chess, checkers, and Othello
programs).
One of the strongest chess programs of the moment, MIT's parallel
chess program Cilkchess, uses
MTD(*f*) as its search algorithm, replacing NegaScout, which was
used in StarSocrates, the previous version of the program.

You want proof of how simple MTD(*f*) is? Here's the code (my apologies
to the language purists for mixing C and Pascal; block structure is indicated
by indentation only):

```
function MTDF(root : node_type; f : integer; d : integer) : integer;
    g := f;
    upperbound := +INFINITY;
    lowerbound := -INFINITY;
    repeat
        if g == lowerbound then beta := g + 1 else beta := g;
        g := AlphaBetaWithMemory(root, beta - 1, beta, d);
        if g < beta then upperbound := g else lowerbound := g;
    until lowerbound >= upperbound;
    return g;
```

That's right. The entire algorithm is only ten lines of code.

The algorithm works by calling AlphaBetaWithMemory a number of times with a search
window of zero size. The search works by zooming in on the minimax
value.
Each AlphaBeta call returns a bound on the minimax
value. The bounds are stored in *upperbound* and
*lowerbound*, forming an interval around the true minimax value
for that search depth. Plus and minus INFINITY is shorthand for values
outside the range of leaf values.
When the upper and lower bounds collide, the minimax value is
found.

MTD(*f*) gets its efficiency from doing only zero-window alpha-beta
searches, and using a "good" bound (variable *beta*) to
do those zero-window searches. Conventionally AlphaBeta is called with
a wide search window, as in
AlphaBeta(*root*, -INFINITY, +INFINITY, *depth*),
making sure that the return value lies between the value of alpha and
beta. In MTD(*f*) a window of zero size is used, so that on each
call AlphaBeta will either fail high or fail low, returning a lower
bound or an upper bound on the minimax value, respectively. Zero
window calls cause more cutoffs, but return less information -
only a bound on the minimax value. To nevertheless find it,
MTD(*f*) has to call AlphaBeta a number of times, converging
towards it. The overhead of re-exploring parts of the search tree in
repeated calls to AlphaBeta disappears when using a version of
AlphaBeta that stores and retrieves the nodes it sees in memory.

In order to work, MTD(*f*) needs a "first guess" as to where
the minimax value will turn out to be. The better that first guess is,
the more efficient the algorithm will be, on average, since the better
it is, the fewer passes the repeat-until loop will have to do to
converge on the minimax value. If you feed MTD(*f*) the minimax
value to start with, it will only do two passes, the bare minimum: one
to find an upper bound of value *x*, and one to find a lower
bound of the same value.
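To make the convergence concrete, here is a small runnable sketch of the driver loop. The toy game tree and the `alphabeta` helper are illustrative assumptions: a plain fail-soft alpha-beta without memory, which is enough for correctness on a tree this small.

```python
# Runnable sketch of the MTD(f) driver loop. The toy tree and the
# alphabeta() helper are illustrative assumptions: a plain fail-soft
# alpha-beta without memory (fine for a tree this small).
INF = 10**9

# Two-ply tree: a max root over three min nodes; minimax value is 3.
TREE = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]

def alphabeta(alpha, beta):
    best = -INF
    for child in TREE:
        worst = INF
        for leaf in child:
            worst = min(worst, leaf)
            if worst <= alpha:
                break                      # cutoff at the min node
        best = max(best, worst)
        if best >= beta:
            break                          # cutoff at the max root
        alpha = max(alpha, best)
    return best

def mtdf(f):
    g, upperbound, lowerbound = f, INF, -INF
    passes = 0
    while lowerbound < upperbound:
        beta = g + 1 if g == lowerbound else g
        g = alphabeta(beta - 1, beta)      # zero-window call
        if g < beta:
            upperbound = g                 # failed low: upper bound
        else:
            lowerbound = g                 # failed high: lower bound
        passes += 1
    return g, passes

print(mtdf(0))   # (3, 2): minimax value 3, found in 2 passes
```

Feeding it the minimax value itself, `mtdf(3)`, also takes exactly two passes, the bare minimum mentioned above: one to establish an upper bound of 3 and one for a lower bound of 3.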

Typically, one would call MTD(*f*) in an iterative deepening framework.
A natural choice for a first guess is to use the value of the previous
iteration, like this:

```
function iterative_deepening(root : node_type) : integer;
    firstguess := 0;
    for d = 1 to MAX_SEARCH_DEPTH do
        firstguess := MTDF(root, firstguess, d);
        if times_up() then break;
    return firstguess;
```

In a real program you're not only interested in the value of the minimax tree, but also in the best move that goes with it. In the interest of brevity that is not shown in the above pseudo code.
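As a hedged sketch of one way to keep track of the move (the toy tree, the `root_search` helper, and its move bookkeeping are assumptions, not the page's code), the root's children can be searched explicitly, remembering the move that proved the most recent lower bound:

```python
# Sketch: recovering the best root move alongside the minimax value.
# The toy tree and helper names are illustrative assumptions.
INF = 10**9
TREE = {"a": [3, 12, 8], "b": [2, 4, 6], "c": [14, 5, 2]}  # value 3 via "a"

def root_search(alpha, beta):
    # One-ply-unrolled fail-soft alpha-beta; returns (bound, move).
    best, best_move = -INF, None
    for move, leaves in TREE.items():
        worst = INF
        for leaf in leaves:
            worst = min(worst, leaf)
            if worst <= alpha:
                break                      # cutoff at the min node
        if worst > best:
            best, best_move = worst, move
        if best >= beta:
            break                          # cutoff at the root
        alpha = max(alpha, best)
    return best, best_move

def mtdf_with_move(f):
    g, upperbound, lowerbound, move = f, INF, -INF, None
    while lowerbound < upperbound:
        beta = g + 1 if g == lowerbound else g
        g, m = root_search(beta - 1, beta)
        if g >= beta:
            lowerbound, move = g, m        # fail high: trust this move
        else:
            upperbound = g                 # fail low: move not trusted
    return g, move

print(mtdf_with_move(0))   # (3, 'a')
```

Only a fail-high search is trusted for the move, since a fail-low search merely proves an upper bound and need not have found the best child.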

In case your program has a strong oscillation in the values it finds
for odd and even search depths, you might be better off by feeding MTD(*f*)
its return value of two plies ago, not one, as the above code
does. MTD(*f*) works best with a stable Principal Variation
(doesn't every program?). Although the transposition table greatly
reduces the cost of doing a re-search, it is still a good idea to not
re-search excessively. As a rule, in the deeper iterations of quiet
positions in good programs MTD(*f*) typically performs between 5
and 15 passes before it finds the minimax value.

The name of the algorithm is short for MTD(*n, f*), which
stands for something like Memory-enhanced Test Driver with node
*n* and value *f*. MTD is the name of a group of
driver-algorithms that search minimax trees using zero window
AlphaBetaWithMemory calls. Judea Pearl has named zero
window AlphaBeta calls "Test", in his seminal papers on the Scout
algorithm (the basis for Reinefeld's NegaScout). Adding memory to
Test makes it possible to use it in re-searches, creating a group of
simple yet efficient algorithms.

MTD(*f*) is simple in that it only does zero window AlphaBeta
calls, making reasoning about the parts of the tree that get traversed
easier than with algorithms that use wide window calls, such as
NegaScout and the standard AlphaBeta. Actually, the difficulty in
analyzing ordinary AlphaBeta was precisely the reason why Pearl
introduced his Test in the first place. The AlphaBeta
versions shown on this page can be simplified to use a
single input bound, instead of both alpha and beta, since alpha is
always one less than beta.

Especially in a parallel setting the simplicity of MTD(*f*) compared to
NegaScout is valuable. Designing and debugging a parallel search
routine is a complex affair. MTD(*f*) only needs a zero window
search, a Test. Instead of two bounds, MTD(*f*) needs
one. In NegaScout, when new values for the search window become
available they have to be communicated asynchronously to the child
processes; in MTD(*f*) you simply abort an entire subtree when a cutoff
happens. Furthermore, the recursive search code does not spawn
re-searches anymore. All re-searching is done at the root, where
things are simpler than down in the parallel tree.
The large body of research on
parallelizing AlphaBeta and NegaScout is directly applicable to MTD
instances, since they use zero-window AlphaBeta calls to do the tree
searching. See for example Bradley Kuszmaul's Jamboree
search or Rainer
Feldmann's Young
Brothers Wait Concept. If your AlphaBeta is parallel, then your MTD(*f*) is
parallel.

Incidentally, one of the MTD instances is equivalent to SSS*, George Stockman's best-first minimax algorithm that promised to be more
efficient than AlphaBeta. (By equivalent I mean that the two
algorithms look different, but search the same nodes.) This equivalence
made the first practical tests of SSS* in full-fledged game-playing
programs feasible, shedding
new and unexpected light, after
more than 15 years, on the questions posed
in Stockman's 1979 article. See our 1996 article
in Artificial Intelligence for more on this subject. (And yes,
MTD(*f*) is also better than SSS*, in case you wondered.)

Another instance of the MTD framework is equivalent to
K. Coplan's C*
algorithm. Jean-Christophe
Weill has published a number of papers on experiments with a
negamax version of C*. In MTD terms the idea of C* is to bisect the
interval formed by the upper and lower bounds, reducing the number of
AlphaBetaWithMemory calls. On the down side, bisection yields a value
for the search window, *beta*, that turns out to be not as
efficient as MTD(*f*)'s choice. But still, Weill's work indicates
that it is worthwhile to experiment with variants on MTD(*f*)'s
choice of pivot value; leaving ample room for more research.
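A sketch of the bisection idea in MTD terms (the toy tree, helper names, and the initial [-100, 100] value range are assumptions): probe halfway between the current bounds instead of at the last returned value.

```python
# Sketch of a C*-style bisection driver in MTD terms: the pivot for
# each zero-window probe is the midpoint of the current bounds.
# Toy tree and names are illustrative assumptions.
INF = 10**9
TREE = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # minimax value 3

def alphabeta(alpha, beta):
    # Plain fail-soft alpha-beta over the toy tree.
    best = -INF
    for child in TREE:
        worst = INF
        for leaf in child:
            worst = min(worst, leaf)
            if worst <= alpha:
                break
        best = max(best, worst)
        if best >= beta:
            break
        alpha = max(alpha, best)
    return best

def mtd_bisect(lo=-100, hi=100):
    # Assumes the true value lies in [lo, hi]; bisect until bounds meet.
    while lo < hi:
        beta = (lo + hi + 1) // 2        # pivot between the bounds
        g = alphabeta(beta - 1, beta)    # zero-window probe
        if g < beta:
            hi = g                       # failed low: upper bound
        else:
            lo = g                       # failed high: lower bound
    return lo

print(mtd_bisect())   # 3
```

Bisection needs fewer probes in the worst case, but as noted above its pivot tends to be a less efficient search window in practice than MTD(*f*)'s.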

To be sure, here's a minimax version of the pseudo code of AlphaBetaWithMemory. The transposition table access code is the same as what is used in most tournament chess, checkers, and Othello programs.

```
function AlphaBetaWithMemory(n : node_type; alpha, beta, d : integer) : integer;
    if retrieve(n) == OK then /* Transposition table lookup */
        if n.lowerbound >= beta then return n.lowerbound;
        if n.upperbound <= alpha then return n.upperbound;
        alpha := max(alpha, n.lowerbound);
        beta := min(beta, n.upperbound);
    if d == 0 then g := evaluate(n); /* leaf node */
    else if n == MAXNODE then
        g := -INFINITY; a := alpha; /* save original alpha value */
        c := firstchild(n);
        while (g < beta) and (c != NOCHILD) do
            g := max(g, AlphaBetaWithMemory(c, a, beta, d - 1));
            a := max(a, g);
            c := nextbrother(c);
    else /* n is a MINNODE */
        g := +INFINITY; b := beta; /* save original beta value */
        c := firstchild(n);
        while (g > alpha) and (c != NOCHILD) do
            g := min(g, AlphaBetaWithMemory(c, alpha, b, d - 1));
            b := min(b, g);
            c := nextbrother(c);
    /* Traditional transposition table storing of bounds */
    /* Fail low result implies an upper bound */
    if g <= alpha then
        n.upperbound := g;
        store n.upperbound;
    /* Found an accurate minimax value - will not occur if called with zero window */
    if g > alpha and g < beta then
        n.lowerbound := g;
        n.upperbound := g;
        store n.lowerbound, n.upperbound;
    /* Fail high result implies a lower bound */
    if g >= beta then
        n.lowerbound := g;
        store n.lowerbound;
    return g;
```

Text books on Artificial Intelligence typically discuss a version of
AlphaBeta that does *not* use memory. Therefore, to avoid any
confusion, and even though the use of transposition tables is standard
practice in the game playing community, the fact that MTD(*f*)
needs a memory-enhanced searcher is stressed here. (Yes, I dislike the
name AlphaBetaWithMemory too. Life would be so much simpler if AI text
books would discuss useful AlphaBeta versions.)
The AlphaBetaWithMemory code is given in the interest of completeness.
If you already have a chess program that uses AlphaBeta or NegaScout
(minimax or negamax make no difference) and it uses
a transposition table, then in all likelihood it will work right
away. In none of the programs that I tried did I have to change the
existing AlphaBeta code (actually, in all cases a negamax version of
NegaScout) to get the transposition table to work properly.

Some programs unroll the search of the root node, for example, to
do extra move sorting at the root. In MTD(*f*)
you can do this too. In the interest of cleanliness this is not shown
in the pseudo code.

Sometimes forward pruning or search extensions that depend on alpha
and beta values react in surprising ways to a search that consists of
zero window calls only. You may have to do some re-tuning to get it
all in synch again. Keep in mind that the
size of the search tree is quite sensitive to the value of the search
window; a strongly oscillating "first guess" is a bad thing. Also, if
it weren't for the transposition table, MTD(*f*)'s re-searches
would cause a lot of overhead. Make sure that the transposition table
works properly. MTD(*f*) does more re-searches than NegaScout,
and tends to take a bigger penalty from a badly functioning transposition
table. Consider storing leaf and quiescence nodes in the table.
Experiment with different sizes. There is, however, a limit beyond
which expanding the table becomes pointless. That limit should be about the
same for NegaScout and MTD(*f*).
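As a minimal sketch of the bound-storage discipline behind a working table (the entry layout and helper names are assumptions, not the code of any particular program): each entry keeps a lower and an upper bound, updated according to how the search result relates to the window.

```python
# Minimal sketch of transposition-table bound storage; the (key, depth)
# entry layout and helper names are illustrative assumptions.
INF = 10**9
table = {}   # (key, depth) -> (lowerbound, upperbound)

def store(key, depth, g, alpha, beta):
    lo, hi = table.get((key, depth), (-INF, INF))
    if g <= alpha:            # fail low: g is an upper bound
        hi = min(hi, g)
    elif g >= beta:           # fail high: g is a lower bound
        lo = max(lo, g)
    else:                     # exact value (wide-window call only)
        lo = hi = g
    table[(key, depth)] = (lo, hi)

def probe(key, depth, alpha, beta):
    # Returns a value if the stored bounds prove a cutoff, else None.
    lo, hi = table.get((key, depth), (-INF, INF))
    if lo >= beta:
        return lo
    if hi <= alpha:
        return hi
    return None

store("n1", 4, 7, alpha=4, beta=5)       # zero-window fail high
print(probe("n1", 4, alpha=4, beta=5))   # 7: stored bound proves cutoff
```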

Some tips to keep in mind when doing experiments to better understand the behavior of your search algorithm: The size of the search tree can differ significantly from position to position. Use a large test set in your experiments. Try different set-ups, such as: without null-move pruning, without extensions, disregard counting of quiescence nodes, different transposition table sizes and storage schemes, disable some parts of the move ordering. As in all debugging, if you want to understand the search better, the idea is to disable as much smarts as possible, to be able to study the behavior of a clean, noise-less, algorithm. (Sure, this will take a lot of time, but it can be quite rewarding to get to understand your program better.)

To summarize, the core ideas of MTD(*f*) are:

- The narrower the AlphaBeta window, the more cutoffs you get, and the more efficient the search is. Hence MTD(*f*) uses only search windows of zero size.
- Zero window AlphaBeta calls return bounds. At the root of the tree the returned bounds are stored in *upperbound* (after AlphaBeta "failed low") and *lowerbound* (after AlphaBeta "failed high"). The bounds delimit the range of possible values for the minimax value. Each time MTD(*f*) calls AlphaBeta it gets a value back that narrows the range, and the algorithm is one step closer to hitting the minimax value.
- Storing nodes in memory gets rid of the overhead inherent in multiple re-searches. A transposition table of sufficient size does the job.
- It is more efficient to start the search close to its goal. Therefore MTD(*f*) needs (and can make use of) a good first guess.

For reference, here is Reinefeld's NegaScout algorithm:

```
int NegaScout ( position p, int alpha, int beta )
{                               /* compute minimax value of position p */
    int a, b, t, i;
    determine successors p_1,...,p_w of p;
    if ( w == 0 )
        return ( Evaluate(p) ); /* leaf node */
    a = alpha;
    b = beta;
    for ( i = 1; i <= w; i++ ) {
        t = -NegaScout ( p_i, -b, -a );
        if ( (t > a) && (t < beta) && (i > 1) && (d < maxdepth-1) )
            a = -NegaScout ( p_i, -beta, -t );   /* re-search */
        a = max( a, t );
        if ( a >= beta )
            return ( a );       /* cut-off */
        b = a + 1;              /* set new null window */
    }
    return ( a );
}
```

- The MTD(*f*) algorithm and the experiments that showed that it worked in practice were described in the award-winning University of Alberta technical report 94-18, entitled "A New Paradigm for Minimax Search" (a.k.a. Erasmus University report EUR-CS-95-03).
- It was subsequently published in a short paper at IJCAI-95 with the highly hyphenated title "Best-First Fixed-Depth Game-Tree Search in Practice".
- MTD(*f*) was described in a longer article in the November 1996 issue of the journal Artificial Intelligence entitled "Best-First Fixed-Depth Minimax Algorithms", which also deals with the relation of other MTD instances to SSS*-like best-first minimax algorithms.
- The algorithm is one of the topics of my 1996 PhD thesis "Research Re: search & Re-search".

Want a few useful links on computer chess programming? Try these:

- The Anatomy of Chess Programs, by Tony Marsland, is a nice intro on how current chess programs work.
- Yngvi's Chess Page
- Chess Space
- Paul Verhelst
- Bibliography on Minimax Algorithms.
- Crafty is a strong program whose source code is freely available and, for a chess program, quite readable.

If you have a question on MTD(*f*) and minimax research you
are welcome to send me a note. Bear in mind, however, that I have other
duties besides answering questions. The speed of a response increases
with the relevance to my research.
Please take some time to look for answers
yourself, browsing the web or your library.
I am, however, definitely interested in your experience with
MTD(*f*), and in research in search algorithms, and parallel
algorithms in general.

Thanks to Peter Kouwenhoven and Yngvi Bjornsson for suggestions for improvement of this page.
