
A Transposition Table,
first used in Greenblatt's program Mac Hack VI [1] [2] [3], is a database that stores results of previously performed searches. It is a way to greatly reduce the search space of a chess tree with little negative impact. Chess programs, during their brute-force search, encounter the same positions again and again, but reached by different sequences of moves, a phenomenon called transposition.

Transposition (and refutation) tables are techniques derived from dynamic programming [4] , a term coined by Richard E. Bellman in the 1950s, when programming meant planning, and dynamic programming was conceived to optimally plan multistage processes [5] .
Salvador Dalí, The Persistence of Memory 1931

How it works

When the search encounters a transposition, it is beneficial to 'remember' what was determined last time the position was examined, rather than redoing the entire search. For this reason, chess programs have a transposition table: a large hash table storing information about positions previously searched, how deeply they were searched, and what was concluded about them. Even if the depth (draft) of the matching transposition table entry is not big enough, or it does not contain the right bound for a cutoff, a best (or good enough) move from a previous search can improve move ordering and save search time. This is especially true inside an iterative deepening framework, where one gains valuable table hits from previous iterations.

Hash functions

Hash functions convert chess positions into an almost unique, scalar signature, allowing fast index calculation as well as space-saving verification of stored positions.

Both the more common Zobrist hashing and BCH hashing use fast hash functions to provide hash keys or signatures as a kind of Gödel number of chess positions, today typically 64 bits wide. They are updated incrementally during make and unmake move, either by the own-inverse exclusive or, or by addition versus subtraction.
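The incremental exclusive-or update can be sketched as follows for a quiet move. The table layout (12 piece codes times 64 squares) and the SplitMix64 generator are illustrative assumptions; any good pseudo-random source works.

```c
#include <stdint.h>

/* Pseudo-random Zobrist tables; a real engine fills these once at startup. */
static uint64_t zobrist_piece[12][64];
static uint64_t zobrist_side_to_move;

/* SplitMix64, a simple mixing PRNG (an assumption for this sketch). */
static uint64_t splitmix64(uint64_t *state) {
    uint64_t z = (*state += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

void zobrist_init(void) {
    uint64_t seed = 0x0123456789ABCDEFULL;
    for (int p = 0; p < 12; p++)
        for (int sq = 0; sq < 64; sq++)
            zobrist_piece[p][sq] = splitmix64(&seed);
    zobrist_side_to_move = splitmix64(&seed);
}

/* Exclusive or is its own inverse: applying the same update twice restores
   the key, which is exactly what unmake move relies on. */
uint64_t zobrist_update(uint64_t key, int piece, int from, int to) {
    key ^= zobrist_piece[piece][from];   /* remove the piece from its origin */
    key ^= zobrist_piece[piece][to];     /* put it on the target square      */
    key ^= zobrist_side_to_move;         /* flip the side to move            */
    return key;
}
```

Captures, castling, en passant and promotion need additional exclusive-or terms, but follow the same own-inverse pattern.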

Address Calculation

The index is not based on the entire hash key, because this is usually a 64-bit number and, with current hardware limitations, no hash table can be large enough to accommodate it. Therefore, the address or index is calculated as the signature modulo the number of entries; for power-of-two sized tables, this reduces to taking the lower part of the hash key, masked off by an 'and' instruction.
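Both variants of the address calculation look like this; the function names are illustrative:

```c
#include <stdint.h>

/* General case: index = signature modulo the number of entries. */
uint64_t tt_index_mod(uint64_t key, uint64_t num_entries) {
    return key % num_entries;
}

/* Power-of-two table: the modulo reduces to masking off the low bits,
   a single 'and' instruction. */
uint64_t tt_index_mask(uint64_t key, uint64_t num_entries_pow2) {
    return key & (num_entries_pow2 - 1);
}
```

For a power-of-two entry count, both functions yield the same index, but the mask avoids the comparatively slow division.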


The surjective mapping from chess positions to signatures, and to an even denser index range, implies collisions, where different positions share the same entries, for two different reasons: hopefully rare ambiguous keys (type-1 errors), and regularly occurring ambiguous indices (type-2 errors).


The typical cardinalities of positions and signatures inside the search reflect the likelihood of collisions:

Cardinalities of positions and signatures

  ≈ 10^46            upper bound for the number of reachable chess positions [6]
  2^64 ≈ 1.8·10^19   different 64 bit keys
  ≈ 10^10            distinct nodes searched per game, assuming 100 moves times 1e8 nodes per move
  2^32 ≈ 4.3·10^9    different 32 bit keys
  (some arbitrary table size in number of entries)

Index Collisions

Index collisions or type-2 errors [7] [8], where different hash keys map to the same entry, happen regularly. They require detection, realized by storing the signature as part of the hash entry, to check whether a stored entry matches the position while probing. Especially with power-of-two entry tables, many programmers choose to trade off space for accuracy and store only that part of the hash key not already implied by the index, or even less.
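A minimal sketch of this trade-off, assuming a power-of-two table whose index comes from the low key bits, so only the upper 32 bits need to be stored for verification (the table size and field layout are assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#define TT_SIZE (1u << 20)     /* power-of-two entry count (assumption) */

typedef struct {
    uint32_t check;            /* upper 32 key bits for verification */
    /* ... move, score, depth, flags would follow here ... */
} TTEntry;

static TTEntry table[TT_SIZE];

/* The low bits select the slot and are implied by the index, so storing
   the high bits suffices to detect index collisions while probing. */
TTEntry *tt_probe(uint64_t key) {
    TTEntry *e = &table[key & (TT_SIZE - 1)];
    if (e->check == (uint32_t)(key >> 32))
        return e;              /* signatures match: probable hit     */
    return NULL;               /* a different position lives here    */
}

void tt_store(uint64_t key) {
    TTEntry *e = &table[key & (TT_SIZE - 1)];
    e->check = (uint32_t)(key >> 32);
}
```

Storing fewer check bits saves space per entry but raises the chance of undetected key collisions, the type-1 errors discussed next.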

Key Collisions

Key collisions or type-1 errors are inherent in using signatures with far fewer bits than required to encode all reachable chess positions. A key collision occurs when two different positions map to the same hash key or signature [9] [10]. When storing only a partial key, the chance of a collision greatly increases. Accepting only hits whose stored move is pseudo-legal in the current position decreases the chance of type-1 errors. On his computer chess page [11], Ken Regan wrote about chess engine bugs, anomalies and hash collisions, and mentions a Shredder 9.1 game where a key collision might have caused a strange move [12] [13].

Bits Required

During the WCCC 1989 Workshop New Directions in Game-Tree Search, James Gillogly, author of Tech, discussed transposition table collisions [14] . He produced the following table using the Birthday Paradox, where the columns are the number of positions stored and the rows are the probability of collision. The entries are the number of bits of combined address and check hash required to reduce the probability of collision to the desired amount.
(Table omitted: bits required, by number of positions stored and desired collision probability.)

During the discussion, David Slate and Ken Thompson pointed out that the Birthday Paradox is not applicable to most programs, since the hash table will fill up and not all previous positions will be in the table; thus these figures must be regarded as an upper bound on the number of bits required for safety [15]. The dangers of transposition table collisions were further studied by Robert Hyatt and Anthony Cozzie in their 2005 paper The Effect of Hash Signature Collisions in a Chess Program [16]. They gave a surprising answer to the question “Is it really worth all the effort to absolutely minimize signature collisions?”, and concluded that 64-bit signatures are more than sufficient.

What Information is Stored

Typically, the following information, as determined by the search, is stored in each entry [17]: the (partial) hash signature, the best move found, the draft (depth) to which the position was searched, the score [18], the node type, and the entry's age.
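A possible entry layout is sketched below. Field names, widths and packing are illustrative assumptions; programs differ in what they store and how tightly they pack it.

```c
#include <stdint.h>

/* One possible transposition table entry layout (illustrative). */
typedef struct {
    uint64_t key;      /* signature for collision detection        */
    uint16_t move;     /* best (or refutation) move found here     */
    int16_t  score;    /* value backed up by the search            */
    uint8_t  depth;    /* draft: depth of the subtree searched     */
    uint8_t  flag;     /* node type: exact, lower or upper bound   */
    uint8_t  age;      /* search the entry was written in          */
} TTEntry;
```

With this layout an entry occupies 16 bytes on common platforms, so four entries fit a typical 64-byte cacheline, which matters for the bucket systems described below.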

Table Entry Types

In an alpha-beta search, we usually do not find the exact value of a position. But we are happy to know that the value is either too low or too high for us to be concerned with searching any further. If we have the exact value, of course we store that in the transposition table. But if the value of our position is either high enough to set the lower bound, or low enough to set the upper bound, it is good to store that information also. So each entry in the transposition table is identified with the type of node, often referred to as exact, lower- or upper bound.
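The classification above can be sketched as follows; the enum names are conventional rather than fixed:

```c
/* Node types as stored in the table. */
enum { TT_EXACT, TT_LOWER, TT_UPPER };

/* After searching a node with original window (alpha0, beta), classify
   the returned value 'best':
   best >= beta   : fail-high, 'best' is only a lower bound on the true value
   best <= alpha0 : fail-low,  'best' is only an upper bound
   otherwise      : the search returned an exact value */
int tt_flag(int best, int alpha0, int beta) {
    if (best >= beta)   return TT_LOWER;
    if (best <= alpha0) return TT_UPPER;
    return TT_EXACT;
}
```

On a later probe, a lower bound of at least beta, or an upper bound of at most alpha, permits an immediate cutoff, provided the stored draft suffices.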

Replacement Strategies

Because there are a limited number of entries in a transposition table, and because in modern chess programs they can fill up very quickly, it is necessary to have a scheme by which the program can decide which entries would be most valuable to keep, i.e. a replacement scheme [19] . Replacement schemes are used to solve an index collision, when a program attempts to store a position in a table slot that already has a different entry in it. There are two opposing considerations to replacement schemes:
  • Entries that were searched to a high depth save more work per table hit than those searched to a low depth.
  • Entries that are closer to the leaves of the tree are more likely to be searched multiple times, giving them more table hits. Also, entries that were searched recently are more likely to be searched again.

Most well-performing replacement strategies use a mix of these considerations.

Always replace

This replacement strategy is very simple, placing all emphasis on the second consideration. Any old entries are replaced immediately when a new entry is stored [20] .

Priority by searched nodes count

This replacement strategy uses the number of nodes searched to obtain an entry as its replacement priority.

Priority by move ordering position

This replacement strategy uses the position of the entry's move in the move-ordering list as replacement priority. The main idea is that if the best move was not considered a good cutoff candidate by the move-ordering algorithm, storing it in the transposition table should help the search more.

Depth-preferred

This replacement strategy puts all emphasis on the first consideration. The only criterion for deciding whether to overwrite an entry is whether the new entry has a higher depth than the old one.

Two-tier system

This strategy, devised by Ken Thompson and Joe Condon [21] , uses two tables, side by side. For each table slot, there is a depth-preferred and an always-replace entry [22] .
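A minimal sketch of the two-tier store decision (field names are assumptions; real entries also carry move, score and flags):

```c
#include <stdint.h>

typedef struct { uint64_t key; int depth; } Slot;

/* One table slot holding a depth-preferred and an always-replace entry. */
typedef struct { Slot deep; Slot always; } TwoTier;

/* A new entry goes into the depth-preferred part only if it was searched
   at least as deeply as the occupant; otherwise it lands in the
   always-replace part, so shallow but recent results are never lost. */
void two_tier_store(TwoTier *slot, uint64_t key, int depth) {
    if (depth >= slot->deep.depth) {
        slot->deep.key = key;
        slot->deep.depth = depth;
    } else {
        slot->always.key = key;
        slot->always.depth = depth;
    }
}
```

Probing then checks both entries of the slot, preferring whichever signature matches.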

Bucket systems

This family of strategies is similar to the two-tier system, but any number of tiers (known as "buckets") can be used (typically the number is based on the size of a cacheline). The difference is that the buckets are not specific to one consideration, but rather the new entry overwrites the entry in the bucket with the lowest depth [23] .

Aging

Aging considers searches of different chess positions during game play. While early implementations, and programs relying on root pre-processing to guide search and evaluation, were obliged to clear the hash table between root positions, most of today's programs do not, in order to profit from entries of previous searches. Nevertheless, to avoid persistence of old entries which may no longer be reachable from the current root, aging is used to preferentially replace those entries with new ones, even if their draft and flags would otherwise protect them. To implement aging, one often stores and compares the current halfmove clock as age, typically modulo some power-of-two constant, depending on how many bits are used to store it inside an entry [24] [25].
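An aging check might look like this; the 6-bit age width and the use of the halfmove clock are assumptions, chosen to match a packed entry layout:

```c
#define AGE_BITS 6                        /* bits stored per entry (assumption) */
#define AGE_MASK ((1u << AGE_BITS) - 1)

/* Age of the current search, refreshed once per root position. */
static unsigned tt_age;

void tt_new_search(unsigned halfmove_clock) {
    tt_age = halfmove_clock & AGE_MASK;
}

/* An entry carrying a different age stems from an earlier search and may be
   replaced even if its draft would otherwise protect it. */
int tt_entry_is_old(unsigned entry_age) {
    return entry_age != tt_age;
}
```

The replacement code then treats stale entries as if they had minimal draft.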

TT and Parallel Search

A global transposition table, shared by multiple threads or processes, is essential for effective parallel search on modern multi-core CPUs, and may be accessed lock-less, as proposed by Robert Hyatt and Tim Mann [26].
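The core of the Hyatt/Mann lock-less scheme can be sketched as follows, assuming an entry consists of two 64-bit words, each of which is written atomically by the hardware:

```c
#include <stdint.h>

typedef struct {
    uint64_t key_xor_data;   /* key ^ data, written without a lock */
    uint64_t data;           /* packed move/score/depth/flags      */
} LocklessEntry;

/* If two threads interleave their stores, the entry ends up with one word
   from each writer; the consistency check in the probe then fails and the
   torn entry is treated as a miss instead of corrupting the search. */
void lockless_store(LocklessEntry *e, uint64_t key, uint64_t data) {
    e->key_xor_data = key ^ data;
    e->data = data;
}

int lockless_probe(LocklessEntry *e, uint64_t key, uint64_t *data_out) {
    uint64_t data = e->data;
    if ((e->key_xor_data ^ data) != key)
        return 0;            /* mismatch or torn write: treat as miss */
    *data_out = data;
    return 1;
}
```

The exclusive or makes the two words mutually consistent, so no lock or atomic read-modify-write is needed on the probe or store path.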

Further Hash Tables

Besides storing the best move and scores of the search trees, further hash tables are often used to cache other features.



  1. ^ Richard Greenblatt, Donald Eastlake and Stephen D. Crocker (1967). The Greenblatt Chess Program. Proceedings of the AFIPS Fall Joint Computer Conference, Vol. 31, pp. 801-810. Reprinted (1988) in Computer Chess Compendium, pdf from The Computer History Museum or as pdf or ps from DSpace at MIT
  2. ^ Albert Zobrist (1970). A New Hashing Method with Application for Game Playing. Technical Report #88, Computer Science Department, The University of Wisconsin, Madison, WI, USA. Reprinted (1990) in ICCA Journal, Vol. 13, No. 2, pdf
  3. ^ Re: Berliner vs. Botvinnik Some interesting points, post 8 by Bradley C. Kuszmaul, rgcc, November 06, 1996 » Transposition Table in Mac Hack
  4. ^ Algorithms that use dynamic programming from Wikipedia
  5. ^ Sanjoy Dasgupta, Christos H. Papadimitriou, Umesh Vazirani (2006). Algorithms. McGraw-Hill, ISBN: 978-0073523408, amazon, Chapter 6, Dynamic programming
  6. ^ Shirish Chinchalkar (1996). An Upper Bound for the Number of Reachable Positions. ICCA Journal, Vol. 19, No. 3
  7. ^ Albert Zobrist (1970). A New Hashing Method with Application for Game Playing. Technical Report #88, Computer Science Department, The University of Wisconsin, Madison, WI, USA. Reprinted (1990) in ICCA Journal, Vol. 13, No. 2, pdf
  8. ^ Collision probability by Dennis Breuker, rgcc, April 15, 1996
  9. ^ TT Key Collisions, Workarounds? by Clemens Pruell, CCC, August 16, 2011
  10. ^ how to measure frequency of hash collisions by Daniel Shawul, CCC, June 16, 2012
  11. ^ Computer Chess - Hash Collisions in Chess Engines, and What They May Mean... by Kenneth Wingate Regan
  12. ^ Pablo Lafuente vs Shredder (Computer) (2005) from chessgames.com
  13. ^ Current tournaments – Sanjin, Biel, Argentina, Israel, ChessBase News, July 21, 2005
  14. ^ James Gillogly (1989). Transposition Table Collisions. Workshop on New Directions in Game-Tree Search
  15. ^ James Gillogly (1989). New Directions in Game-Tree Search - First Workshop Session. ICCA Journal, Vol. 12, No. 2
  16. ^ Robert Hyatt, Anthony Cozzie (2005). The Effect of Hash Signature Collisions in a Chess Program. ICGA Journal, Vol. 28, No. 3
  17. ^ Dennis Breuker, Jos Uiterwijk, and Jaap van den Herik (1997). Information in Transposition Tables. Advances in Computer Chess 8
  18. ^ MTD(f)- and some PVS-implementations store distinct upper and lower bound scores, rather than one score with flags
  19. ^ Dennis Breuker, Jos Uiterwijk and Jaap van den Herik (1994). Replacement Schemes for Transposition Tables. ICCA Journal, Vol. 17, No. 4
  20. ^ Re: Transposition table usage in quiescent search? by Robert Hyatt, CCC, March 06, 2013
  21. ^ Joe Condon and Ken Thompson (1983). BELLE Chess Hardware. Advances in Computer Chess 3
  22. ^ Dennis Breuker, Jos Uiterwijk and Jaap van den Herik (1996). Replacement Schemes and Two-Level Tables. ICCA Journal, Vol. 19, No. 3
  23. ^ Don Beal and Martin C. Smith (1996). Multiple Probes of Transposition Tables. ICCA Journal, Vol. 19, No. 4
  24. ^ Transposition Age Tutorial by Dennis Sceviour, CCC, January 25, 2016
  25. ^ Hashtable aging by Martin Fierz, CCC, April 25, 2016
  26. ^ Robert Hyatt and Tim Mann (2002). A lock-less transposition table implementation for parallel search chess engines. ICGA Journal, Vol. 25, No. 1
  27. ^ Re: About random numbers and hashing by Sven Reichard, CCC, December 05, 2001
  28. ^ Transposition-driven scheduling - Wikipedia
  29. ^ Transposition driven scheduling by Daniel Shawul, CCC, April 04, 2013
  30. ^ Re. Fail low after fail high by Marcel van Kervinck, CCC, April 05, 2015 » Fail-Low, Fail-High
  31. ^ John Romein, Henri Bal, Jonathan Schaeffer, Aske Plaat (2002). A Performance Analysis of Transposition-Table-Driven Scheduling in Distributed Search. IEEE Transactions on Parallel and Distributed Systems, Vol. 13, No. 5, pp. 447–459. pdf
