Persistent Hash Table

**Persistent Hash Table**, ([|persistent] transposition table), a form of long-term memory to remember "important" nodes from earlier analyzed positions or played games along with their exact scores and depths, in order to avoid repeating unsuccessful book lines. So-called [|orthogonal or transparent] persistent hash tables preserve their contents, with focus on PV-nodes, between moves while playing a game, between games, and even during interactive analysis while playing variations forward and backward. **Non-orthogonal** persistence, the primary topic of this page, requires data to be explicitly written to or read from a storage device by specific program instructions, and requires mappings between the program's native data structures and the storage device's data structures.

 * [[image:DisintegrationofPersistence.jpg width="320" link="https://en.wikipedia.org/wiki/The_Disintegration_of_the_Persistence_of_Memory"]]

Inspired by [|rote learning] as used in Arthur Samuel's Checkers program from 1959, David Slate first described a persistent transposition table in computer chess, accumulating selected information from many games and utilizing it subsequently via the transposition table. In his article, Slate mentions personal communication with Tony Scherzer and Tony Warnock regarding learning in Bebe and Lachex.

=Learning in Mouse=
Slate's simple brute-force program //Mouse//, a depth-first, full-width iterated alpha-beta searcher with an evaluation purely based on material, was used as a learning [|testbed], only remembering positions where a significant score drop occurred at the root.
 * [|The Disintegration of the Persistence of Memory]

==Transposition Table==
A relatively small transposition table of 4096 buckets (8192 entries) was used for the examples in his paper:
code
6 bytes    Hash code
2x2 bytes  Score window
2 bytes    Best Move, if any
10 bits    Game ply
6 bits     Ply from root node
1 byte     Search height
1 byte     Origin indication
code
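For illustration, this layout can be rendered as a 16-byte C struct. This is a hypothetical reconstruction: the field names, types, and the packing of the two sub-byte counters into one shared 16-bit word are assumptions, not Slate's original declarations.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical reconstruction of Mouse's 16-byte entry (names assumed):
   6 bytes hash code, 2x2 bytes score window, 2 bytes best move,
   10 bits game ply and 6 bits ply from root sharing a 16-bit word,
   1 byte search height, 1 byte origin indication. */
typedef struct {
    uint8_t  hash[6];      /* 6-byte hash code */
    int16_t  score_lo;     /* score window, lower bound */
    int16_t  score_hi;     /* score window, upper bound */
    uint16_t best_move;    /* best move, if any */
    uint16_t ply_field;    /* 10 bits game ply | 6 bits ply from root */
    uint8_t  height;       /* search height */
    uint8_t  origin;       /* origin indication */
} MouseEntry;              /* 6+2+2+2+2+1+1 = 16 bytes */

/* pack and unpack the two sub-byte counters sharing one 16-bit word */
static uint16_t pack_ply(unsigned game_ply, unsigned ply_from_root) {
    return (uint16_t)((game_ply & 0x3FF) | ((ply_from_root & 0x3F) << 10));
}
static unsigned game_ply(uint16_t f)      { return f & 0x3FF; }
static unsigned ply_from_root(uint16_t f) { return f >> 10; }
```

With these widths a game ply up to 1023 and a root distance up to 63 fit in the shared word, which is how sub-byte fields keep the whole entry at 16 bytes.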

==Algorithm==
The basic learning algorithm stores root entries to disk if the final score of the chosen move is significantly worse than the best score of any of the previous iterations. Between searches during the playing session, relevant portions of the retained entries were loaded into their slots in the transposition table, with their bounds adjusted by a fuzz term, and flagged by origin to secure them from being indiscriminately overwritten.

=Learning in Bebe=
Tony and Linda Scherzer, and Dean Tjaden further elaborate on the persistent hash table in their award-winning paper on Learning in Bebe:

==Short Term Memory==
The short term memory (STM) or transposition table slot consists of 16 bytes, of which 12 are stored, and 4 bytes are implicit as a memory address. Upper and lower limits of the score are needed for the easiest implementation of the learning algorithm:
code
4 bytes  Hash code used as STM memory address
4 bytes  Hash code used for match verification
2 bytes  Search height
2 bytes  Position-score lower limit
2 bytes  Position-score upper limit
2 bytes  The move
code
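The split of the hash code into an implicit address part and a stored verification part can be sketched as follows. This is a minimal sketch under assumptions: the names, the 64-bit key split, and the table size are illustrative, not Bebe's actual code.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of a Bebe-style STM slot: the first 4 hash bytes are implicit
   as the slot address, so only the remaining 12 bytes are stored. */
typedef struct {
    uint32_t verify;    /* hash code used for match verification */
    int16_t  height;    /* search height */
    int16_t  score_lo;  /* position-score lower limit */
    int16_t  score_hi;  /* position-score upper limit */
    uint16_t move;      /* the move */
} StmSlot;              /* 12 stored bytes */

#define STM_SLOTS (1u << 16)   /* assumed table size, power of two */
static StmSlot stm[STM_SLOTS];

/* low key bits select the slot (implicit address), high bits verify */
static StmSlot *stm_probe(uint64_t key) {
    StmSlot *s = &stm[(uint32_t)key & (STM_SLOTS - 1)];
    return s->verify == (uint32_t)(key >> 32) ? s : NULL;
}

static void stm_store(uint64_t key, int16_t lo, int16_t hi,
                      int16_t height, uint16_t move) {
    StmSlot *s = &stm[(uint32_t)key & (STM_SLOTS - 1)];
    s->verify   = (uint32_t)(key >> 32);
    s->height   = height;
    s->score_lo = lo;
    s->score_hi = hi;
    s->move     = move;
}
```

A probe on a key whose address bits collide but whose verification bits differ simply misses, which is the point of storing the second hash-code half.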

==Long Term Memory==
The long term memory (LTM) entries are stored on disk and are therefore retained between games. The structure is similar, however all 16 bytes are stored:
code
4 bytes  Hash code used as STM memory address
4 bytes  Hash code used for match verification
2 bytes  Depth of search
2 bytes  Move number
2 bytes  Position-score
2 bytes  The move
code
One LTM entry is created for each root node during the game.
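Since all 16 bytes go to disk, LTM records can be appended to and scanned from a flat learn file. A minimal sketch, assuming a naturally packed 16-byte record and a simple binary file format (names and file handling are illustrative, not Bebe's):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of a Bebe-style LTM record: all 16 bytes are written to disk,
   one record per root position of the game. */
typedef struct {
    uint32_t address;   /* hash code used as STM memory address */
    uint32_t verify;    /* hash code used for match verification */
    int16_t  depth;     /* depth of search */
    int16_t  move_no;   /* move number */
    int16_t  score;     /* position-score */
    uint16_t move;      /* the move */
} LtmEntry;             /* 4+4+2+2+2+2 = 16 bytes, no padding */

/* append one record to the learn file; returns 0 on success */
static int ltm_append(FILE *f, const LtmEntry *e) {
    return fwrite(e, sizeof *e, 1, f) == 1 ? 0 : -1;
}

/* read the next record; returns 1 on success, 0 at end of file */
static int ltm_next(FILE *f, LtmEntry *e) {
    return fread(e, sizeof *e, 1, f) == 1;
}
```

At the start of a search the whole file can be scanned with `ltm_next` and each record transformed into its STM slot.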

==Algorithm==
The algorithm consists of two phases. The first creates (or overwrites) the LTM entries at the end of each search, considering a contempt factor, while the second transforms and copies LTM entries to STM at the start of each search:
code
Position-score lower limit = Position-score - fuzzy tolerance
Position-score upper limit = Position-score + fuzzy tolerance
code
with a fuzzy tolerance of up to 0.2 pawn units for scores that are neither draw nor mate scores.
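The second phase's score-to-window transformation can be sketched as below. The centipawn scale, the mate bound, and treating a score of exactly zero as a draw score are assumptions for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

#define FUZZ       20     /* fuzzy tolerance in centipawns (0.2 pawn units) */
#define MATE_BOUND 30000  /* assumed: scores at or beyond this encode mates */

/* Phase-two sketch: widen a stored LTM point score into the STM score
   window. Draw and mate scores get no fuzz, they are kept exact. */
static void ltm_to_stm_window(int16_t score, int16_t *lo, int16_t *hi) {
    int fuzz = (score == 0 || abs(score) >= MATE_BOUND) ? 0 : FUZZ;
    *lo = (int16_t)(score - fuzz);
    *hi = (int16_t)(score + fuzz);
}
```

The widened window lets the next search accept the learned entry as a bound rather than an exact score, so a slightly different search can still improve on it.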

=Position Learning in Crafty=
Quote from //Crafty Command Documentation// (version 18) by Robert Hyatt:

=See also=
 * Book Learning
 * Hash Table
 * Memory
 * Transposition Table

=Selected Publications=
 * Arthur Samuel (**1959**). //[|Some Studies in Machine Learning Using the Game of Checkers]//. IBM Journal, July 1959
 * Arthur Samuel (**1967**). //Some Studies in Machine Learning Using the Game of Checkers. II - Recent Progress//. [|pdf]
 * David Slate (**1987**). //A Chess Program that uses its Transposition Table to Learn from Experience.// ICCA Journal, Vol. 10, No. 2
 * Tony Scherzer, Linda Scherzer, Dean Tjaden (**1990**). //Learning in Bebe.// Computers, Chess, and Cognition » Mephisto Best-Publication Award
 * Tony Scherzer, Linda Scherzer, Dean Tjaden (**1991**). //Learning in Bebe.// ICCA Journal, Vol. 14, No. 4

=Forum Posts=

1990 ...

 * [|Machine Learning Experience] by Mike Valvo, rgc, January 22, 1990 » Learning in Bebe

2000 ...

 * [|Simple Learning Technique and Random Play] by Miguel A. Ballicora, CCC, January 18, 2001 » Search with Random Leaf Values
 * [|Re: Rybka 1.01 Beta14 - persistent hash table] by Vasik Rajlich, CCC, February 06, 2006
 * [|Re: Idea for opening book] by Mark Lefler, Winboard Forum, August 02, 2006
 * [|Re: Ed Does it Again! (and a question for Ed)] by Michael Sherwin, CCC, July 03, 2007
 * [|A simple book learning method] by Alvaro Cardoso, CCC, June 12, 2008 » Book Learning
 * [|Persistent Hash] by [|Ted Summers], CCC, July 24, 2008

2010 ...
 * [|Cumulative building of a shared search tree] by Bojun Guo, CCC, December 28, 2016 » Chinese Chess, Opening Book
 * [|My "official" request to top engine programmers] by Rodolfo Leoni, CCC, July 04, 2017
  * [|Re: My "official" request to top engine programmers] by Harm Geert Muller, CCC, July 05, 2017
 * [|Improving hash replacing schema for analysis mode] by Daniel José Queraltó, CCC, July 05, 2017 » Replacement Strategy
 * [|Andscacs new PH feature: first impressions] by Rodolfo Leoni, CCC, July 08, 2017 » Andscacs
 * [|Stockfish version with hash saving capability] by Daniel José Queraltó, CCC, July 25, 2017 » Stockfish
 * [|Will you use hash saving?] by Daniel José Queraltó, CCC, July 30, 2017
 * [|Persistent hashes - performance] by Rodolfo Leoni, CCC, December 06, 2017

=External Links=
 * [|Crafty Command Documentation (version 18)] - What is this new Position Learning I've heard about?
 * [|Rybka 3 Persistent Hash] from [|Rybka - for the serious chess player]
 * [|The Persistent Hashtable: A Quick-and-Dirty Database] by [|Greg Travis], [|Developer.com]
 * [|Persistence (computer science) from Wikipedia]
 * [|Rote learning from Wikipedia]
