Shared Hash Table

A Hash table or Transposition table which is accessed by various processes or threads simultaneously, running on [|multiple processors] or [|processor cores]. Shared hash tables are most often implemented as dynamically allocated memory treated as a global array. Due to [|memory protection] between processes, they require an [|Application programming interface] provided by the [|operating system] to allocate [|shared memory]. Threads may share the global memory of the process they belong to.

=Parallel Search=
Almost all parallel search algorithms on SMP or NUMA systems profit from probing hash entries written by other instances of the search, in the simplest form by instances of a sequential search algorithm which simultaneously search the same root position. The gains come from the effect of nondeterminism: each processor will finish the various subtrees in varying amounts of time, and as the search continues, these effects grow, making the search trees diverge. The [|speedup] is then based on how many nodes the main processor is able to skip thanks to transposition table entries. This approach had the reputation of yielding little speedup on a mere two processors and of scaling quite badly beyond that; however, its NPS scaling is nearly perfect.
[[image:Dining_philosophers.jpg link="https://en.wikipedia.org/wiki/Dining_philosophers_problem"]]
Dining philosophers problem

Lazy SMP
//see Main page: Lazy SMP//

Recent improvements by Dan Homan, Martin Sedlak and others on **Lazy SMP** indicate that the algorithm scales quite well up to 8 cores and beyond.

ABDADA
//see Main page: ABDADA//

ABDADA, Alpha-Bêta Distribué avec Droit d'Aînesse (Distributed Alpha-Beta Search with Eldest Son Right) is a loosely synchronized, distributed search algorithm by Jean-Christophe Weill. It is based on the Shared Hash Table, and stores the number of processors searching a node inside its hash-table entry, for better utilization considering the Young Brothers Wait Concept.
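
A minimal sketch of the idea, with hypothetical field and function names: each hash entry carries a counter of the processors currently searching the corresponding node, incremented on entry and decremented on exit, so that a node already being searched elsewhere can be deferred.

code format="cpp"
typedef unsigned long long U64;

typedef struct {
  U64 key;             /* Zobrist key of the position */
  U64 data;            /* move, score, draft, ...     */
  unsigned int nproc;  /* processors currently searching this node */
} ttentry;

/* entering and leaving a node; the updates must be synchronized,
   e.g. by a lock or an atomic increment */
void enter_node(ttentry *e) { ++e->nproc; }
void leave_node(ttentry *e) { --e->nproc; }

/* move ordering in the spirit of Young Brothers Wait: a younger
   brother whose entry shows nproc > 0 is postponed to a later pass */
int is_busy(const ttentry *e) { return e->nproc > 0; }
code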

=Concurrent Access=
Due to their size of 16 or more bytes, writing and reading hash entries is not [|atomic] and requires multiple write and read cycles. It may and will happen that [|concurrent] writes and reads at the same table address at almost the same time result in corrupted data being retrieved, which causes significant problems for the search. [|Interrupts] may occur between accesses, and there are further nondeterministic issues which may cause one thread to read two or more atomic data items written by different threads searching different positions with the same hash index due to type-2 errors.
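
To illustrate the hazard, a minimal sketch with a hypothetical entry layout: a 16-byte entry is written with two separate 8-byte stores, so a reader running between them can pair the key of one position with the data of another.

code format="cpp"
typedef unsigned long long U64;

#define TABLESIZE (1 << 20)

typedef struct {
  U64 key;    /* 64-bit Zobrist key      */
  U64 data;   /* move, score, draft, ... */
} ttentry;    /* 16 bytes: too large for one atomic access */

ttentry hashtable[TABLESIZE];

void store(U64 key, U64 data) {
  U64 index = key % TABLESIZE;
  hashtable[index].key  = key;   /* first 8-byte store                  */
  hashtable[index].data = data;  /* second 8-byte store; a concurrent   */
}                                /* reader in between sees a torn entry */
code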

Locks
One common solution to avoid such errors is [|synchronization] using [|atomic locks] to implement a [|critical section] or [|mutual exclusion].

Cilkchess
As an example, Cilkchess used Cilk's support for atomicity. It uses one lock per hash entry:

code format="cpp"
typedef struct {
  Cilk_lockvar lock;
  U64 key;
  U64 data;
} ttentry;

ttentry hashtable[TABLESIZE];

void ttinit() {
  for (int i = 0; i < TABLESIZE; ++i)
    Cilk_lock_init(hashtable[i].lock);
}

void update_entry(ttentry *e, U64 key, U64 data) {
  Cilk_lock(e->lock);    /* begin critical section */
  e->key  = key;
  e->data = data;
  ...
  Cilk_unlock(e->lock);  /* end critical section */
}
code

Granularity
An important property of a lock is its [|granularity], which is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently, because of increased lock contention. The coarser the lock, the higher the likelihood that it will stop an unrelated process from proceeding; the extreme case is one lock for the whole table. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data), as in the Cilkchess sample above, increases the overhead of the locks themselves but reduces lock contention. For a huge transposition table with millions of fairly small entries, locks incur a significant performance penalty on many architectures. A middle ground between the two extremes is sketched below.
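
A hedged sketch of such a middle ground, with illustrative names and stripe count: one lock protects a stripe of entries, rather than one lock per entry or one for the whole table.

code format="cpp"
#include <pthread.h>

typedef unsigned long long U64;
typedef struct { U64 key; U64 data; } ttentry;

#define TABLESIZE (1 << 20)
#define NUM_LOCKS 256   /* granularity knob: more locks mean less contention, more overhead */

ttentry         hashtable[TABLESIZE];
pthread_mutex_t locks[NUM_LOCKS];   /* lock i guards all entries with index % NUM_LOCKS == i */

void locks_init() {
  for (int i = 0; i < NUM_LOCKS; ++i)
    pthread_mutex_init(&locks[i], NULL);
}

void store_entry(U64 key, U64 data) {
  U64 index = key % TABLESIZE;
  pthread_mutex_t *m = &locks[index % NUM_LOCKS];
  pthread_mutex_lock(m);        /* begin critical section */
  hashtable[index].key  = key;
  hashtable[index].data = data;
  pthread_mutex_unlock(m);      /* end critical section   */
}
code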

Xor
Robert Hyatt and Tim Mann proposed a lock-less transposition table implementation for 128-bit entries consisting of two atomic quad words: one qword for the key or signature, the 64-bit Zobrist- or BCH-key of the position, and one qword for the other information stored, such as move, score and draft (data). Rather than storing two disjoint items, the key is stored xored with the data, while the data is additionally stored as usual. According to Robert Hyatt, the original idea came from Harry Nelson somewhere in 1990-1992.

code format="cpp"
index = key % TABLESIZE;
hashtable[index].key  = key ^ data;
hashtable[index].data = data;
code

Since a probing hit requires the retrieving position to have the same key, the stored key field xored with the probing key must match the stored data.

code format="cpp"
index = key % TABLESIZE;
if ((hashtable[index].key ^ hashtable[index].data) == key) {
  /* entry matches key */
}
code

If key and data were written simultaneously by different search instances with different keys, the error will usually result in a mismatch of the comparison, except for the rare but inherent Key collisions or type-1 errors. As pointed out by Harm Geert Muller, the XOR technique may be applied to entries of any size.

Checksum
For a lock-less shared hash table with (much) larger entry sizes, such as the Pawn Hash Table, one may store an additional [|checksum] of the data, to detect most errors after retrieval and to safeguard the consistency of an entry.
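
A minimal sketch, with a hypothetical pawn-hash entry layout: an XOR fold over the key and all payload words serves as the checksum, computed on store and verified on the retrieved copy.

code format="cpp"
typedef unsigned long long U64;

#define PAYLOAD_WORDS 6

typedef struct {
  U64 key;                     /* pawn hash key                     */
  U64 payload[PAYLOAD_WORDS];  /* scores, pawn structure info, ...  */
  U64 checksum;                /* xor-fold of key and payload words */
} pawn_entry;

U64 fold(const pawn_entry *e) {
  U64 sum = e->key;
  for (int i = 0; i < PAYLOAD_WORDS; ++i)
    sum ^= e->payload[i];
  return sum;
}

/* after copying the entry out of the shared table, accept it only
   if the checksum matches; a torn write will most likely mismatch */
int entry_consistent(const pawn_entry *copy) {
  return fold(copy) == copy->checksum;
}
code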

SSE2
x86 and x86-64 SSE2 128-bit read/write instructions might in practice be [|atomic], but they are not guaranteed to be, even if properly aligned. If the processor implements a 16-byte store instruction internally as two 8-byte stores in the store pipeline, it is perfectly possible for another processor to "steal" the cache line between the two stores. However, Intel states that any locked instruction (either the XCHG instruction or another read-modify-write instruction with a [|LOCK prefix]) appears to execute as an indivisible and uninterruptible sequence of load(s) followed by store(s), regardless of alignment.
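
For illustration, a sketch of a 16-byte entry accessed with single SSE2 loads and stores; as stated above, such accesses may happen to be atomic in practice but are not guaranteed to be. The 16-byte alignment attribute is compiler-specific (GCC/Clang syntax shown).

code format="cpp"
#include <emmintrin.h>   /* SSE2 intrinsics */

typedef unsigned long long U64;

typedef struct {
  U64 key;
  U64 data;
} ttentry __attribute__((aligned(16)));   /* 16-byte alignment required */

void store_entry(ttentry *e, U64 key, U64 data) {
  __m128i v = _mm_set_epi64x((long long)data, (long long)key);  /* high = data, low = key */
  _mm_store_si128((__m128i*)e, v);   /* one 16-byte store, not guaranteed atomic */
}

void load_entry(const ttentry *e, ttentry *copy) {
  __m128i v = _mm_load_si128((const __m128i*)e);
  _mm_store_si128((__m128i*)copy, v);
}
code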

=Allocation=
Multiple threads inside one process can share its [|global variables] or heap. Processes require special [|API] calls to create shared memory and to pass a handle around to other processes for [|interprocess communication]. [|POSIX] provides a standardized API for using shared memory. Linux kernel builds since 2.6 offer /dev/shm as shared memory in the form of a [|RAM disk].
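
A minimal sketch using the standardized POSIX shared memory API (shm_open, ftruncate, mmap); the object name and table size are illustrative, and error handling is omitted. On Linux, the object appears under /dev/shm.

code format="cpp"
#include <stddef.h>     /* size_t          */
#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <sys/mman.h>   /* shm_open, mmap  */
#include <unistd.h>     /* ftruncate       */

typedef unsigned long long U64;
typedef struct { U64 key; U64 data; } ttentry;

ttentry* create_shared_table(size_t nentries) {
  /* named object, visible to other processes under the same name */
  int fd = shm_open("/engine_tt", O_CREAT | O_RDWR, 0600);
  ftruncate(fd, nentries * sizeof(ttentry));   /* set size of the shared region */
  return (ttentry*) mmap(NULL, nentries * sizeof(ttentry),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}
code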

=See also=
 * ABDADA
 * AVX
 * Cilk
 * Lazy SMP
 * Linux
 * Memory
 * Parallel Search
 * SMP Engines
 * SSE2
 * Transposition Table
 * Unix
 * Windows
 * XOP

=Publications=

1980 ...

 * Clyde Kruskal, [|Larry Rudolph], [|Marc Snir] (**1988**). //Efficient Synchronization on Multiprocessors with Shared Memory//. ACM TOPLAS, Vol. 10
 * Henri Bal (**1989**). //[|The shared data-object model as a paradigm for programming distributed systems]//. Ph.D. thesis, [|Vrije Universiteit]

1990 ...
 * [|Maurice Herlihy] (**1991**). //Wait-free synchronization//. [|ACM Transactions on Programming Languages and Systems], Vol. 13, No. 1, [|pdf]
 * Vincent David (**1993**). //[|Algorithmique parallèle sur les arbres de décision et raisonnement en temps contraint. Etude et application au Minimax]// = Parallel algorithms on decision trees and reasoning under time constraints. Study and application to Minimax. Ph.D. thesis, [|École nationale supérieure de l'aéronautique et de l'espace], [|Toulouse], [|France]
> **Abstract**: //The method of parallelization is based on a suppression of control between the search processes, in favor of speculative parallelism and full sharing of information achieved through a physically distributed but virtually shared memory. The contribution of our approach to real-time distributed and fault-tolerant systems is evaluated through experimental results//.
 * [|Maged M. Michael], [|Michael L. Scott] (**1995**). //Implementation of Atomic Primitives on Distributed Shared Memory Multiprocessors//. [|HPCA'95], [|pdf]
 * Paul Lu (**1997**). //Aurora: Scoped Behaviour for Per-Context Optimized Distributed Data Sharing//. 11th International Parallel Processing Symposium (IPPS)
 * John Romein, Aske Plaat, Henri Bal, Jonathan Schaeffer (**1999**). //Transposition Table Driven Work Scheduling in Distributed Search//. AAAI-99, [|pdf]

2000 ...

 * Paul Lu (**2000**). //[|Scoped Behaviour for Optimized Distributed Data Sharing]//. Ph.D. thesis, University of Toronto
 * Valavan Manohararajah (**2001**). //Parallel Alpha-Beta Search on Shared Memory Multiprocessors//. Masters Thesis, [|pdf]
 * John Romein, Henri Bal, Jonathan Schaeffer, Aske Plaat (**2002**). //A Performance Analysis of Transposition-Table-Driven Scheduling in Distributed Search//. IEEE Transactions on Parallel and Distributed Systems, Vol. 13, No. 5, pp. 447–459. [|pdf]
 * [|Maged M. Michael] (**2002**). //High Performance Dynamic Lock-Free Hash Tables and List-Based Sets//. [|IBM Thomas J. Watson Research Center], [|pdf]
 * Robert Hyatt, Tim Mann (**2002**). //[|A lock-less transposition table implementation for parallel search chess engines]//. ICGA Journal, Vol. 25, No. 1
 * [|Jiří Barnat], [|Petr Ročkai] (**2007**). //Shared Hash Tables in Parallel Model Checking//. Faculty of Informatics, [|Masaryk University], [|Brno], [|Czech Republic], PDMC 2007, Preliminary Version as [|pdf]
 * [|Vivek Sarkar] (**2008**). //Shared-Memory Parallel Programming with OpenMP//. [|Rice University], [|slides as pdf]
 * [|Jouni Leppäjärvi] (**2008**). //A pragmatic, historically oriented survey on the universality of synchronization primitives//. [|pdf]

2010 ...

 * [|Alfons Laarman], [|Jaco van de Pol], [|Michael Weber] (**2010**). //[|Boosting Multi-Core Reachability Performance with Shared Hash Tables]//. Formal Methods and Tools, [|University of Twente], [|The Netherlands], [|pdf]
 * [|John Mellor-Crummey] (**2011**). //Shared-memory Parallel Programming with Cilk//. [|Rice University], [|slides as pdf] » Cilk
 * [|Anthony Williams] (**2012**). //[|C++ Concurrency in Action: Practical Multithreading]//.

=Forum Posts=

1997 ...

 * [|Parallel searching] by Andrew Tridgell, rgcc, March 22, 1997 » KnightCap
 * [|CilkChess question for Don] by Robert Hyatt, CCC, January 31, 1999 » CilkChess

2000 ...

 * [|Re: Atomic write of 64 bits] by Frans Morsch, [|comp.lang.asm.x86], September 25, 2000

2005 ...

 * [|multithreading questions] by Martin Fierz, CCC, August 08, 2007
 * [|If making an SMP engine, do NOT use processes] by Zach Wegner, CCC, February 07, 2008
 * [|threads vs processes] by Robert Hyatt, CCC, July 16, 2008
 * [|threads vs processes again] by Robert Hyatt, CCC, August 05, 2008
 * [|SMP hashing problem] by Robert Hyatt, CCC, January 24, 2009
 * [|Interlock clusters] by Steven Edwards, CCC, January 25, 2009

2010 ...

 * [|lockless hashing] by Daniel Shawul, CCC, February 07, 2011
 * [|On parallelization] by Onno Garms, CCC, March 13, 2011
 * [|cache alignment of tt] by Daniel Shawul, CCC, March 11, 2012
 * [|Speaking of the hash table] by Ed Schroder, CCC, December 09, 2012
 * [|Lazy SMP] by Julien Marcel, CCC, December 27, 2012 » Lazy SMP
 * [|Lazy SMP, part 2] by Daniel Homan, CCC, January 12, 2013
 * [|Multi-threaded memory access] by ThinkingALot, OpenChess Forum, February 10, 2013 » Memory, Thread
 * [|Lazy SMP, part 3] by Daniel Homan, CCC, March 09, 2013
 * [|Shared hash table smp result] by Daniel Shawul, CCC, March 21, 2013
 * [|Transposition driven scheduling] by Daniel Shawul, CCC, April 04, 2013
 * [|Lazy SMP and Work Sharing] by Dan Homan, CCC, July 03, 2013 » Lazy SMP in EXChess
 * [|Lockless hash: Thank you Bob Hyatt!] by Julien Marcel, CCC, August 25, 2013
 * [|How could a compiler break the lockless hashing method?] by Rein Halbersma, CCC, December 08, 2013
 * [|Parallel Search with Transposition Table] by Daylen Yang, CCC, March 27, 2014 » Parallel Search
 * [|Two hash functions for distributed transposition table] by Daniel Shawul, CCC, December 16, 2014

2015 ...

 * [|Lazy SMP in Cheng] by Martin Sedlak, CCC, February 02, 2015 » Cheng
 * [|Trying to improve lazy smp] by Daniel José Queraltó, CCC, April 11, 2015
 * [|lazy smp questions] by Lucas Braesch, CCC, September 09, 2015 » Lazy SMP
 * [|atomic TT] by Lucas Braesch, CCC, September 13, 2015
 * [|lazy smp questions] by Marco Belli, CCC, December 21, 2015 » Lazy SMP
**2016**
 * [|NUMA 101] by Robert Hyatt, CCC, January 07, 2016 » NUMA
 * [|Lazy SMP - how it works] by Kalyankumar Ramaseshan, CCC, February 29, 2016 » Lazy SMP
 * [|lockless hashing] by Lucas Braesch, CCC, May 10, 2016
 * [|What do you do with NUMA?] by Matthew Lai, CCC, September 19, 2016 » NUMA
**2017**
 * [|Question about parallel search and race conditions] by Michael Sherwin, CCC, September 11, 2017 » Parallel Search

=External Links=

Shared Memory
include page="Shared Memory to include"

Cache
include page="Cache to include"

Concurrency and Synchronization
include page="Synchronization to include"

Distributed memory
> [|Koorde] based on [|Chord] and De Bruijn sequence
 * [|Distributed memory from Wikipedia]
 * [|Distributed hash table from Wikipedia]
 * [|Transposition-driven scheduling - Wikipedia]
 * [|The Aurora Distributed Shared Data System] by Paul Lu

Misc
 * [|Cilk from Wikipedia] » Cilk
 * [|The Cilk Project] from MIT
 * [|Fetch-and-add from Wikipedia]
 * [|Intel Cilk Plus from Wikipedia]
 * [|XMTC from Wikipedia]
 * Ian Carr with [|Nucleus] - Bedrock Deadlock, Solar Plexus 1971, feat. Karl Jenkins, [|YouTube] Video

=References=

=What links here?=
include page="Shared Hash Table" component="backlinks" limit="80"