
Uncertainty Cut-Offs,
a safe pruning technique performed in PVS at the siblings of a PV-node. These siblings are Cut-nodes, expected to fail high and therefore not to improve the score at the parent PV-node. Moreover, the fail-high occurs at the first move in more than 90% of cases [1]. Sometimes expectations change, however, and an expected Cut-node turns out to be an All-node, resulting in a re-search.
Uncertainty principle [2]

The Idea

The basic idea of Uncertainty Cut-Offs is not to waste time on late moves after the first and some further promising moves have failed to produce a fail-high at an expected Cut-node, but instead to prematurely return an uncertain score of an All-node, indicated by a boolean flag to the calling PV-node. Uncertainty Cut-Offs were investigated by Yngvi Björnsson and Andreas Junghanns during the mid 90s at the University of Alberta, and implemented in their chess program The Turk. The contribution of Yngvi Björnsson, Tony Marsland, Jonathan Schaeffer and Andreas Junghanns was introduced by Björnsson at the Advances in Computer Chess 8 conference in 1996 [3], and published in the conference proceedings and the ICCA Journal.


A new domain-independent pruning method is described that guarantees returning a correct game value. Even though alpha-beta-based chess programs are already searching close to the minimal tree, there is still scope for improvement. Our idea hinges on the recognition that the game tree has two types of node, those where cut-offs occur and those that must be fully explored. In the latter case one of the moves is best and yields the subtree value, thus for the remaining alternatives one is simply proving their inferiority. This offers the opportunity for pruning, while introducing some potential for uncertainty in the search process. There are two cases of interest. One considers the immediate alternatives to the Principal Variation itself, and here a safe uncertainty cut-off is presented; the second is a proposal for an unsafe generalization, one which offers substantial search reduction but with the potential for control of the error probability. Experiments with the new pruning method show some potential for savings in the search.

Pseudo Code

slightly modified C-like syntax [5]
TreeValue pvs( node, α, β, depth, ply ) {
   if ( depth == 0 ) return evaluate( node );
   next = firstSuccessor( node );
   best = -pvs( next, -β, -α, depth-1, ply+1 );
   next = nextSibling( next );
   while ( next ) {
      if ( best >= β ) return best;
      α = max( α, best );
      {merit, unsure} = -nws( next, -α, depth-1, ply+1, PV );
      if ( unsure )
         merit = -pvs( next, -β, -α, depth-1, ply+1 );
      else if ( merit > α && merit < β )
         merit = -pvs( next, -β, -merit, depth-1, ply+1 );
      best = max( merit, best );
      next = nextSibling( next );
   }
   return best;
}
{TreeValue, Boolean} nws( node, β, depth, ply, parent ) {
   if ( depth == 0 ) return {evaluate( node ), false};
   next = firstSuccessor( node );
   type = parent == CUT ? ALL : CUT;
   uncertain = false;
   estimate = -oo;
   while ( next ) {
      {merit, unsure} = -nws( next, 1-β, depth-1, ply+1, type );
      if ( unsure ) uncertain = true;
      estimate = max( merit, estimate );
      if ( merit >= β )
         return {estimate, unsure}; /* CUT node */
      if ( uncertaintyCutoff( estimate, ply, parent ) )
         return {estimate, true};   /* premature ALL node */
      next = nextSibling( next );
   }
   return {estimate, uncertain};    /* ALL node */
}
While the above PVS framework applies the backup of uncertainty, the particular cut-off constraints as implemented in uncertaintyCutoff below do not need to back up uncertainty information, because the pruning is only applied if it is guaranteed that the sub-branch will be re-searched.
Boolean uncertaintyCutoff( best, ply, parent ) {
   return ( parent == PV
         && searchStack[ply].movesTried > moveLimit
         && -best > searchStack[ply-1].α
         && -best < searchStack[ply-1].β
         && ... );
}


Experiments by Björnsson and Junghanns indicate that, even though the savings are small and this pruning technique does not make much difference in most cases, it can significantly decrease the search effort when move ordering heuristics fail.

  1. ^ Aske Plaat, Jonathan Schaeffer, Wim Pijls, Arie de Bruin (1996). Exploiting Graph Properties of Game Trees. 13th National Conference on Artificial Intelligence (AAAI-96), Vol. 1, pp. 234-239, ISBN 978-0-262-51091-2. pdf
  2. ^ The Gaussian wave function Ψ of an initially very localized free particle, by Thierry Dugnolle, Uncertainty principle from Wikipedia
  3. ^ ICCAJ v.19 n.2 now in North America by Steven Edwards, rgcc, July 03, 1996
  4. ^ Re: ICCAJ v.19 n.2 now in North America by Dennis Breuker, rgcc, July 09, 1996
  5. ^ Yngvi Björnsson, Tony Marsland, Jonathan Schaeffer, Andreas Junghanns (1997). Searching with Uncertainty Cut-offs. ICCA Journal, Vol. 20, No. 1, pdf
