Principal Variation Search (PVS) is an enhancement to Alpha-Beta, based on null- (or zero-) window searches of non-PV nodes, to prove that a move is worse than an already safe score from the principal variation.

The Idea

In most nodes we need just a bound, proving that a move is unacceptable for us or for the opponent, not the exact score. The exact score is needed only in the so-called principal variation: a sequence of moves acceptable for both players (i.e. not causing a beta-cutoff anywhere along the path), which is expected to propagate to the root. If a lower-depth search has already established such a sequence, a series of moves whose value is greater than alpha but lower than beta throughout the entire branch, chances are that deviating from it will do us no good. So in a PV-node only the first move (the one deemed best by the previous iteration of the iterative deepening framework) is searched with the full window, in order to establish the expected node value.

When we already have a PV-move (defined as the move that raised alpha in a PV-node), we assume we are going to stick with it. To confirm that belief, a null- (or zero-) window search around alpha is conducted for each later move, testing whether it can be better. If a move turns out better with respect to the null window, but not provably so with respect to the full window, a re-search with the full normal window is required. Since null-window searches are cheaper, with good move ordering we expect to save about 10% of the search effort.
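The point of a null window can be illustrated with a toy alpha-beta routine in Python (a hypothetical sketch, not engine code): a search with the window (alpha, alpha+1) cannot return an exact score, but it does prove on which side of alpha the true value lies.

```python
# Minimal fail-hard alpha-beta on a toy game tree: a leaf is an int
# (score for the side to move), an inner node is a list of children.
def alphabeta(node, alpha, beta):
    if isinstance(node, int):
        return max(alpha, min(beta, node))   # clamp the leaf into the window
    for child in node:
        score = -alphabeta(child, -beta, -alpha)
        if score >= beta:
            return beta                      # fail-hard beta-cutoff
        if score > alpha:
            alpha = score
    return alpha

def exceeds(node, bound):
    """Null-window search: True iff the exact value of node > bound."""
    return alphabeta(node, bound, bound + 1) > bound

tree = [[3, 17], [2, 12]]                    # exact negamax value is 3
assert alphabeta(tree, -100, 100) == 3
assert exceeds(tree, 2) and not exceeds(tree, 3)
```

The null-window call visits a smaller tree than a full-window search, which is where the savings mentioned above come from.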

Bruce Moreland's PVS implementation waits until a move is found that improves alpha, and only then searches every following move with a zero window around alpha^{[2]}. The alpha improvement usually occurs at the first move, and always does at the leftmost nodes (assuming left-to-right traversal) with the widest-open alpha-beta window of ±∞. In re-searches or with aspiration windows, the first moves may occasionally fail to improve alpha. As pointed out by Edmund Moshammer, Gian-Carlo Pascutto, Robert Hyatt and Vincent Diepeveen^{[3]}, it is recommended to search only the first move with an open window, and then every other move with a zero window. A further improvement (similar to the one known from the NegaScout algorithm) is possible: since there is not much to be gained in the last two plies of the normal search, one might disable PVS there, though programs respond differently to that change.
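Moreland's variant can be sketched in Python on a toy game tree (a hypothetical leaf-as-int encoding, not his code): the flag enabling zero-window searches flips only when a move actually improves alpha, rather than unconditionally after the first move.

```python
def pv_search_moreland(node, alpha, beta):
    # Toy tree: a leaf is an int (score for the side to move),
    # an inner node is a list of children. Fail-hard convention.
    if isinstance(node, int):
        return max(alpha, min(beta, node))   # clamped leaf, stands in for quiesce
    raised_alpha = False
    for child in node:
        if not raised_alpha:
            score = -pv_search_moreland(child, -beta, -alpha)
        else:
            score = -pv_search_moreland(child, -alpha - 1, -alpha)
            if score > alpha:                # null window failed high
                score = -pv_search_moreland(child, -beta, -alpha)  # re-search
        if score >= beta:
            return beta                      # fail-hard beta-cutoff
        if score > alpha:
            alpha = score
            raised_alpha = True              # flag flips only when alpha improves

    return alpha

assert pv_search_moreland([[3, 17], [2, 12]], -10_000, 10_000) == 3
```

Until alpha has been raised, every move is searched with the full window, exactly as in plain alpha-beta.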

History

PVS was introduced by Tony Marsland and Murray Campbell in 1982^{[4]} as a name for Finkel's and Fishburn's routine Palphabeta^{[5]}^{[6]}, called Calphabeta in Fishburn's 1981 thesis^{[7]}, which in turn is similar to Judea Pearl's Scout^{[8]}^{[9]}:

An interesting implementation of the alpha-beta algorithm treats the first variation in a special way. The method was originally called Palphabeta [FISH80] and then renamed Calphabeta [FISH81], but will be referred to here as principal variation search or PVS for short.

Despite the publications, PVS was already used in 1978, as mentioned by Robert Hyatt^{[10]}:

I first used PVS in 1978, quite by accident. Murray Campbell and I were discussing this idea at the ACM event in Washington, DC. We were running on a Univac, and I suggested that we dial up my local Vax box and make the changes and see how it works. It looked pretty good, with the only odd thing being fail highs on very minor score changes, which was OK. The next round, our Univac developed a memory problem and I switched back to the vax and had a few exciting moments when we came out with a Nxf7!! sort of output, only to see the score rise by 2 or 3 millipawns. Even in 1978 we just searched the first move with normal alpha/beta and then went into the null-window search, just as the code exactly does in Crafty...

John Philip Fishburn in a note, September 2010:

I was thinking about what goes wrong if you start the entire search with a too-narrow window. If the beta value is too low, then one of the children of the root might fail high, and you wouldn't know the proper windows to give to the subsequent children of the root. Wait a minute... what if there aren't any subsequent children, i.e. what if the child that failed high was the last child of the root? Then you don't care about the subsequent windows, and in fact you've just proved that the last child is the best move. So when you're on the last child of the root, go all the way by bringing beta down to alpha+1. I was trying to get this published starting in Aug. 1979, and it finally appeared as "An optimization of alpha-beta search" in SIGART Bulletin, Issue 72 (July 1980)^{[11]}. After that came various generalizations where the null window is used generally in the search, also the fail-soft algorithm. I was somewhat disappointed in the speedup (or lack thereof) that I measured on checkers lookahead trees. However when I went to work at Bell Labs in 1981, Ken Thompson told me that he had read the SIGART paper, and had sped up Belle by 1.5x with null windows.

and subsequently some details about Belle's PVS implementation ...

The PVS algorithm in Belle did not do a second search at the root until a second fail high occurred. I don’t know whether or not this idea appears in the literature or not. I would hope it does, but I haven’t been following the literature for about 25 years. In other words, Belle is cleverly going for broke: it knows it’s got a high failure, which is the best move so far, but as long as it doesn’t get a second high failure, the first high failure remains the best move, and it can still avoid doing any more full searches.

PVS and NegaScout

Most PVS implementations are similar to Reinefeld's NegaScout^{[12]}^{[13]}, and are used by most of today's chess programs. Their efficiency relies on the accuracy of the move ordering. Typically, modern chess programs find fail-highs on the first move around 90% of the time. This observation can be used to narrow the window for searches of the moves after the first, because there is a high probability that their scores will be lower than the score of the first move.

Reinefeld's original implementation introduces one additional variable on the stack (only b, since after a = alpha, alpha is no longer needed), for a slightly simpler control structure than PVS. It therefore sets a new null window at the end of the loop (b = a + 1), but has to consider the move count in the re-search condition. His implementation trusts the null-window score even if the re-search does not confirm the alpha increase, possibly due to search instability. While re-searching, it uses the narrow window {score, beta}, whereas other implementations, dealing with search instability, re-search with {alpha, beta}. In practice, due to Quiescence Search and fail-soft implementations of PVS, the two algorithms are essentially equivalent: they expand the same search tree^{[14]}^{[15]}.
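The single-routine formulation can be sketched in Python on a toy game tree (a hypothetical leaf-as-int encoding, not Reinefeld's original code). Note the extra stack variable b, the move-count check in the re-search condition, and the {score, beta} re-search window:

```python
def negascout(node, alpha, beta):
    # Toy tree: a leaf is an int (score for the side to move),
    # an inner node is a list of child nodes.
    if isinstance(node, int):
        return node
    a, b = alpha, beta                       # b is the only additional variable
    for i, child in enumerate(node):
        t = -negascout(child, -b, -a)
        # The re-search condition has to consider the move count (i > 0):
        if i > 0 and a < t < beta:
            t = -negascout(child, -beta, -t)  # re-search with {score, beta}
        if t > a:
            a = t
        if a >= beta:
            return a                         # beta-cutoff
        b = a + 1                            # set the new null window at loop end
    return a
```

With a deterministic evaluation and no transposition table this returns the same value as plain negamax, which is the equivalence claimed above.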

Guido Schimmels

Guido Schimmels in a CCC post on the difference of PVS vs. NegaScout^{[16]}:

The difference is how they handle re-searches: PVS passes alpha/beta while NegaScout passes the value returned by the null-window search instead of alpha. But then you can get a fail-low on the re-search due to search anomalies. If that happens, NegaScout returns the value from the first search. That means you will have a crippled PV. Then there is a refinement Reinefeld suggests, which is to omit the re-search at the last two plies (depth > 1) - but that won't work in a real program because of search extensions. NegaScout is slightly an ivory-tower variant of PVS (IMHO).

PVS:

value = PVS(-(alpha+1), -alpha);
if ( value > alpha && value < beta ) {
   value = PVS(-beta, -alpha);
}

NegaScout:

value = NegaScout(-(alpha+1), -alpha);
if ( value > alpha && value < beta && depth > 1 ) {
   value2 = NegaScout(-beta, -value);
   value = max(value, value2);
}

Yngvi Björnsson

Quote by Yngvi Björnsson from CCC, January 05, 2000^{[17]}:

Search-wise PVS and Negascout are identical (except the deep cutoffs on the PV you mention); they are just formulated differently. In Negascout the same routine is used for searching both the PV and the rest of the tree, whereas PVS is typically formulated as two routines: PVS (for searching the PV) and NWS (for the null-window searches). Negascout and PVS were developed about the same time in the early '80s (82-83), but independently. I guess that's part of the reason we know them by different names. Personally, I've always found the PVS/NWS formulation the most intuitive; it's easier to understand what's really going on.

Dennis Breuker

Quote by Dennis Breuker from CCC, July 28, 2004^{[18]}:

Q: What's the difference between negascout and PVS? They look like the same algorithm to me.

They are identical, see note 15 on page 22 of my thesis^{[19]}: We note that the version of principal-variation search as mentioned by Marsland (1986)^{[20]} is identical to the version of negascout as mentioned by Reinefeld (1989)^{[21]}. We use the 1989 reference instead of 1983^{[22]}, which was the first source of this algorithm, since the algorithm described in Reinefeld (1983) contains minor errors.

Dennis

Pseudo Code

This demonstrates PVS in a fail-hard framework, where alpha and beta are hard bounds of the returned score.

int pvSearch( int alpha, int beta, int depth ) {
   if( depth == 0 ) return quiesce( alpha, beta );
   bool bSearchPv = true;
   for ( all moves ) {
      make
      if ( bSearchPv ) {
         score = -pvSearch(-beta, -alpha, depth - 1);
      } else {
         score = -pvSearch(-alpha - 1, -alpha, depth - 1);
         if ( score > alpha ) // in fail-soft ... && score < beta ) is common
            score = -pvSearch(-beta, -alpha, depth - 1); // re-search
      }
      unmake
      if( score >= beta )
         return beta;   // fail-hard beta-cutoff
      if( score > alpha ) {
         alpha = score; // alpha acts like max in MiniMax
         bSearchPv = false;  // *1)
      }
   }
   return alpha; // fail-hard
}

1) it is recommended to set bSearchPv = false outside the score > alpha condition, i.e. unconditionally after the first move.
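The pseudocode above can be turned into a runnable Python sketch on a toy game tree (a leaf is an int score for the side to move, an inner node a list of children; make/unmake becomes plain recursion, and quiescence is reduced to the clamped leaf score). This is an illustration under those assumptions, not engine code:

```python
INF = 10_000

def pv_search(node, alpha, beta):
    if isinstance(node, int):
        return max(alpha, min(beta, node))  # stand-in for quiesce, fail-hard
    search_pv = True                        # bSearchPv in the pseudocode
    for child in node:
        if search_pv:
            score = -pv_search(child, -beta, -alpha)
        else:
            score = -pv_search(child, -alpha - 1, -alpha)
            if score > alpha:               # null window failed high
                score = -pv_search(child, -beta, -alpha)  # re-search
        if score >= beta:
            return beta                     # fail-hard beta-cutoff
        if score > alpha:
            alpha = score                   # alpha acts like max in MiniMax
        search_pv = False                   # set outside score > alpha, see *1)
    return alpha                            # fail-hard

def alphabeta(node, alpha, beta):
    # plain fail-hard alpha-beta for comparison
    if isinstance(node, int):
        return max(alpha, min(beta, node))
    for child in node:
        score = -alphabeta(child, -beta, -alpha)
        if score >= beta:
            return beta
        if score > alpha:
            alpha = score
    return alpha
```

For any such tree and any window both routines return the same value; the savings of PVS come from good move ordering in real searches, where the null-window searches visit smaller trees.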

PVS + ZWS

Often, programmers split PVS into a pure PV-node search, and a separate, more compact scout routine for the null-window searches.

int pvSearch( int alpha, int beta, int depth ) {
   if( depth == 0 ) return quiesce( alpha, beta );
   bool bSearchPv = true;
   for ( all moves ) {
      make
      if ( bSearchPv ) {
         score = -pvSearch(-beta, -alpha, depth - 1);
      } else {
         score = -zwSearch(-alpha, depth - 1);
         if ( score > alpha ) // in fail-soft ... && score < beta ) is common
            score = -pvSearch(-beta, -alpha, depth - 1); // re-search
      }
      unmake
      if( score >= beta )
         return beta;   // fail-hard beta-cutoff
      if( score > alpha ) {
         alpha = score; // alpha acts like max in MiniMax
         bSearchPv = false;  // *1)
      }
   }
   return alpha; // fail-hard
}

// fail-hard zero window search, returns either beta-1 or beta
int zwSearch( int beta, int depth ) {
   // alpha == beta - 1
   // this is either a cut- or all-node
   if( depth == 0 ) return quiesce( beta - 1, beta );
   for ( all moves ) {
      make
      score = -zwSearch(1 - beta, depth - 1);
      unmake
      if( score >= beta )
         return beta;   // fail-hard beta-cutoff
   }
   return beta - 1; // fail-hard, return alpha
}

1) it is recommended to set bSearchPv = false outside the score > alpha condition, i.e. unconditionally after the first move.

PVS and Aspiration

When implementing PVS together with aspiration windows, one must be aware that in this case even a normal window search might fail, leaving the program with no move and no PV. (Actually this is the reason for writing "When we already have a PV-move" rather than "searching later moves".) A state-of-the-art fail-soft PVS implementation, called without aspiration, was posted by Vincent Diepeveen inside the mentioned CCC thread^{[23]}.
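A minimal Python sketch of such an aspiration driver (the search callback and the widening policy are illustrative assumptions, not from a particular engine): on a fail high or fail low the corresponding bound is relaxed and the root is re-searched, so the driver only terminates with a score, and hence a PV, inside the window.

```python
INF = 10_000

def aspiration(search_root, prev_score, delta=25):
    # search_root(alpha, beta) is assumed to be a fail-hard root search,
    # returning a score clamped into [alpha, beta].
    alpha, beta = prev_score - delta, prev_score + delta
    while True:
        score = search_root(alpha, beta)
        if score <= alpha:        # fail low: relax the lower bound, re-search
            alpha = -INF
        elif score >= beta:       # fail high: relax the upper bound, re-search
            beta = INF
        else:
            return score          # exact within (alpha, beta), PV is valid

def fake_root(alpha, beta):
    # toy stand-in for a real fail-hard root search; true score is 137
    return max(alpha, min(beta, 137))

assert aspiration(fake_root, 0) == 137
assert aspiration(fake_root, 140) == 137
```

Real engines usually widen the window gradually rather than jumping straight to ±INF, but the re-search logic is the same.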





References

- Tony Marsland, Murray Campbell (1982). Parallel Search of Strongly Ordered Game Trees. ACM Computing Surveys, Vol. 14, No. 4
- Raphael Finkel, John Philip Fishburn (1980). Parallel Alpha-Beta Search on Arachne. IEEE International Conference on Parallel Processing, pp. 235-243
- John Philip Fishburn (1981). Analysis of Speedup in Distributed Algorithms. Ph.D. Thesis, University of Wisconsin-Madison, Calphabeta at page 167
- Judea Pearl (1980). Asymptotic Properties of Minimax Trees and Game-Searching Procedures. Artificial Intelligence, Vol. 14, No. 2
- Judea Pearl (1980). Scout: A Simple Game-Searching Algorithm with Proven Optimal Properties. Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford
- John Philip Fishburn (1980). An optimization of alpha-beta search. SIGART Bulletin, Issue 72
- Tony Marsland (1983). Relative Efficiency of Alpha-beta Implementations. Proceedings of the 8th International Joint Conference on Artificial Intelligence, pp. 763-766
- Alexander Reinefeld (1983). An Improvement to the Scout Tree-Search Algorithm. ICCA Journal, Vol. 6, No. 4
- Alexander Reinefeld (1989). Spielbaum-Suchverfahren. Informatik-Fachberichte 200, Springer
- Yngvi Björnsson (2002). Selective Depth-First Game-Tree Search. Ph.D. thesis, University of Alberta
- Yngvi Björnsson, Tony Marsland (2003). Enhanced forward pruning. Accepted for publication
- Dennis Breuker (1998). Memory versus Search in Games. Ph.D. thesis, Maastricht University
- Tony Marsland (1986). A Review of Game-Tree Pruning. ICCA Journal, Vol. 9, No. 1
