
Home * Engines * Papa

# Papa

Papa, an early computer chess program written in the 70s by George Rajna and B. Almasi, affiliated with the Hungarian Academy of Sciences. Papa applies the concept of entropy based on own and opponent mobility. It participated in the First World Computer Chess Championship 1974 in Stockholm, unfortunately losing all three of its games due to tactical blunders. Based on preliminary results or expectations, Papa was seeded third, ahead of the later winner Kaissa, seeded fifth ^{[1]}.

## Table of Contents

## Entropy

## Rajna

In 2012, George Rajna re-published a short paper on entropy in chess ^{[2]}, which had already appeared in The World Computer-Chess Championship by Hayes and Levy ^{[3]}:

The basic theory on which one chess program can be constructed is that there exists a general characteristic of the game of chess, namely the concept of entropy. This concept has been employed in physics for a long time. In the case of a gas, it is the logarithm of the number of those microscopic states compatible with the macroscopic parameters of the gas.

What does this mean in terms of chess? A common characteristic of every piece is that it could move to certain squares, including by capture. In any given position, therefore, the pieces by the rules of the game possess certain states, only one of which will be realized on the next move. The difference of the logarithm of the numbers of such states for Black and White respectively is the "entropy of the position". The task of the computer is then to increase this value for its own benefit.

...

Entropy is a principle of statistical physics and therefore is only applicable in statistical contexts. The number of microstates of a confined gas is very large and therefore the statistical approach is valid. In chess, however, the number of pieces, a macroscopic parameter, is very small and therefore in this context the "value" of a position cannot be an exact function of entropy. For example, it is possible to checkmate with a total force of a single pawn despite the fact that the opponent has many pieces and various positions available.

## Marsland

Tony Marsland mentioned Papa and other programs participating in the WCCC 1974 in his handwritten notes on the Hayes and Levy book ^{[4]}:

Freedom and Papa both use mobility as their primary term in their evaluation functions. As with Wita, both use the ratio of computer's moves / opponent moves. Papa and Wita also multiply by the ratio of the squares controlled and Papa goes one step further and takes the logarithm of this product to form the "entropy" of the position. The true merit of this entropy over the product ratio was not made clear, but it does ensure that in extreme situations the evaluation remains more closely bounded.

## The Merit

The merit of this entropy over the product ratio is probably that it turns the product into a sum and the quotient into a difference, resulting in Negamax compatible values that are symmetric around zero with respect to the side to move ^{[5]}.

## Binary Logarithm

The definition of the amount of self-information and information entropy involves the binary logarithm. On x86 or x86-64, log2 (lb) may be computed using MMX or SSE2 instructions ^{[6]} ^{[7]}, with vectors of two or four floats ^{[8]}.

## Selected Games

WCCC 1974, round 3, Papa - Ribbit ^{[9]}:

## See also

## External Links

## Papa

## Entropy

## References

George Rajna (1976, 2012). Information – Entropy theory of Artificial Intelligence. pdf
Hayes, Levy (1976). The world computer chess championship, Stockholm 1974. University Press (Edinburgh), ISBN 0852242859
