Donald Michie (1976). An Advice-Taking System for Computer Chess. Computer Bulletin, Ser. 2, Vol. 10, pp. 12-14. ISSN 0010-4531.
Jack Good (1977). Dynamic Probability, Computer Chess, and the Measurement of Knowledge. Machine Intelligence, Vol. 8 (eds. E. Elcock and Donald Michie), pp. 139-150. Ellis Horwood, Chichester.
Donald Michie, Ivan Bratko (1978). Advice Table Representations of Chess End-Game Knowledge. Proceedings 3rd AISB/GI Conference, pp. 194-200.
David Wilkins (1979). Using Patterns and Plans to Solve Problems and Control Search. Ph.D. thesis, Computer Science Dept., Stanford University, AI Lab Memo AIM-329.
Hans Berliner (1982). Search vs. Knowledge: An Analysis from the Domain of Games. Technical Report, Department of Computer Science, Carnegie Mellon University.
Alen Shapiro (1987). Structured Induction in Expert Systems. Turing Institute Press in association with Addison-Wesley, Wokingham, UK. ISBN 0-201-17813-3. amazon [14]
Christian Posthoff, Michael Schlosser, Jens Zeidler (1993). Search vs. Knowledge? - Search and Knowledge! Proceedings 3rd KADS Meeting, Munich, March 8-9, 1993, Siemens AG, Corporate Research and Development, pp. 305-326.
Steven Walczak, Douglas D. Dankel II (1993). Acquiring Tactical and Strategic Knowledge with a Generalized Method for Chunking of Game Pieces. International Journal of Intelligent Systems, Vol. 8, No. 2, pp. 249-270.
Johannes Fürnkranz (1997). Knowledge Discovery in Chess Databases: A Research Proposal. Technical Report OEFAI-TR-97-33, Austrian Research Institute for Artificial Intelligence, zipped ps, pdf [15]
Inside a chess program, knowledge is manifested either as procedural or as declarative knowledge. The declarative a priori knowledge about the rules of chess is embedded in the move generator in conjunction with check detection. Further declarative knowledge is coded as rules of thumb in the evaluation function, as persistent perfect knowledge from retrograde analysis, or as empirical knowledge, mostly from human experience, used to retrieve moves from a hand-crafted opening book. Procedural knowledge is applied through Learning, as well as to back up declarative knowledge through Search. The Search versus Knowledge trade-off in computer chess and games, however, refers to heuristic or perfect knowledge.
Table of Contents
Search versus Knowledge
Curves A and B represent fixed performance levels, say 2000 for B and 2200 for A. The curves indicate that there are many ways to achieve a given level of performance, either with little knowledge and a lot of search, or vice versa. If a program has achieved point P on curve B, it may improve either by search alone (point S) or by knowledge alone (point K).
In Practice
Andreas Junghanns and Jonathan Schaeffer in Search versus knowledge in game-playing programs revisited [3]:
The difficulty lies in quantifying the knowledge axis. Perfect knowledge assumes an oracle, which for most games we do not have. However, we can approximate an oracle by using a high-quality, game-playing program that performs deep searches. Although not perfect, it is the best approximation available. Using this, how can we measure the quality of knowledge in the program?
A heuristic evaluation function, as judged by an oracle, can be viewed as a combination of two things: oracle knowledge and noise. The oracle knowledge is beneficial and improves the program's play. The noise, on the other hand, represents the inaccuracies in the program's knowledge. It can be introduced by several things, including knowledge that is missing, over- or under-valued, and/or irrelevant. As the noise level increases, the beneficial contribution of the knowledge is overshadowed.
By definition, an oracle has no noise. We can measure the quality of the heuristic evaluation in a program by the amount of noise added to it. To measure this, we add a random number to each leaf node evaluation ...
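The experiment can be illustrated with a toy search. The sketch below is a stand-in for a real engine, under assumed names (negamax, best_move, a hand-made two-ply tree): it runs the same fixed-depth search once with a clean evaluation and once with a uniform random number added to every leaf evaluation, and measures how often the noisy search still agrees with the noise-free choice.

```python
import random

def negamax(node, depth, noise):
    """Plain negamax; `noise` is the half-width of the uniform random
    error injected into every leaf evaluation."""
    children = node.get("children")
    if depth == 0 or not children:
        return node["score"] + random.uniform(-noise, noise)
    return max(-negamax(c, depth - 1, noise) for c in children)

def best_move(root, depth, noise=0.0):
    """Choose the root move with the highest negamax score."""
    return max(root["children"],
               key=lambda c: -negamax(c, depth - 1, noise))

def leaf(score):
    return {"score": score}

# Leaf scores are from the point of view of the side to move at the leaf,
# so the value of a root move is the minimum of its leaf scores.
root = {"children": [
    {"name": "a", "children": [leaf(5), leaf(9)]},   # guarantees at least 5
    {"name": "b", "children": [leaf(2), leaf(8)]},   # guarantees only 2
]}

def agreement_with_oracle(noise, trials=200):
    """Fraction of noisy searches that still pick the noise-free move."""
    oracle = best_move(root, 2)["name"]
    return sum(best_move(root, 2, noise)["name"] == oracle
               for _ in range(trials)) / trials

random.seed(42)
print(agreement_with_oracle(0.0))   # noise-free search always agrees
print(agreement_with_oracle(50.0))  # heavy noise drowns out the knowledge
```

With zero noise the search always picks move a; as the noise half-width grows past the 3-point margin between the moves, the beneficial knowledge is increasingly overshadowed, exactly as the quoted passage describes.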
HiTech versus LoTech
At the Advances in Computer Chess 5 conference 1987, Hans Berliner et al. introduced an interesting experiment. HiTech, with its sophisticated evaluation, competed against LoTech, almost the same program but with a rudimentary evaluation, at different search depths [4]. According to Peter Kouwenhoven [5], Berliner's conclusion was that more was to be won by increasing HiTech's knowledge at constant speed than by further increasing its speed while keeping knowledge constant. The abstract from the extended 1990 paper [6]:
Chess programs can differ in depth of search or in the evaluation function applied to leaf nodes, or both. Over the past 10 years, the notion that the principal way to strengthen a chess program is to improve its depth of search has held sway. Improving depth of search undoubtedly does improve a program's strength. However, projections of potential gain have time and again been found to overestimate the actual gain.
We examine the notion that it is possible to project the playing strength of chess programs by having different versions of the same program (differing only in depth of search) play each other. Our data indicates that once a depth of “tactical sufficiency” is reached, a knowledgeable program can beat a significantly less knowledgeable one almost all of the time when both are searching to the same depth. This suggests that once a certain knowledge gap has been opened up, it cannot be overcome by small increments in searching depth. The conclusion from this work is that extending the depth of search without increasing the present level of knowledge will not in any foreseeable time lead to World Championship level chess. The approach of increasing knowledge has been taken in the HiTech chess machine.
Ed Schröder's Conclusion
Ed Schröder concluded HiTech's extra knowledge was worth just one ply in the computer vs computer arena [7][8], and further elaborated on tactical evaluation knowledge, helpful in the early Rebel searching 5-7 plies, versus Rebel searching 13-15 plies on 2001 hardware [9][10]:
Some specific chess knowledge has become outdated through the years due to the speed of today's computers. An example: in the early days of computer chess, say the period 1985-1989, the hardware I had was a 6502 running at 5 MHz. Rebel at that time could only search 5-7 plies at tournament time control. Such a low depth guarantees you one thing: horizon effects all over, thus losing the game.
To escape from the horizon effect all kinds of tricks were invented: chess knowledge about dangerous pins, knight forks, double attacks and overloading of pieces, rewarding those aspects in eval. Complicated and processor-time-consuming software it was (15-20% less performance), but it did the trick of escaping from the horizon effect in a reasonable way.
Today we run chess programs on 1500 MHz machines, and instead of 5-7 plies Rebel now gets 13-15 plies in the middle game; the horizon effect, which was a major problem at 5 MHz, has slowly faded away.
So I wondered: what if I throw that complicated "anti-horizon" code out of Rebel, is it still needed? I tried it and found that Rebel played as well without the "anti-horizon" code as with it. In other words, the net gain was a "free" speed gain of 15-20%, thus an improvement.
One aspect of chess programming is that your program is in a constant state of change due to the state of the art of currently available hardware. I am sure that for a Rebel at 10 GHz several parts of Rebel would need a face-lift to get the maximum out of the new speed monster.
Search versus Evaluation
Mark Uniacke in a reply to Ed Schröder, on Search or Evaluation, mentioning the 1st Computer Olympiad, Null Move Pruning, Hiarcs, Woodpusher, E6P, Rebel, Fritz ... [11]:
I am not saying search is the only reason, but that search is more significant in the Elo jump than other factors. Thanks for the interesting history, which in general I agree with, but I should add for clarification...
In 1989 I was exposed to the null move at HIARCS' first official computer tournament, the Olympiad in London. John Hamlen had written his program Woodpusher as part of his M.Sc. project investigating the null move, and although Woodpusher did not do well in that tournament I had the good fortune to discuss the null move with John and to read his project, which interested me very much. John and I had many discussions on computer chess over the following months.
That Olympiad probably had an impact on you too and not only because Rebel won Gold above Mephisto X and Fidelity X for the first time but because there was a program there running quite fast (at that time in history) called E6P (so named because it could reach 6 ply full width!) running on an Acorn ARM based machine. Jan was operating Rebel at the tournament as you know and I noticed Jan take a keen interest in this new machine.
Frans was using null move in Fritz 2 in Madrid 1992 (I am not sure about Fritz 1). I am sure you remember that tournament fondly. I remember at Madrid having a discussion with a few programmers, including Richard, about what on earth Fritz 2 was doing to reach the depths it was attaining. This seed of interest made me investigate search enhancements carefully after Madrid, and it was then that I began re-investigating the null move. It was not long before HIARCS began using null moves too, and by Christmas 1992 I had a working implementation which gave a big jump over Hiarcs 1, which had only been released shortly before. I had much fun playing my Hiarcs 1.n against Mephisto Risc 1 in late 1992 and early 1993, and that sort of closes the circle on the E6P story. Even so, the null move idea still evolved, and my implementation today is much better than it was initially; so it can be for other ideas. We can all profit from these techniques, but some gain more than others if they find new ways to exploit them.
Rebel 7/8/9 and Hiarcs 5/6/7 battled hard in those years and my big jump to the top of the rating lists happened partly because of a search change to Hiarcs 5/6 which led at one stage to a 71 Elo lead on the SSDF. I recall Rebel and Hiarcs exchanging the lead on the SSDF list a number of times in those years. So we agree the impact search has made on computer chess progress is clear. Now we diverge on what might be the reason for new progress in the field.
Clearly we have good search enhancements on cut nodes and all nodes, but that does not mean they cannot be improved and enhanced, or in fact that another technique might not prove to be superior or complementary. Meanwhile there can be no doubt that progress is made positionally irrespective of search, but I do not believe it can be the reason for a breakthrough jump in chess strength; rather, it gives a steady climb.
But wait!
Maybe there is a middle way, a third avenue of improvement which falls somewhere between the two pillars of search and evaluation, and that is search intelligence: the ability of the eval to have a more significant impact on the search than has traditionally been the case. It is something both of us have practiced in our programs for years but maybe not properly exploited. I think this area has been relatively unexplored and offers interesting potential.
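The null move idea that runs through this exchange can be sketched in a few lines. Below is a minimal, chess-agnostic version of standard null move pruning inside fail-soft alpha-beta, over a toy game model (an integer score that each move swings by 1 or 2 points, where passing simply hands the negated score to the opponent). The game model, the reduction R = 2, and the eval-above-beta guard are illustrative assumptions, not the exact scheme Fritz or Hiarcs used.

```python
INF = 10 ** 9
R = 2                      # depth reduction for the null move search
SWINGS = (-2, -1, 1, 2)    # the "moves": each swings the score a little

def make_move(pos, swing):
    # After a move the score is seen from the opponent's side, hence negated.
    return -(pos + swing)

def evaluate(pos):
    # Static evaluation from the side to move's point of view.
    return pos

def search(pos, depth, alpha=-INF, beta=INF, allow_null=True):
    """Fail-soft alpha-beta with null move pruning."""
    if depth == 0:
        return evaluate(pos)
    # Null move: give the opponent a free move. If a reduced-depth,
    # zero-window search still fails high, the real position would
    # almost surely fail high too, so cut off without trying any move.
    if allow_null and depth > R and evaluate(pos) >= beta:
        null_score = -search(-pos, depth - 1 - R, -beta, -beta + 1, False)
        if null_score >= beta:
            return beta
    best = -INF
    for swing in SWINGS:
        score = -search(make_move(pos, swing), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break              # ordinary beta cutoff
    return best
```

In a real engine the null move is skipped when the side to move is in check or in zugzwang-prone endings, and implementations vary R and add verification searches; as Uniacke notes, the technique kept evolving well after its first use.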
Knowing
Declarative Knowledge
A Priori
Heuristic Knowledge
Empirical Knowledge
Perfect Knowledge
Perfect Knowledge usually incorporates analysis or stored results from exhaustive search, which requires no further search at interior nodes except the root. An oracle might be considered as "perfect" evaluation.
Interior Node Recognizer
Endgame Databases
as probed inside a Recognizer framework, if a certain material constellation is detected.
Procedural Knowledge
See also
Publications
1970 ...
1975 ...
1980 ...
1985 ...
Revised as Hans Berliner, Carl Ebeling (1990). Hitech. Computers, Chess, and Cognition
1990 ...
1995 ...
2000 ...
2005 ...
2010 ...
2015 ...
Forum Posts
1996 ...
2000 ...
2005 ...
Re: Search or Evaluation? by Mark Uniacke, Hiarcs Forum, October 14, 2007
2010 ...
2015 ...
Re: Chessprogams with the most chessknowing by Alvaro Cardoso, CCC, February 18, 2017
Re: Chessprogams with the most chessknowing by Mark Lefler, CCC, February 18, 2017 » Komodo
Re: Chessprogams with the most chessknowing by Marco Costalba, CCC, February 19, 2017 » Stockfish
External Links
Knowledge
Related Topics
Types of Knowledge
Musicvideo
John McLaughlin, Billy Cobham, Rick Laird, Jan Hammer, Jerry Goodman
References