Michael (Mike) Gherrity,
an American computer scientist and AI researcher from the University of California, San Diego. He received his Ph.D. in 1993 with the thesis A Game Learning Machine, elaborating on SAL (Search and Learn) [1], his General Game Playing program. With only a move generator, and the rule that the game is lost if one's own king is captured, as domain-specific chess knowledge, SAL was the first chess program to use Temporal Difference Learning [2]. In a match of 4200 games against GNU Chess (one second per move), it started out playing random moves within its two-ply search plus consistency search, a generalized Quiescence Search [3], but learned to play reasonable, though still weak, chess. It achieved eight draws, apparently due to a repetition-detection bug in GNU Chess [4].
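The core Temporal Difference idea mentioned above can be sketched as a TD(0) value update: the estimate of the current position is nudged toward the reward plus the estimate of the successor position. This is a minimal illustrative sketch of the general technique only; the state names, learning rate, and tabular representation are hypothetical, and SAL's actual implementation (a neural network trained during search) differs.

```python
# Minimal tabular TD(0) update: V(s) <- V(s) + alpha * (r + V(s') - V(s)).
# All names and values here are illustrative, not taken from SAL.

ALPHA = 0.5  # learning rate (assumed value for the example)

def td0_update(values, state, next_state, reward):
    """Move the value estimate of `state` toward reward + V(next_state)."""
    v = values.get(state, 0.0)
    target = reward + values.get(next_state, 0.0)
    values[state] = v + ALPHA * (target - v)
    return values[state]

values = {}
# A terminal win (e.g. capturing the opponent's king) yields reward 1.0;
# the credit then propagates backward to earlier positions.
td0_update(values, "s1", "terminal", 1.0)   # V(s1) becomes 0.5
td0_update(values, "s0", "s1", 0.0)         # V(s0) becomes 0.25
```

Repeated over many games, such updates let a program with almost no built-in chess knowledge gradually assign useful values to positions, which is the mechanism SAL's learning rested on.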
^Marco Block, Maro Bader, Ernesto Tapia, Marte Ramírez, Ketill Gunnarsson, Erik Cuevas, Daniel Zaldivar, Raúl Rojas (2008). Using Reinforcement Learning in Chess Engines. CONCIBE SCIENCE 2008, Research in Computing Science: Special Issue in Electronics and Biomedical Engineering, Computer Science and Informatics, ISSN 1870-4069, Vol. 35, pp. 31-40, Guadalajara, Mexico, 1.1 Related Work
Table of Contents
Selected Publications
Forum Posts
External Links
References