Learning
Learning, the process of acquiring new knowledge, involves synthesizing different types of information. Machine learning, as an aspect of computer chess programming, deals with algorithms that allow a program to change its behavior based on data, for instance data gathered while playing games against a variety of opponents, considering the final outcome and/or the game record, e.g. as a history score chart indexed by ply. Related to machine learning is evolutionary computation with its sub-areas of genetic algorithms and genetic programming, which mimic the process of natural evolution, as further mentioned in automated tuning. The process of learning often implies understanding, perception or reasoning. So-called rote learning avoids understanding and focuses on memorization. Inductive learning takes examples and generalizes rather than starting from existing knowledge. Deductive learning uses abstract concepts to make sense of examples [1].
Learning inside a Chess Program
Learning inside a chess program may address several disjoint issues. A persistent hash table remembers "important" positions from earlier games inside the search along with their exact scores [3], so that inferior positions can be avoided in advance. Opening book moves may be learned, that is, successful novelties are appended, or the probabilities of moves already stored in the book are modified, based on the outcome of a game [4]. Another application is learning the evaluation weights of various features, e.g. piece [5] or piece-square [6] values, or mobility. Programs may also learn to control search [7] or time usage [8].
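As a rough illustration of the first idea, a minimal sketch of such a persistent hash table is given below, assuming positions are identified by a 64-bit Zobrist key; the names (LearnEntry, LearnTable) and the binary file format are illustrative assumptions, not taken from any particular engine.

```cpp
// Minimal sketch of a persistent "learning" hash table remembering positions
// from earlier games together with their scores; illustrative only.
#include <cstdint>
#include <fstream>
#include <unordered_map>

struct LearnEntry {
    int16_t score;   // exact score from an earlier search, e.g. in centipawns
    int16_t depth;   // depth at which the score was obtained
};

class LearnTable {
    std::unordered_map<uint64_t, LearnEntry> table;
public:
    // Remember an "important" position; keep the deeper of two conflicting entries.
    void store(uint64_t zobristKey, int16_t score, int16_t depth) {
        auto it = table.find(zobristKey);
        if (it == table.end() || depth >= it->second.depth)
            table[zobristKey] = {score, depth};
    }
    // Probe before (or instead of) searching the position in a later game.
    bool probe(uint64_t zobristKey, LearnEntry &out) const {
        auto it = table.find(zobristKey);
        if (it == table.end()) return false;
        out = it->second;
        return true;
    }
    // Persist the table across games and sessions.
    void save(const char *path) const {
        std::ofstream f(path, std::ios::binary);
        for (const auto &kv : table) {
            f.write(reinterpret_cast<const char *>(&kv.first), sizeof kv.first);
            f.write(reinterpret_cast<const char *>(&kv.second), sizeof kv.second);
        }
    }
    void load(const char *path) {
        std::ifstream f(path, std::ios::binary);
        uint64_t key; LearnEntry e;
        while (f.read(reinterpret_cast<char *>(&key), sizeof key) &&
               f.read(reinterpret_cast<char *>(&e), sizeof e))
            table[key] = e;
    }
};
```

An engine would typically probe such a table near the root and blend a stored score into its move choice or book handling, so that lines that turned out badly in earlier games are steered around.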
Learning Paradigms
There are three major learning paradigms, each corresponding to a particular abstract learning task. These are supervised learning, unsupervised learning and reinforcement learning. Usually any given type of neural network architecture can be employed in any of those tasks.
Supervised Learning
Supervised learning is learning from examples provided by a knowledgeable external supervisor. In machine learning, supervised learning is a technique for deducing a function from training data. The training data consist of pairs of input objects and desired outputs, e.g. in computer chess a sequence of positions associated with the outcome of a game [9].
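The sketch below illustrates one way such a function could be deduced from labeled examples, assuming each position has already been reduced to a numeric feature vector and labeled with the game result (1 = win, 0.5 = draw, 0 = loss); it fits logistic-regression style weights by gradient descent. The names and parameters are assumptions for illustration, not a reference implementation.

```cpp
// Minimal sketch of supervised tuning of evaluation weights from
// (position features, game result) pairs; feature extraction is engine-specific
// and omitted here.
#include <cmath>
#include <cstddef>
#include <vector>

struct Example {
    std::vector<double> features;  // e.g. material balance, mobility, king safety
    double result;                 // outcome of the game the position came from
};

// Logistic regression by gradient descent: deduce weights mapping a
// position's features to its expected game result.
std::vector<double> fitWeights(const std::vector<Example> &data,
                               std::size_t numFeatures,
                               double learningRate = 0.01, int epochs = 100) {
    std::vector<double> w(numFeatures, 0.0);
    for (int epoch = 0; epoch < epochs; ++epoch) {
        for (const Example &ex : data) {
            double s = 0.0;
            for (std::size_t i = 0; i < numFeatures; ++i)
                s += w[i] * ex.features[i];
            double predicted = 1.0 / (1.0 + std::exp(-s));  // predicted win probability
            double error = ex.result - predicted;
            for (std::size_t i = 0; i < numFeatures; ++i)
                w[i] += learningRate * error * ex.features[i];
        }
    }
    return w;
}
```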
Unsupervised Learning
Unsupervised machine learning seems much harder: the goal is to have the computer learn how to do something without being told how to do it. The learner is given only unlabeled examples, e.g. a sequence of positions from a game in progress whose final result is (still) unknown. A form of reinforcement learning can be used for unsupervised learning, where an agent bases its actions on previous rewards and punishments without necessarily learning any information about the exact ways its actions affect the world. Clustering is another method of unsupervised learning.
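As a small illustration of the clustering idea, the sketch below groups unlabeled position feature vectors with a plain k-means loop; the feature encoding, the number of clusters k, and the naive initialization are all assumptions made for brevity.

```cpp
// Minimal k-means sketch over unlabeled position feature vectors,
// as one form of unsupervised learning; illustrative only.
#include <cstddef>
#include <limits>
#include <vector>

using Vec = std::vector<double>;

static double sqDist(const Vec &a, const Vec &b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Returns the cluster index assigned to each position.
// Assumes positions.size() >= k; naive initialization from the first k positions.
std::vector<int> kMeans(const std::vector<Vec> &positions, int k, int iterations = 20) {
    std::vector<Vec> centers(positions.begin(), positions.begin() + k);
    std::vector<int> assign(positions.size(), 0);
    for (int it = 0; it < iterations; ++it) {
        // Assignment step: nearest center.
        for (std::size_t p = 0; p < positions.size(); ++p) {
            double best = std::numeric_limits<double>::max();
            for (int c = 0; c < k; ++c) {
                double d = sqDist(positions[p], centers[c]);
                if (d < best) { best = d; assign[p] = c; }
            }
        }
        // Update step: each center becomes the mean of its assigned positions.
        for (int c = 0; c < k; ++c) {
            Vec mean(centers[c].size(), 0.0);
            int count = 0;
            for (std::size_t p = 0; p < positions.size(); ++p)
                if (assign[p] == c) {
                    for (std::size_t i = 0; i < mean.size(); ++i) mean[i] += positions[p][i];
                    ++count;
                }
            if (count > 0) {
                for (double &m : mean) m /= count;
                centers[c] = mean;
            }
        }
    }
    return assign;
}
```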
Reinforcement Learning
see main page Reinforcement Learning
Reinforcement learning is defined not by characterizing learning methods, but by characterizing a learning problem. Reinforcement learning is learning what to do, how to map situations to actions, so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. The reinforcement learning problem is deeply indebted to the idea of Markov decision processes (MDPs) from the field of optimal control.
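A minimal sketch of a temporal-difference update over the positions of a single game is given below; it assumes a linear evaluation squashed to a win probability and uses the final game result as the only external reward. This is a simplification in the spirit of the TD(λ)/TDLeaf approaches listed in the publications below, not the actual algorithm of any particular program.

```cpp
// Minimal TD(0)-style weight update from one game's positions; illustrative only.
#include <cmath>
#include <cstddef>
#include <vector>

static double predict(const std::vector<double> &w, const std::vector<double> &x) {
    double s = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i) s += w[i] * x[i];
    return 1.0 / (1.0 + std::exp(-s));      // expected result from this position
}

// positions: feature vectors of successive positions of one game;
// finalReward: 1 = win, 0.5 = draw, 0 = loss, only known when the game ends.
void tdUpdate(std::vector<double> &w,
              const std::vector<std::vector<double>> &positions,
              double finalReward, double alpha = 0.01) {
    for (std::size_t t = 0; t < positions.size(); ++t) {
        double v = predict(w, positions[t]);
        // Target is the next position's predicted value, or the actual reward at the end.
        double target = (t + 1 < positions.size()) ? predict(w, positions[t + 1])
                                                   : finalReward;
        double delta = target - v;           // temporal-difference error
        double grad = v * (1.0 - v);         // derivative of the sigmoid
        for (std::size_t i = 0; i < w.size(); ++i)
            w[i] += alpha * delta * grad * positions[t][i];
    }
}
```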
Learning Topics
Programs
See also
Selected Publications [10]
1940 ...
1950 ...
Alan Turing, Jack Copeland (editor) (2004). The Essential Turing, Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma. Oxford University Press, amazon, google books
1955 ...
Claude Shannon, John McCarthy (eds.) (1956). Automata Studies. Annals of Mathematics Studies, No. 34, pdf
1960 ...
Herbert Simon, Edward Feigenbaum (1964). An Information-processing Theory of Some Effects of Similarity, Familiarization, and Meaningfulness in Verbal Learning. Journal of Verbal Learning and Verbal Behavior, Vol. 3, No. 5, pdf
1965 ...
Donald Michie (1966). Game Playing and Game Learning Automata. Advances in Programming and Non-Numerical Computation, Leslie Fox (ed.), pp. 183-200. Oxford, Pergamon. » Includes Appendix: Rules of SOMAC by John Maynard Smith, introduces Expectiminimax tree [14]
1970 ...
A. Harry Klopf (1972). Brain Function and Adaptive Systems - A Heterostatic Theory. Air Force Cambridge Research Laboratories, Special Reports, No. 133, pdf
1975 ...
Jacques Pitrat (1976). A Program to Learn to Play Chess. Pattern Recognition and Artificial Intelligence, pp. 399-419. Academic Press Ltd. London, UK. ISBN 0-12-170950-7.
Jacques Pitrat (1976). Realization of a Program Learning to Find Combinations at Chess. Computer Oriented Learning Processes (ed. J. Simon). Noordhoff, Groningen, The Netherlands.
Pericles Negri (1977). Inductive Learning in a Hierarchical Model for Representing Knowledge in Chess End Games. pdf
Boris Stilman (1977). The Computer Learns. in 1976 US Computer Chess Championship, by David Levy, Computer Science Press, Woodland Hills, CA, pp. 83-90
Richard Sutton (1978). Single channel theory: A neuronal theory of learning. Brain Theory Newsletter 3, No. 3/4, pp. 72-75.
Ross Quinlan (1979). Discovering Rules by Induction from Large Collections of Examples. Expert Systems in the Micro-electronic Age, pp. 168-201. Edinburgh University Press (Introducing ID3)
1980 ...
A. Harry Klopf (1982). The Hedonistic Neuron: A Theory of Memory, Learning, and Intelligence. Hemisphere Publishing Corporation, University of Michigan
Ross Quinlan (1983). Learning efficient classification procedures and their application to chess end games. In Machine Learning: An Artificial Intelligence Approach, pages 463-482. Tioga, Palo Alto
Alen Shapiro (1983). The Role of Structured Induction in Expert Systems. University of Edinburgh, Machine Intelligence Research Unit (Ph.D. thesis)
1985 ...
- Tony Marsland (1985). Evaluation-Function Factors. ICCA Journal, Vol. 8, No. 2, pdf
- Albrecht Heeffer (1985). Validating Concepts from Automated Acquisition Systems. IJCAI 85, pdf
- Hans Berliner (1985). Goals, Plans, and Mechanisms: Non-symbolically in an Evaluation Surface. Presentation at Evolution, Games, and Learning, Center for Nonlinear Studies, Los Alamos National Laboratory, May 21.
- Ryszard Michalski, Jaime Carbonell, Tom Mitchell (1985). Machine Learning: An Artificial Intelligence Approach. Morgan Kaufmann, ISBN 0-934613-09-5. google books
1986
- Steven Skiena (1986). An Overview of Machine Learning in Chess. ICCA Journal, Vol. 9, No. 1
- Jens Christensen, Richard Korf (1986). A Unified Theory of Heuristic Evaluation functions and Its Applications to Learning. Proceedings of the AAAI-86, pp. 148-152, pdf.
- Ryszard Michalski, Jaime Carbonell, Tom Mitchell (1986). Machine Learning: An Artificial Intelligence Approach, Volume II. Morgan Kaufmann, ISBN 0-934613-00-1. google books
- Tom Mitchell, Jaime Carbonell, Ryszard Michalski (1986). Machine Learning: A Guide to Current Research. The Kluwer International Series in Engineering and Computer Science, Vol. 12
- Ivan Bratko, Igor Kononenko (1986). Learning Rules from Incomplete and Noisy Data. Proceedings Unicom Seminar on the Scope of Artificial Intelligence in Statistics. Technical Press
1987
- David Slate (1987). A Chess Program that uses its Transposition Table to Learn from Experience. ICCA Journal, Vol. 10, No. 2
- Ronald L. Rivest (1987). Learning Decision Lists. Machine Learning 2,3, pdf 2001
- Gerald Tesauro, Terrence J. Sejnowski (1987). A 'Neural' Network that Learns to Play Backgammon. NIPS 1987
- Alen Shapiro (1987). Structured Induction in Expert Systems. Turing Institute Press in association with Addison-Wesley Publishing Company, Workingham, UK
- Alberto Maria Segre (1987). On the Operationality/Generality Trade-off in Explanation-based Learning. IJCAI 1987, pdf
- Alberto Maria Segre (1987). Explanation-Based Learning of Generalized Robot Assembly Plans. Ph.D. thesis, University of Illinois at Urbana-Champaign, Advisor: Gerald Francis DeJong, II
- Eric B. Baum, Frank Wilczek (1987). Supervised Learning of Probability Distributions by Neural Networks. NIPS 1987
1988
- Bruce Abramson (1988). Learning Expected-Outcome Evaluators in Chess. Proceedings of the 1988 AAAI Spring Symposium Series: Computer Game Playing, 26-28.
- Richard Sutton (1988). Learning to Predict by the Methods of Temporal Differences. Machine Learning, Vol. 3, No. 1, pdf
- David E. Goldberg, John H. Holland (1988). Genetic Algorithms and Machine Learning. Machine Learning, Vol. 3
- Kenneth A. De Jong, Alan C. Schultz (1988). Using Experience-Based Learning in Game Playing. Proceedings of the Fifth International Machine Learning Conference, CiteSeerX » Othello
- Kai-Fu Lee, Sanjoy Mahajan (1988). A Pattern Classification Approach to Evaluation Function Learning. Artificial Intelligence, Vol. 36, No. 1
- Paul E. Utgoff (1988). ID5: An incremental ID3. ML 1988
1989
- Bruce Abramson (1989). On Learning and Testing Evaluation Functions. Proceedings of the Sixth Israeli Conference on Artificial Intelligence, 1989, 7-16.
1990 ...
- Richard Sutton, Andrew Barto (1990). Time Derivative Models of Pavlovian Reinforcement. Learning and Computational Neuroscience: Foundations of Adaptive Networks: 497-537.
- Bruce Abramson (1990). On Learning and Testing Evaluation Functions. Journal of Experimental and Theoretical Artificial Intelligence 2: 241-251.
- Tony Scherzer, Linda Scherzer, Dean Tjaden (1990). Learning in Bebe. Computers, Chess, and Cognition » Mephisto Best-Publication Award
- Yves Kodratoff, Ryszard Michalski (1990). Machine Learning: An Artificial Intelligence Approach, Volume III. Morgan Kaufmann, ISBN 1-55860-119-8. google books
- Michèle Sebag (1990). A symbolic-numerical approach for supervised learning from examples and rules. Ph.D. thesis, Paris Dauphine University
1991
- Robert Schapire (1991). The Design and Analysis of Efficient Learning Algorithms. Ph.D. thesis, Massachusetts Institute of Technology, supervisor Ronald L. Rivest, pdf
- Gerhard Mehlsam, Hermann Kaindl, Wilhelm Barth (1991). Feature Construction During Tree Learning. GWAI 1991: 50-61.
- Alex van Tiggelen (1991). Neural Networks as a Guide to Optimization - The Chess Middle Game Explored. ICCA Journal, Vol. 14, No. 3
- William Tunstall-Pedoe (1991). Genetic Algorithms Optimizing Evaluation Functions. ICCA Journal, Vol. 14, No. 3
- Tony Scherzer, Linda Scherzer, Dean Tjaden (1991). Learning in Bebe. ICCA Journal, Vol. 14, No. 4
- Steven Walczak (1991). Predicting Actions from Induction on Past Performance. Proceedings of the 8th International Workshop on Machine Learning , pp. 275-279. Morgan Kaufmann
- Paul E. Utgoff, Jeffery A. Clouse (1991). Two Kinds of Training Information for Evaluation Function Learning. University of Massachusetts, Amherst, Proceedings of the AAAI 1991
- Byoung-Tak Zhang, Gerd Veenker (1991). Neural networks that teach themselves through genetic discovery of novel examples. IEEE International Joint Conference on Neural Networks
1992
- Miroslav Kubat (1992). Introduction to Machine Learning. Advanced Topics in Artificial Intelligence 1992
- Michael Bain (1992). Learning optimal chess strategies. Proc. Intl. Workshop on Inductive Logic Programming (ed. Stephen Muggleton), Institute for New Generation Computer Technology, Tokyo, Japan.
- Eduardo F. Morales (1992). First-Order Induction of Patterns in Chess. Ph.D. Thesis, The Turing Institute, University of Strathclyde, Glasgow
- Eduardo F. Morales (1992). Learning Chess Patterns. Inductive Logic Programming (ed. Stephen Muggleton), Academic Press, The Apic Series, London, UK
- Gerald Tesauro (1992). Temporal Difference Learning of Backgammon Strategy. ML 1992
- Chris Watkins, Peter Dayan (1992). Q-learning. Machine Learning, Vol. 8, No. 2
- Gerald Tesauro (1992). Practical Issues in Temporal Difference Learning. Machine Learning, Vol. 8, No. 3-4
- Manuela Veloso (1992). Learning by Analogical Reasoning in General Purpose Problem Solving. Ph.D. thesis, Carnegie Mellon University, advisor Jaime Carbonell
1993
- Michael Gherrity (1993). A Game Learning Machine. Ph.D. Thesis, University of California, San Diego, zipped ps
- Shaul Markovitch, Yaron Sella (1993). Learning of Resource Allocation Strategies for Game Playing, The proceedings of the 13th International Joint Conference on Artificial Intelligence, Chambery, France. pdf
- David Carmel, Shaul Markovitch (1993). Learning Models of Opponent's Strategy in Game Playing. AAAI Proceedings, CiteSeerX
- Dan Geiger, Azaria Paz, Judea Pearl (1993). Learning simple causal structures. International Journal of Intelligent Systems, 8, pp. 231-247.
- Sebastian Thrun, Tom Mitchell (1993). Integrating Inductive Neural Network Learning and Explanation-Based Learning. IJCAI 1993, zipped ps
- Alois Heinz, Christoph Hense (1993). Bootstrap learning of α-β-evaluation functions. ICCI 1993, pdf
1994
- Michael Bain, Stephen Muggleton (1994). Learning Optimal Chess Strategies. Machine Intelligence 13 (eds. K. Furukawa and Donald Michie), pp. 291-309. Oxford University Press, Oxford, UK. ISBN 0198538502.
1995 ...
- Gerhard Mehlsam, Hermann Kaindl, Wilhelm Barth (1995). Feature Construction during Tree Learning. GOSLER Final Report 1995: 391-403
- Chris McConnell (1995). Tuning Evaluation Functions for Search. ps or pdf from CiteSeerX
- David Heckerman, Dan Geiger, Max Chickering (1995). Learning Bayesian Networks: The Combination of Knowledge and Statistical Data. Machine Learning, Vol. 20, pdf
- Tristan Cazenave (1995). Learning and Problem Solving in Gogol, a Go playing program. pdf
- Gerald Tesauro (1995). Temporal Difference Learning and TD-Gammon. Communications of the ACM Vol. 38, No. 3
- Sebastian Thrun (1995). Learning to Play the Game of Chess. in Gerald Tesauro, David S. Touretzky, Todd K. Leen (eds.) Advances in Neural Information Processing Systems 7, MIT Press
- Marco Wiering (1995). TD Learning of Game Evaluation Functions with Hierarchical Neural Architectures. Master's thesis, University of Amsterdam, pdf
- Michael A. Arbib (ed.) (1995, 2002). The Handbook of Brain Theory and Neural Networks. The MIT Press
- Nicol N. Schraudolph (1995). Optimization of Entropy with Neural Networks. Ph.D. thesis, University of California, San Diego
- Robert W. Howard (1995). Learning and Memory: Major Ideas, Principles, Issues and Applications. Praeger, amazon.com
1996
- Leemon C. Baird III, Mance E. Harmon, A. Harry Klopf (1996). Reinforcement Learning: An Alternative Approach to Machine Intelligence. pdf
- Sebastian Thrun (1996). Explanation-Based Neural Network Learning: A Lifelong Learning Approach. Kluwer Academic Publishers
- Leslie Pack Kaelbling, Michael L. Littman, Andrew W. Moore (1996). Reinforcement Learning: A Survey. JAIR Vol. 4, pdf
- Eduardo F. Morales (1996). Learning Playing Strategies in Chess. Computational Intelligence, Vol. 12, No. 1, CiteSeerX
- Wee Sun Lee (1996). Agnostic Learning and Single Hidden Layer Neural Networks. Ph.D. thesis, Australian National University, ps
- Johannes Fürnkranz (1996). Machine Learning in Computer Chess: The Next Generation. ICCA Journal, Vol. 19, No. 3, zipped ps
- Adriaan de Groot, Fernand Gobet (1996). Perception and memory in chess. Heuristics of the professional eye. Assen: Van Gorcum, The Netherlands. ISBN 90-232-2949-5. Chapter 9; A discussion: Two authors, two different views? word
- Stuart Russell (1996). Machine Learning. Chapter 4 of M. A. Boden (Ed.), Artificial Intelligence, Academic Press. Part of the Handbook of Perception and Cognition, ps
- Barney Pell, Susan L. Epstein, Robert Levinson (1996). Introduction to the special issue on games: Structure and Learning. Computational Intelligence, Vol. 12, No. 1, pdf
- Robert Levinson (1996). General Game-Playing and Reinforcement Learning. Computational Intelligence, Vol. 12, No. 1
- Tristan Cazenave (1996). Learning to forecast by explaining the consequences of actions. pdf
- Tristan Cazenave (1996). Self fuzzy learning. pdf
- Yoav Freund, Robert Schapire (1996). Game Theory, On-line Prediction and Boosting. COLT 1996, pdf
1997
- Yoav Freund, Robert Schapire (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, Vol. 55, No. 1, 1996 pdf » AdaBoost
- Sepp Hochreiter, Jürgen Schmidhuber (1997). Long short-term memory. Neural Computation, Vol. 9, No. 8, pdf [17]
- Eduardo F. Morales (1997). On Learning How to Play. Advances in Computer Chess 8, CiteSeerX
- Don Beal, Martin C. Smith (1997). Learning Piece Values Using Temporal Differences. ICCA Journal, Vol. 20, No. 3
- Kieran Greer, Piyush Ojha, David A. Bell (1997). Learning Search Heuristics from Examples: A Study in Computer Chess, Seventh Conference of the Spanish Association for Artificial Intelligence, CAEPIA’97, November, pp. 695-704.
- Nir Friedman, Moises Goldszmidt, David Heckerman, Stuart Russell (1997). Where is the Impact of Bayesian Networks in Learning? In Proc. Fifteenth International Joint Conference on Artificial Intelligence, Nagoya, Japan, ps
- Ronald Parr, Stuart Russell (1997). Reinforcement Learning with Hierarchies of Machines. In Advances in Neural Information Processing Systems 10, MIT Press, zipped ps
- Tristan Cazenave (1997). Gogol (an Analytical Learning Program). IJCAI'97, pdf
- Tom Mitchell (1997). Machine Learning. McGraw Hill
- Michèle Sebag (1997). Stochastic Heuristics for Machine Learning & Machine Learning for Stochastic Optimization. Habilitation, Paris-Sud 11 University
- William Uther, Manuela M. Veloso (1997). Adversarial Reinforcement Learning. Carnegie Mellon University, ps
- William Uther, Manuela M. Veloso (1997). Generalizing Adversarial Reinforcement Learning. Carnegie Mellon University, ps
- Marco Wiering, Jürgen Schmidhuber (1997). HQ-learning. Adaptive Behavior, Vol. 6, No 2
1998
- Jonathan Baxter, Andrew Tridgell, Lex Weaver (1998). Knightcap: A chess program that learns by combining td(λ) with game-tree search, Proceedings of the 15th International Conference on Machine Learning, pdf via citeseerX
- Jonathan Baxter, Andrew Tridgell, Lex Weaver (1998). TDLeaf(lambda): Combining Temporal Difference Learning with Game-Tree Search. Australian Journal of Intelligent Information Processing Systems, Vol. 5 No. 1, arXiv:cs/9901001
- Jonathan Baxter, Andrew Tridgell, Lex Weaver (1998). Experiments in Parameter Learning Using Temporal Differences. ICCA Journal, Volume 21 No. 2, pdf
- Lev Finkelstein, Shaul Markovitch (1998). Learning to Play Chess Selectively by Acquiring Move Patterns. ICCA Journal, Vol. 21, No. 2, pdf
- Csaba Szepesvári (1998). Reinforcement Learning: Theory and Practice. Proceedings of the 2nd Slovak Conference on Artificial Neural Networks, zipped ps
- Richard Sutton, Andrew Barto (1998). Reinforcement Learning: An Introduction. MIT Press
- Ryszard Michalski, Ivan Bratko, Miroslav Kubat (eds.) (1998). Machine Learning and Data Mining: Methods and Applications. John Wiley & Sons
- Nobusuke Sasaki, Yasuji Sawada, Jin Yoshimura (1998). A Neural Network Program of Tsume-Go. CG 1998 [18]
- Tristan Cazenave (1998). Machine Introspection for Machine Learning. Tucson 1998, pdf
- Tristan Cazenave (1998). Integration of Different Reasoning Modes in a Go Playing and Learning System. pdf
- Tristan Cazenave (1998). Learning with Fuzzy Definitions of Goals. pdf
- Ryszard Michalski (1998). Learnable Evolution: Combining Symbolic and Evolutionary Learning. Proceedings of the Fourth International Workshop on Multistrategy Learning (MSL'98)
- Krzysztof Krawiec, Roman Slowinski, Irmina Szczesniak (1998). Pedagogical Method for Extraction of Symbolic Knowledge from Neural Networks. Rough Sets and Current Trends in Computing 1998
- Marco Wiering, Jürgen Schmidhuber (1998). Fast online Q (λ). Machine Learning, Vol. 33, No. 1
- Miroslav Kubat, Ivan Bratko, Ryszard Michalski (1998). A Review of Machine Learning Methods. pdf
1999
- Vassilis Papavassiliou, Stuart Russell (1999). Convergence of reinforcement learning with general function approximators. In Proc. IJCAI-99, Stockholm, ps
2000 ...
- Miroslav Kubat, Jan Žižka (2000). Learning Middle Game Patterns in Chess: A Case Study. Lecture Notes in Computer Science, Vol. 1821, Springer
- Vladimir Vapnik (2000). The nature of statistical learning theory. Springer
- Sebastian Thrun, Michael L. Littman (2000). A Review of Reinforcement Learning. AI Magazine, Vol. 21, No. 1
- Johannes Fürnkranz (2000). Machine Learning in Games: A Survey. Austrian Research Institute for Artificial Intelligence, OEFAI-TR-2000-3, pdf
- Johannes Fürnkranz, Bernhard Pfahringer, Hermann Kaindl, Stefan Kramer (2000). Learning to Use Operational Advice. ECAI-00, pdf
- Jack van Rijswijck (2000). Learning from Perfection: A Data Mining Approach to Evaluation Function Learning in Awari. CG 2000, pdf
- Robert Levinson, Ryan Weber (2000). Chess Neighborhoods, Function Combination, and Reinforcement Learning. CG 2000
- Jan Ramon, Tom Francis, Hendrik Blockeel (2000). Learning a Go Heuristic with Tilde. CG 2000
- Levente Kocsis, Jos Uiterwijk, Jaap van den Herik (2000). Learning Time Allocation using Neural Networks. CG 2000, postscript
- Michael Buro (2000). Toward Opening Book Learning. Games in AI Research (eds. Jaap van den Herik and Hiroyuki Iida), pp. 47-54. Universiteit Maastricht, Maastricht, The Netherlands. ISBN 90-621-6416-1.
- Fabien Letouzey, François Denis, Rémi Gilleron (2000). Learning from Positive and Unlabeled Examples. ALT 2000: 71-85, ps
- Andrew Ng, Stuart Russell (2000). Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, Stanford, California: Morgan Kaufmann, pdf
- Dean F. Hougen, Maria Gini, James R. Slagle (2000). An Integrated Connectionist Approach to Reinforcement Learning for Robotic Control. ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
- Ryszard Michalski (2000). LEARNABLE EVOLUTION MODEL: Evolutionary Processes Guided by Machine Learning. Machine Learning, Vol. 38 [19]
- Jonathan Baxter, Andrew Tridgell, Lex Weaver (2000). Learning to Play Chess Using Temporal Differences. Machine Learning, Vol 40, No. 3, pdf
- Michael Bain, Stephen Muggleton, Ashwin Srinivasan (2000). Generalising Closed World Specialisation: A Chess End Game Application. CiteSeerX
2001
- Nicol N. Schraudolph, Peter Dayan, Terrence J. Sejnowski (2001). Learning to Evaluate Go Positions via Temporal Difference Methods. in Norio Baba, Lakhmi C. Jain (eds.) (2001). Computational Intelligence in Games, Studies in Fuzziness and Soft Computing. Physica-Verlag, revised version of 1994 paper
- Jonathan Schaeffer, Markian Hlynka, Vili Jussila (2001). Temporal Difference Learning Applied to a High-Performance Game-Playing Program. IJCAI 2001
- Michael Bowling, Manuela M. Veloso (2001). Rational and Convergent Learning in Stochastic Games. IJCAI 2001
- Levente Kocsis, Jos Uiterwijk, Jaap van den Herik (2001). Move Ordering using Neural Networks, IEA/AIE 2001, LNCS 2070, 45-50 ps
- Marty Hirsch (2001). Machine Learning in MChess Professional. Advances in Computer Games 9
- Yngvi Björnsson, Tony Marsland (2001). Learning Search Control in Adversary Games. Advances in Computer Games 9, pp. 157-174. pdf
- Robert Levinson, Ryan Weber (2001). Chess Neighborhoods, Function Combination, and Reinforcement Learning. In Computers and Games (eds. Tony Marsland and I. Frank). Lecture Notes in Computer Science, Springer, pdf
- Jean Hayes Michie (2001). Machine Learning and Light Relief: A Review of Truth from Trash. AI Magazine Vol. 22 No. 4, pdf
- Pieter Spronck, Ida Sprinkhuizen-Kuyper, Eric Postma (2001). Infused Evolutionary Learning. Proceedings of the Eleventh Belgian-Dutch Conference on Machine Learning, pdf, pdf
- Charles Elkan (2001). The Foundations of Cost-Sensitive Learning. IJCAI 2001
- Alex B. Meijer, Henk Koppelaar (2001). A learning architecture for the game of Go. Game-On 2001
- Johannes Fürnkranz, Miroslav Kubat (2001). Machines that Learn to Play Games. Advances in Computation: Theory and Practice, Vol. 8, NOVA Science Publishers
2002
- Yngvi Björnsson, Tony Marsland (2002). Learning Control of Search Extensions. Proceedings of the 6th Joint Conference on Information Sciences (JCIS 2002), pp. 446-449. pdf
- Michael Buro (2002). Improving Mini-max Search by Supervised Learning. Artificial Intelligence, Vol. 134, No. 1, pp. 85-99. ISSN 0004-3702. pdf
- Levente Kocsis, Jos Uiterwijk, Eric Postma, Jaap van den Herik (2002). The Neural MoveMap Heuristic in Chess. CG 2002, ps
- Erik van der Werf, Jos Uiterwijk, Eric Postma, Jaap van den Herik (2002). Local Move Prediction in Go. CG 2002
- Ari Shapiro, Gil Fuchs, Robert Levinson (2002). Learning a Game Strategy Using Pattern-Weights and Self-play. CG 2002, pdf
- Mark Winands, Levente Kocsis, Jos Uiterwijk, Jaap van den Herik (2002). Temporal difference learning and the Neural MoveMap heuristic in the game of Lines of Action. In Mehdi, Q., Gouch, N., and Cavazza, M., editors, GAME-ON 2002 3rd International Conference on Intelligent Games and Simulation, pages 99-103. SCS Europe Bvba. pdf
- Roman Grekovs (2002). Methods of Fuzzy Pattern Recognition. Riga Technical University, ps, covers Fuzzy Kora algorithm
- Pieter Spronck, Ida Sprinkhuizen-Kuyper, Eric Postma (2003). Improved opponent intelligence through offline learning. International Journal of Intelligent Games & Simulation, Vol. 2
- Krzysztof Krawiec (2002). Genetic Programming-based Construction of Features for Machine Learning and Knowledge Discovery Tasks. Genetic Programming and Evolvable Machines, Vol. 3, No. 4
- Peter Auer, Nicolò Cesa-Bianchi, Paul Fischer (2002). Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, Vol. 47, No. 2, pdf
- Paul E. Utgoff, David J. Stracuzzi (2002). Many-Layered Learning. Neural Computation, Vol. 14, No. 10, pdf
2003
- Levente Kocsis, Jaap van den Herik, Jos Uiterwijk (2003). Two Learning Algorithms for Forward Pruning. ICGA Journal, Vol 26, No. 3, ps
- Levente Kocsis (2003). Learning Search Decisions. Ph.D. thesis, Universiteit Maastricht, ps
- Marco Block-Berlitz (2003). Reinforcement Learning in der Schachprogrammierung. Studienarbeit, Freie Universität Berlin, Dozent: Prof. Dr. Raúl Rojas, pdf (German)
- Dave Gomboc, Tony Marsland, Michael Buro (2003). Evaluation Function Tuning via Ordinal Correlation. Advances in Computer Games 10, pdf
- Stuart Russell, Peter Norvig (2003). Artificial Intelligence: A Modern Approach. 2nd edition, 3rd edition 2009
- Judea Pearl, Stuart Russell (2003). Bayesian Networks. In Michael A. Arbib, Ed., The Handbook of Brain Theory and Neural Networks, 2nd edition, MIT Press, pdf
- David J.C. MacKay (2003). Information Theory, Inference, and Learning Algorithms.
- Pedro Campos, Thibault Langlois (2003). Abalearn: a Program that Learns How to Play Abalone. ICGA Journal, Vol. 26, No. 4
- David Gleich (2003). Machine Learning in Computer Chess: Genetic Programming and KRK. Harvey Mudd College, pdf
- Henk Mannen (2003). Learning to play chess using reinforcement learning with database games. Master’s thesis, Cognitive Artificial Intelligence, Utrecht University
- Jan Žižka, Michal Mádr (2003). Learning Representative Patterns from Real Chess Positions: A Case Study. IICAI 2003
2004
- Yngvi Björnsson, Vignir Hafsteinsson, Ársæll Jóhannsson, Einar Jónsson (2004). Efficient Use of Reinforcement Learning in a Computer Game. In Computer Games: Artificial Intelligence, Design and Education (CGAIDE'04), pp. 379–383, 2004. pdf
- Dave Gomboc (2004). Tuning Evaluation Functions by Maximizing Concordance. Master of Science Thesis, pdf
2005 ...
- Dave Gomboc, Michael Buro, Tony Marsland (2005). Tuning evaluation functions by maximizing concordance. Theoretical Computer Science, Volume 349, Issue 2, pp. 202-229, pdf
- David B. Fogel, Timothy J. Hays, Sarah L. Hahn, James Quon (2005). Further Evolution of a Self-Learning Chess Program. IEEE Symposium on Computational Intelligence & Games, CiteSeerX
- Tristan Caulfield, Joanna J. Bryson (2005). Chess by Imitation. Department of Computer Science, University of Bath, pdf [21]
- Marco Wiering, Jan Peter Patist, Henk Mannen (2005). Learning to Play Board Games using Temporal Difference Methods. Technical Report, Utrecht University, UU-CS-2005-048, pdf
- David J. Stracuzzi (2005). Scalable learning in many layers. University of Massachusetts Amherst, TR-05-02, pdf
- Levente Kocsis, Csaba Szepesvári, Mark Winands (2005). RSPSA: Enhanced Parameter Optimization in Games. Advances in Computer Games 11, pdf
- Christian Posthoff, Michael Schlosser (2005). Optimal strategies — Learning from examples — Boolean equations. in Klaus P. Jantke, Steffen Lange (eds.) (2005). Algorithmic Learning for Knowledge-Based Systems, Lecture Notes in Computer Science 961, Springer
2006
- Levente Kocsis, Csaba Szepesvári (2006). Universal Parameter Optimisation in Games Based on SPSA. Machine Learning, Special Issue on Machine Learning and Games, Vol. 63, No. 3
- Sverrir Sigmundarson, Yngvi Björnsson (2006). Value Back-Propagation vs. Backtracking in Real-Time Search. In Proceedings of the National Conference on Artificial Intelligence (AAAI), Workshop on Learning For Search, pp. 136-141, AAAI Press, Boston, Massachusetts, USA, July 2006. pdf
- Sylvain Gelly, Olivier Teytaud, Nicolas Bredèche, Marc Schoenauer (2006). Universal Consistency and Bloat in GP. Some theoretical considerations about Genetic Programming from a Statistical Learning Theory viewpoint. pdf
- Sylvain Gelly, Jérémie Mary, Olivier Teytaud (2006). Learning for stochastic dynamic programming. pdf
- Olivier Teytaud, Sylvain Gelly (2006). General lower bounds for evolutionary algorithms. pdf
- Makoto Miwa, Daisaku Yokoyama, Takashi Chikayama (2006). Automatic Construction of Static Evaluation Functions for Computer Game Players. ALT ’06
- Tom Mitchell (2006). The Discipline of Machine Learning. CMU-ML-06-108, Carnegie Mellon University, pdf
- Tom Mitchell (2006). Human and Machine Learning. Carnegie Mellon University, slides as pdf
- Jeff Rollason (2006). Playing Stronger by learning. AI Factory, Winter 2006
- Simon Lucas, Thomas Philip Runarsson (2006). Temporal Difference Learning versus Co-Evolution for Acquiring Othello Position Evaluation. IEEE Symposium on Computational Intelligence and Games » Othello
- Nicolò Cesa-Bianchi, Gábor Lugosi (2006). Prediction, Learning, and Games. Cambridge University Press
- David J. Stracuzzi (2006). Scalable Knowledge Acquisition through Cumulative Learning and Memory Organization. Ph.D. thesis, University of Massachusetts Amherst, advisor Paul E. Utgoff, pdf
- Michael Bowling, Johannes Fürnkranz, Thore Graepel, Ron Musick (2006). Machine learning and Games. Machine Learning, Vol. 63, No. 3
2007
- Sylvain Gelly, Olivier Teytaud, Jérémie Mary (2007). Active learning in regression, with application to stochastic dynamic programming. ICINCO and CAP, pdf
- Sylvain Gelly (2007). A Contribution to Reinforcement Learning; Application to Computer Go. Ph.D. thesis, pdf
- Jean-Yves Audibert, Rémi Munos, Csaba Szepesvári (2007). Tuning Bandit Algorithms in Stochastic Environments. pdf
- Makoto Miwa, Daisaku Yokoyama, Takashi Chikayama (2007). Automatic Generation of Evaluation Features for Computer Game Players. pdf
- Yong Duan, Baoxia Cui, Xinhe Xu (2007). State Space Partition for Reinforcement Learning Based on Fuzzy Min-Max Neural Network. ISNN 2007
- Yasuhiro Osaki, Kazutomo Shibahara, Yasuhiro Tajima, Yoshiyuki Kotani (2007). Reinforcement Learning of Evaluation Functions Using Temporal Difference-Monte Carlo learning method. 12th Game Programming Workshop
- Igor Kononenko, Matjaž Kukar (2007). Machine Learning and Data Mining: Introduction to Principles and Algorithms.
- Krzysztof Krawiec (2007). Generative Learning of Visual Concepts using Multiobjective Genetic Programming. Pattern Recognition Letters, Vol. 28, No. 16
- Simon Lucas (2007). Learning to play Othello with N-tuple systems. Australian Journal of Intelligent Information Processing Systems, Special Issue on Game Technology, Vol. 9, No. 4 » Othello
- Edward P. Manning (2007). Temporal Difference Learning of an Othello Evaluation Function for a Small Neural Network with Shared Weights. IEEE Symposium on Computational Intelligence and AI in Games » Othello
- David J. Stracuzzi (2007). Randomized Feature Selection. in Huan Liu, Hiroshi Motoda (eds.) Computational Methods of Feature Selection. CRC Press, pdf
- Johannes Fürnkranz (2007). Recent advances in machine learning and game playing. ÖGAI Journal, Vol. 26, No. 2, Computer Game Playing, pdf
2008
- Marco Block, Maro Bader, Ernesto Tapia, Marte Ramírez, Ketill Gunnarsson, Erik Cuevas, Daniel Zaldivar, Raúl Rojas (2008). Using Reinforcement Learning in Chess Engines. CONCIBE SCIENCE 2008, Research in Computing Science: Special Issue in Electronics and Biomedical Engineering, Computer Science and Informatics, ISSN:1870-4069, Vol. 35, pp. 31-40, Guadalajara, Mexico, pdf
- Sacha Droste, Johannes Fürnkranz (2008). Learning of Piece Values for Chess Variants. Technical Report TUD–KE–2008-07, Knowledge Engineering Group, TU Darmstadt, pdf
- Sacha Droste, Johannes Fürnkranz (2008). Learning the Piece Values for three Chess Variants. ICGA Journal, Vol 31, No. 4
- Richard Sutton, Csaba Szepesvári, Hamid Reza Maei (2008). A Convergent O(n) Algorithm for Off-policy Temporal-difference Learning with Linear Function Approximation, pdf (draft)
- Matej Guid, Martin Možina, Jana Krivec, Aleksander Sadikov, Ivan Bratko (2008). Learning Positional Features for Annotating Chess Games: A Case Study. CG 2008, pdf
- Martin Možina, Matej Guid, Jana Krivec, Aleksander Sadikov, Ivan Bratko (2008). Fighting Knowledge Acquisition Bottleneck with Argument Based Machine Learning. 18th European Conference on Artificial Intelligence (ECAI 2008), Patras, Greece. pdf
- Cécile Germain-Renaud, Julien Pérez, Balázs Kégl, Charles Loomis (2008). Grid Differentiated Services: a Reinforcement Learning Approach. In 8th IEEE Symposium on Cluster Computing and the Grid. Lyon, pdf
- Yasuhiro Osaki, Kazutomo Shibahara, Yasuhiro Tajima, Yoshiyuki Kotani (2008). An Othello Evaluation Function Based on Temporal Difference Learning using Probability of Winning. CIG'08, pdf
- Antonio Fernández, Antonio Salmerón (2008). BayesChess: A computer chess program based on Bayesian networks. Pattern Recognition Letters, Vol. 29, No. 8
- Joaquin Vanschoren, Bernhard Pfahringer, Geoffrey Holmes (2008). Learning from the Past with Experiment Databases. PRICAI 2008, pdf
- Ilya Sutskever, Vinod Nair (2008). Mimicking Go Experts with Convolutional Neural Networks. ICANN 2008, pdf » Go
- Andrew Cook (2008). Chunk Learning and Move Prompting: Making Moves in Chess. Technical Report CSR-08-12, University of Birmingham
- Byoung-Tak Zhang (2008). Hypernetworks: A molecular evolutionary architecture for cognitive learning and memory. IEEE Computational Intelligence Magazine, Vol. 3, No. 3, pdf
- Maria Cutumisu, Michael Bowling, Duane Szafron, Richard Sutton (2008). Agent Learning using Action-Dependent Learning Rates in Computer Role-Playing Games. Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference, pdf
2009
2010 ...
- Jacek Mańdziuk (2010). Knowledge-Free and Learning-Based Methods in Intelligent Game Playing. Studies in Computational Intelligence, Vol. 276, Springer
- Joel Veness, Kee Siong Ng, Marcus Hutter, David Silver (2010). Reinforcement Learning via AIXI Approximation. Association for the Advancement of Artificial Intelligence (AAAI), pdf
- Omid David, Moshe Koppel, Nathan S. Netanyahu (2010). Expert-Driven Genetic Algorithms for Simulating Evaluation Functions. pdf
- Omid David, Nathan S. Netanyahu, Yoav Rosenberg, Moshe Shimoni (2010). Genetic Algorithms for Automatic Classification of Moving Objects. ACM Genetic and Evolutionary Computation Conference (GECCO '10), Portland, OR, pdf
- Omid David, Moshe Koppel, Nathan S. Netanyahu (2010). Genetic Algorithms for Automatic Search Tuning. ICGA Journal, Vol 33, No. 2
- Mesut Kirci (2010). Feature Learning using State Differences. Master's thesis, Department of Computing Science, University of Alberta, pdf » General Game Playing
- Amine Bourki, Matthieu Coulm, Philippe Rolet, Olivier Teytaud, Paul Vayssière (2010). Parameter Tuning by Simple Regret Algorithms and Multiple Simultaneous Hypothesis Testing. pdf
- Julien Pérez, Cécile Germain-Renaud, Balázs Kégl, Charles Loomis (2010). Multi-objective Reinforcement Learning for Responsive Grids. In The Journal of Grid Computing. pdf
- Jean-Yves Audibert (2010). PAC-Bayesian aggregation and multi-armed bandits. Habilitation thesis, Université Paris Est, pdf, slides as pdf
- Hamid Reza Maei, Richard Sutton (2010). GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In Proceedings of the Third Conference on Artificial General Intelligence
- Karol Walędzik, Jacek Mańdziuk (2010). The Layered Learning method and its Application to Generation of Evaluation Functions for the Game of Checkers. 11. PPSN, pdf » Checkers
- Krzysztof Krawiec, Marcin Szubert (2010). Coevolutionary Temporal Difference Learning for small-board Go. IEEE Congress on Evolutionary Computation » Go
- Edward P. Manning (2010). Using Resource-Limited Nash Memory to Improve an Othello Evaluation Function. IEEE Transactions on Computational Intelligence and AI in Games, Vol. 2, No. 1 » Othello
- Edward P. Manning (2010). Coevolution in a Large Search Space using Resource-limited Nash Memory. GECCO '10 » Othello
- Marco Wiering (2010). Self-play and using an expert to learn to play backgammon with temporal difference learning. Journal of Intelligent Learning Systems and Applications, Vol. 2, No. 2
2011
- Joel Veness (2011). Approximate Universal Artificial Intelligence and Self-Play Learning for Games. Ph.D. thesis, University of New South Wales, supervisors: Kee Siong Ng, Marcus Hutter, Alan Blair, William Uther, John Lloyd; pdf
- Mesut Kirci, Nathan Sturtevant, Jonathan Schaeffer (2011). A GGP Feature Learning Algorithm. KI 25(1): 35-42, pdf » General Game Playing
- I-Chen Wu, Hsin-Ti Tsai, Hung-Hsuan Lin, Yi-Shan Lin, Chieh-Min Chang, Ping-Hung Lin (2011). Temporal Difference Learning for Connect6. Advances in Computer Games 13
- Tomoyuki Kaneko, Kunihito Hoki (2011). Analysis of Evaluation-Function Learning by Comparison of Sibling Nodes. Advances in Computer Games 13
- Jiao Wang, Shiyuan Li, Jitong Chen, Xin Wei, Huizhan Lv, Xinhe Xu (2011). 4*4-Pattern and Bayesian Learning in Monte-Carlo Go. Advances in Computer Games 13
- Charles Elkan (2011). Reinforcement Learning with a Bilinear Q Function. EWRL 2011
- Krzysztof Krawiec, Marcin Szubert (2011). Learning N-Tuple Networks for Othello by Coevolutionary Gradient Search. GECCO 2011, pdf
- Krzysztof Krawiec, Wojciech Jaśkowski, Marcin Szubert (2011). Evolving small-board Go players using Coevolutionary Temporal Difference Learning with Archives. Applied Mathematics and Computer Science, Vol. 21, No. 4
- Marcin Szubert, Wojciech Jaśkowski, Krzysztof Krawiec (2011). Learning Board Evaluation Function for Othello by Hybridizing Coevolution with Temporal Difference Learning. Control and Cybernetics, Vol. 40, No. 3, pdf » Othello
- Hamid Reza Maei (2011). Gradient Temporal-Difference Learning Algorithms. Ph.D. thesis, University of Alberta, advisor Richard Sutton, pdf
2012
- Marco Wiering, Martijn Van Otterlo (2012). Reinforcement learning: State-of-the-art. Adaptation, Learning, and Optimization, Vol. 12, Springer
- Sjoerd van den Dries, Marco Wiering (2012). Neural-fitted TD-leaf learning for playing Othello with structured neural networks. IEEE Transactions on Neural Networks and Learning Systems, Vol. 23, No. 11
- Amir Ban (2012). Automatic Learning of Evaluation, with Applications to Computer Chess. Discussion Paper 613, The Hebrew University of Jerusalem - Center for the Study of Rationality, Givat Ram
- Adrien Couetoux, Olivier Teytaud, Hassen Doghmen (2012). Learning a Move-Generator for Upper Confidence Trees. ICS 2012, Hualien, Taiwan, December 2012 » UCT
- Robert Schapire, Yoav Freund (2012). Boosting: Foundations and Algorithms. MIT Press
- Arthur Guez, David Silver, Peter Dayan (2012). Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search. NIPS 2012, pdf
- Peter Dayan (2012). How to set the switches on this thing. Current Opinion in Neurobiology, Vol. 22, pdf
- István Szita (2012). Reinforcement Learning in Games. Chapter 17
2013
- Arthur Guez, David Silver, Peter Dayan (2013). Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search. Journal of Artificial Intelligence Research, Vol. 48, pdf
- Katja Grace (2013). Algorithmic Progress in Six Domains. Technical report 2013-3, Machine Intelligence Research Institute, Berkeley, CA, pdf, 5 Game Playing, 5.1 Chess, 5.2 Go, 9 Machine Learning
- Marcin Szubert, Wojciech Jaśkowski, Paweł Liskowski, Krzysztof Krawiec (2013). Shaping Fitness Function for Evolutionary Learning of Game Strategies. GECCO 2013, pdf
- Marcin Szubert, Wojciech Jaśkowski, Krzysztof Krawiec (2013). On Scalability, Generalization, and Hybridization of Coevolutionary Learning: a Case Study for Othello. IEEE Transactions on Computational Intelligence and AI in Games, Vol. 5, No. 3 » Othello
- Michiel van der Ree, Marco Wiering (2013). Reinforcement Learning in the Game of Othello: Learning Against a Fixed Opponent and Learning from Self-Play. ADPRL 2013
- Luuk Bom, Ruud Henken, Marco Wiering (2013). Reinforcement Learning to Train Ms. Pac-Man Using Higher-order Action-relative Inputs. ADPRL 2013 [25]
- Peter Auer, Marcus Hutter, Laurent Orseau (2013). Reinforcement Learning. Dagstuhl Reports, Vol. 3, No. 8, DOI: 10.4230/DagRep.3.8.1, URN: urn:nbn:de:0030-drops-43409
- Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller (2013). Playing Atari with Deep Reinforcement Learning. arXiv:1312.5602 [26] [27]
2014
2015 ...
- Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis (2015). Human-level control through deep reinforcement learning. Nature, Vol. 518
- Tobias Graf, Marco Platzner (2015). Adaptive Playouts in Monte Carlo Tree Search with Policy Gradient Reinforcement Learning. Advances in Computer Games 14
- Yuichiro Sato, Hiroyuki Iida, Jaap van den Herik (2015). Transfer Learning by Inductive Logic Programming. Advances in Computer Games 14
- Kokolo Ikeda, Takanari Shishido, Simon Viennot (2015). Machine-Learning of Shape Names for the Game of Go. Advances in Computer Games 14
- Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Veda Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, David Silver (2015). Massively Parallel Methods for Deep Reinforcement Learning. arXiv:1507.04296
- Matthew Lai (2015). Giraffe: Using Deep Reinforcement Learning to Play Chess. M.Sc. thesis, Imperial College London, arXiv:1509.01549v1 » Giraffe
- Hado van Hasselt, Arthur Guez, David Silver (2015). Deep Reinforcement Learning with Double Q-learning. arXiv:1509.06461
- Tom Schaul, John Quan, Ioannis Antonoglou, David Silver (2015). Prioritized Experience Replay. arXiv:1511.05952
- Miroslav Kubat (2015). An Introduction to Machine Learning. Springer
- Christian Wirth, Johannes Fürnkranz (2015). On Learning From Game Annotations. IEEE Transactions on Computational Intelligence and AI in Games, Vol. 7, No. 3
2016
- Dharshan Kumaran, Demis Hassabis, James L. McClelland (2016). What learning systems do intelligent agents need? Complementary Learning Systems Theory Updated. Trends in Cognitive Sciences, Vol. 20, No. 7, pdf
- Ziyu Wang, Nando de Freitas, Marc Lanctot (2016). Dueling Network Architectures for Deep Reinforcement Learning. arXiv:1511.06581
- Jialin Liu, Olivier Teytaud, Tristan Cazenave (2016). Fast seed-learning algorithms for games. CG 2016
- Omid E. David, Nathan S. Netanyahu, Lior Wolf (2016). DeepChess: End-to-End Deep Neural Network for Automatic Learning in Chess. ICANN 2016, Lecture Notes in Computer Science, Vol. 9887, Springer, pdf preprint » DeepChess [34] [35]
- Ian Goodfellow, Yoshua Bengio, Aaron Courville (2016). Deep Learning. MIT Press
- Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, Koray Kavukcuoglu (2016). Reinforcement Learning with Unsupervised Auxiliary Tasks. arXiv:1611.05397v1
2017
Forum Posts
1998 ...
2000 ...
2005 ...
2010 ...
2015 ...
External Links
Machine Learning
AI
Learning I
Learning II
Chess
Supervised Learning
AdaBoost from Wikipedia
Unsupervised Learning
Reinforcement Learning
TD Learning
Statistics
Naive Bayes classifier from Wikipedia
Probabilistic classification from Wikipedia
Outline of regression analysis from Wikipedia
Linear regression from Wikipedia
Logistic regression from Wikipedia
Normal distribution from Wikipedia
Pseudorandom number generator from Wikipedia
Pseudo-random number sampling from Wikipedia
Statistical randomness from Wikipedia
Markov Models
NNs
ANNs
- Artificial neural network from Wikipedia
- Artificial Neural Networks - Wikibooks
- Chess end games using Neural Networks
Topics
- Artificial neuron from Wikipedia
- Backpropagation from Wikipedia
- Connectionism from Wikipedia
- Convolutional neural network from Wikipedia
- Feedforward neural network from Wikipedia
- Fuzzy neural network - Scholarpedia
- Multilayer perceptron from Wikipedia
- Neocognitron from Wikipedia
- Perceptron from Wikipedia
- Recursive neural network from Wikipedia
- Rprop from Wikipedia
- Time delay neural network from Wikipedia
RNNs
- Recurrent neural network from Wikipedia
- Recurrent neural networks - Scholarpedia
- Recurrent Neural Networks by Jürgen Schmidhuber
- Boltzmann machine from Wikipedia
- Deep Learning from Wikipedia
- Echo state network
- Hopfield network from Wikipedia
- Hopfield network - Scholarpedia
- Long short term memory from Wikipedia
Blogs
The Single Layer Perceptron
Hidden Neurons and Feature Space
Training Neural Networks Using Back Propagation in C#
Data Mining with Artificial Neural Networks (ANN)
Courses
References
Up one Level