Jim Doran, Donald Michie (1966). Experiments with the Graph Traverser Program. Proc. Roy. Soc. (A), Vol. 294, pp. 235-259.
Donald Michie (1966). Game Playing and Game Learning Automata. Advances in Programming and Non-Numerical Computation, Leslie Fox (ed.), pp. 183-200. Oxford, Pergamon. » Includes Appendix: Rules of SOMAC by John Maynard Smith; introduces the Expectiminimax tree [36]
Donald Michie, Roger A. Chambers (1968). Boxes: An experiment on adaptive control. In E. Dale and D. Michie, editors, Machine Intelligence 2, Edinburgh: Oliver & Boyd, pp. 137-152.
Danny Kopec, Donald Michie (1983). Mismatch between machine representations and human concepts: dangers and remedies. FAST series No. 9 report. European Community, Brussels.
Donald Michie (1986). The superarticulacy phenomenon in the context of software manufacture. Proc. Roy. Soc. (A).
Donald Michie (November 11, 1923 – July 7, 2007) was a British researcher and pioneer in artificial intelligence and game theory. During World War II, Michie worked with Alan Turing at Bletchley Park, alongside Jack Good, Shaun Wylie et al. in the Newmanry, the section headed by Max Newman, contributing to the cracking of the German Lorenz cipher [4] [5]. In 1947-48, along with Wylie, Michie designed Machiavelli, a rival of Turing's Turochamp program [6] [7].
Michie was head of the University of Edinburgh's Department of Machine Intelligence from 1965 until 1985 [8], when he left to found the Turing Institute in Glasgow [9]. Michie researched game theory, computer games, and chess. He was a close friend of David Levy and was involved in the famous Levy Bet with John McCarthy, made during an AI workshop in Edinburgh [10], where Michie encouraged McCarthy to take up David Levy's challenge and even offered to share the bet on McCarthy's side [11].
Table of Contents
Photos
Machine Learning
Michie began his first experiments in machine learning in 1960. His tic-tac-toe machine MENACE (Machine Educable Noughts And Crosses Engine) demonstrated the basic principle of a self-reinforcing learning mechanism. Built from matchboxes and colored beads rather than run on a computer, in the traditions of Heath Robinson and Charles Babbage [24] [25], MENACE employed Michie's conceptually simple general-purpose learning algorithm BOXES [15] [16] [17] [18] [19], which could also discover robust control strategies for the pole balancing problem [20] [21] and was soon employed industrially to evolve strategies for automatic control, such as controlling a steel mill [22].
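The bead-and-matchbox mechanism can be sketched in a few lines of Python. The state encoding, initial bead counts, and reward values below are illustrative assumptions, not Michie's original settings; real MENACE could also empty a box entirely and resign, which this sketch avoids by flooring bead counts at one.

```python
import random

class Menace:
    """MENACE-style reinforcement learner: one 'matchbox' of weighted
    move choices ('beads') per board state, reinforced after each game."""

    def __init__(self, initial_beads=3):
        self.initial_beads = initial_beads
        self.boxes = {}    # board state -> {move: bead count}
        self.history = []  # (state, move) pairs played this game

    def choose(self, state, legal_moves):
        """Pick a move with probability proportional to its bead count."""
        box = self.boxes.setdefault(state, {m: self.initial_beads for m in legal_moves})
        pool = [m for m, n in box.items() for _ in range(n)]
        move = random.choice(pool)
        self.history.append((state, move))
        return move

    def reinforce(self, reward):
        """Adjust beads for every move of the finished game
        (e.g. +3 per move on a win, +1 on a draw, -1 on a loss)."""
        for state, move in self.history:
            self.boxes[state][move] = max(1, self.boxes[state][move] + reward)
        self.history.clear()
```

After enough games, moves that tend to precede wins accumulate beads and are chosen more often, which is the self-reinforcing mechanism described above.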
The Need for Search
In King and Rook Against King: Historical Background and a Problem on the Infinite Board at the first Advances in Computer Chess conference [26], Donald Michie showed two positions, identical except for one pawn, to demonstrate the need for search [27]:

rn3rk1/p1q1nppp/bp2p3/2ppP3/P2P4/2PB1N2/2P2PPP/R1BQK2R w KQ - ; bm Bxh7
rn3rk1/p1q1nppp/bp2p3/2ppP3/P2P4/2PB1N2/1P3PPP/R1BQK2R w KQ - ; am Bxh7

Chess Endgames
Quote by Maarten van Emden in I remember Donald Michie [28]:

In 1980 I spent another summer in Edinburgh as a guest of Donald Michie. Since the low point of 1975, thanks to assiduous and inventive joint pursuit of funding possibilities by Donald and Jean, the Machine Intelligence Research Unit was alive with work focused on chess endgames. There were students, including Tim Niblett and Alen Shapiro. Danny Kopec was there, perhaps formally as a student, but de facto as the resident chess consultant. Ivan Bratko visited frequently. Alen was the administrator of the dream computing environment of that time: a small PDP-11 running Unix. ...
Donald Michie demonstrated the Human Window phenomenon with chess endgames. He proposed a form of describing endgame knowledge that he called “advice” and described a formal language, Advice Language One [29], for expressing such advice. The language could be translated into a form that guided a computer to play the endgame at the level of skill of a chess expert. Soei Tan, Ivan Bratko and Danny Kopec were chess experts who used this framework to implement specific endgames.
Once again, I did not get it. I could not help acting in my then usual role of Prolog evangelist and wanted to demonstrate that the beauty of Prolog was that it rendered superfluous things like Advice Language One. Accordingly I wrote a Prolog program that played an end game using Advice in DM’s sense. DM generously allowed me my say in a paper in the Tenth Machine Intelligence workshop. It’s a nice paper, but it does not get it.
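Michie's notion of “advice” can be illustrated as an ordered list of rules, each pairing a precondition with a goal to pursue; the program follows the highest-priority piece of advice that applies. The predicates and KRK-flavoured rule names below are invented for illustration and are far simpler than Advice Language One itself.

```python
# Hypothetical sketch of advice-style play: pieces of advice are tried
# in priority order, and the first whose precondition holds selects the
# goal the program then pursues. All names here are illustrative.

def first_applicable(advice_list, position):
    """Return the goal of the highest-priority applicable advice."""
    for precondition, goal in advice_list:
        if precondition(position):
            return goal
    return "do-nothing"

# Toy king-and-rook-vs-king advice over a dictionary "position".
krk_advice = [
    (lambda p: p["mate_in_one"],       "deliver-mate"),
    (lambda p: p["king_distance"] > 2, "approach-king"),
    (lambda p: True,                   "squeeze-with-rook"),
]
```

The point of the representation is that the endgame knowledge lives in the ordered rules, not in an explicit move-by-move program.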
Solving Chess?
Quote by Kathleen Spracklen on Donald Michie and their experiments, excerpt from Oral History of Kathe and Dan Spracklen [30]:

One of the most thrilling times of my entire life was the month that Dr. Donald Michie spent with us in San Diego. And he, of course, is - was head of the computer science department at Oxford, I believe, for decades. He was on the original team with Turing that broke the Enigma code. And already he was quite an elderly gentleman when he came to work with us but that wasn't stopping him from having a very full schedule as the head of the I think it was the Turing Institute in Glasgow that headed up. Quite, totally an amazing human being.
Delightful and totally amazing. And he had this concept that he wanted to try out that he thought might possibly solve computer chess. And we spent a month exploring it. It was the idea of reaching a steady state. The idea was that you would establish a number of parameters of positional analysis and your program would score, independently score vast arrays of positions using this set of known parameters. And then the program would basically perform a cluster analysis and so you'd do it on a number of positions and on game after game after game of Grand Master Chess. You submitted we just we used hundreds of thousands of positions.
And then what it did was it took the evaluation that known chess theories said this position is worth this much. So we had an external evaluation because it came out of known Master Chess games. And then we had all of these parameters that our program was capable of evaluating and then you used this data to tune your weighing of the parameters. And you could also tune the weighting for different stages of the game. So at the opening, you could use a certain weight, mid-game, you could use a certain weight, in the midst of your king being attacked, could use a set of weights, when you're pressing an attack, you could use a set of weights, when they're past pawns on the board, you know, there were several different stages of the game that could have different weightings. And we used a program called Knowledge Seeker that helped you to determine these relative weightings. And so after a month of training the program, what you basically did was you take your total set of positions and you would use something like 80% of them as a training set and then the last 20 as the test set. And you'd find out, well, how did the program do in evaluating these positions it had never seen based on these that it had seen. And it did just a breathtaking job of determining the correct worth of the positions. And so we were so excited. We were going to turn it loose on its first play a game of chess. We were going to use this as the positional evaluator.
Yeah. It was, like, oh, it was breathtaking. And we watched the program play chess. It was- you could gasp for breath. No computer program ever played a game of chess like that. It looked like an incredibly promising seven-year-old. We lost the game in just a few moves but it lost it brilliantly. <laughter> It got its queen out there, it maneuvered its knight, it launched a king side attack, it sacrificed its queen. <laughter> Well, of course it sacrificed its queen. Do you realize, in every single Grand Master game of chess, when you sacrifice your queen, it's phenomenally brilliant. You are winning the game. So if you can find a way to get your queen out there and sacrifice her, well, you've won.
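The tuning scheme Spracklen describes (scoring positions on a fixed set of parameters, fitting the weights against evaluations drawn from master games, and holding back 20% of positions as a test set) is, in modern terms, supervised regression. A minimal sketch with synthetic data follows; the feature values, weights, and use of least squares are illustrative assumptions, not the Spracklens' actual procedure or the Knowledge Seeker program.

```python
import numpy as np

# Each position is scored on a fixed set of positional parameters
# (features); weights are fitted so the weighted sum matches an
# external evaluation taken from master games. Data here is synthetic.

rng = np.random.default_rng(0)

n_positions, n_params = 1000, 5
features = rng.normal(size=(n_positions, n_params))  # parameter scores
true_weights = np.array([1.0, 5.0, 3.0, 0.5, 9.0])   # hidden "theory"
target_eval = features @ true_weights                # master-game labels

# 80/20 train/test split, as in the account above.
split = int(0.8 * n_positions)
X_train, X_test = features[:split], features[split:]
y_train, y_test = target_eval[:split], target_eval[split:]

# Least-squares fit of the weights on the training set.
weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# How well does the fit evaluate positions it has never seen?
test_error = np.max(np.abs(X_test @ weights - y_test))
```

Separate weight sets per game stage (opening, middlegame, king under attack, and so on) would simply repeat this fit on positions labelled with each stage.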
AI as Sport
Quote by John McCarthy from AI as Sport [31] [32]:

Besides AI work aimed at tournament play, particular aspects of the game have illuminated the intellectual mechanisms involved. Barbara Liskov demonstrated that what chess books teach about how to win certain endgames is not a program but more like a predicate comparing two positions to see if one is an improvement on the other. Such qualitative comparisons are an important feature of human intelligence and are needed for AI. Donald Michie, Ivan Bratko, Alen Shapiro, David Wilkins, and others have also used chess as a Drosophila to study intelligence. Newborn ignores this work, because it is not oriented to tournament play.
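The "predicate comparing two positions" that McCarthy describes can be sketched as a qualitative, lexicographic comparison over hand-picked features; the features and their priority order below are invented for illustration.

```python
# Sketch of endgame knowledge as a comparison predicate rather than a
# program: better(p1, p2) asks whether p1 improves on p2. The features
# and their priority order are illustrative assumptions.

FEATURES = ["material", "king_activity", "pawn_advance"]  # priority order

def better(p1, p2):
    """True if position p1 is an improvement on position p2."""
    for f in FEATURES:
        if p1[f] != p2[f]:
            return p1[f] > p2[f]
    return False  # no feature distinguishes them: not an improvement
```

A player guided by such a predicate needs no move-by-move recipe: it generates candidate moves and keeps those leading to positions the predicate judges better than the current one.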
See also
Selected Publications
[33] [34]

1945
1960 ...
1970 ...
1980 ...
1990 ...
External Links
References