Artificial Intelligence (AI),
the intelligence of machines and the branch of computer science that aims to create it. While 'machine intelligence' was already mentioned by Alan Turing in the 1940s during his research at Bletchley Park [1] [2], the term 'artificial intelligence' was coined by John McCarthy in the proposal for the 1956 Dartmouth Conference [3]. In its early days, Computer Chess was called the Drosophila of Artificial Intelligence. In the 1970s, when brute-force programs started to dominate and competitive and commercial aspects took precedence over using chess as a scientific domain, the AI community increasingly lost interest in chess [4]. Disagreeing with the AI establishment in the 1980s, Peter W. Frey concluded that the AI community should follow computer chess methods rather than the other way around [5].
Drosophila of AI
Donald Michie
Quote from I remember Donald Michie by Maarten van Emden [7]:
In the 1970s DM was fond of proclaiming “Chess, the Drosophila Melanogaster of Artificial Intelligence”. A public pronouncement of his point of view can be found in an interview with H.J. van den Herik held in 1981 (“Computerschaak, schaakwereld en kunstmatige intelligentie” by H.J. van den Herik, Academic Service, 1983) [8]. It is a long interview, from which I quote DM’s answer to the question: “What do you think about the applicability of the research done in computer chess?”
The applicability is I think enormous and quite critical. Scientific study of computer chess, which includes the technological work, but goes far beyond that, is the most important scientific study that is going in the world at present. In the same sense, if I were asked what was the most important study in process during the first world war, I would say the genetic breeding experiments on the drosophila fruit fly by Morgan and his colleagues. The analogy is very good. The final impact of the early work in laying down the basic theoretical framework for the subject was just enormous, unimaginable. We see now the industrial take-off of genetic engineering which is the delayed final outcome for human society of the fly-breeding work. The use of chess now as a preliminary to the knowledge engineering and cognitive engineering of the future is exactly similar, in my opinion, to the work on drosophila. It should be encouraged in a very intense way, for these reasons.
John McCarthy
Quote by John McCarthy from What is Artificial Intelligence? [9] [10] [11]:
Alexander Kronrod, a Russian AI researcher, said 'Chess is the Drosophila of AI.' He was making an analogy with geneticists' use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs. Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.
Anthony Cozzie
Anthony Cozzie in a forum discussion about McCarthy's statement [12] [13]:
First, the author of this quote is simple WRONG. The generally accepted theory of how humans play chess is that the brain does fuzzy matching on a database of several hundred thousand positions. The amount of computation needed to do that is FAR greater than the amount expended by a "conventional" AB searcher, and yet the computer plays MUCH better than the average human. The simple fact of the matter, which you refuse to recognize, is that AB-search with reasonable heuristics is the most efficient way to play chess with Von Neuman machines.
Secondly, the existence of current amateur and commercial programs does nothing to prevent you from writing whatever kind of chess playing agent you want. If you want to experiment, no one is stopping you or him from applying to NSF for research money and giving it a shot. The existence of "fruit fly races" - and his fruit fly analogy is totally flawed. A better analogy would be that a geneticist decided to make a fruit fly that could run faster than a human - does nothing to prevent casual study of one's own fruit flies.
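The 'AB-search' Cozzie refers to is plain alpha-beta minimax search. Purely as a point of reference, here is a minimal sketch of a negamax formulation with alpha-beta pruning; the game interface (legal_moves, make, unmake, evaluate, is_terminal) is a hypothetical placeholder, not the API of any engine discussed on this page.

# Minimal negamax search with alpha-beta pruning (illustrative sketch only).
# The game object and its methods are hypothetical placeholders.

INFINITY = 10**9

def alphabeta(game, depth, alpha, beta):
    """Return the negamax score for the side to move, searching `depth` plies."""
    if depth == 0 or game.is_terminal():
        return game.evaluate()              # static evaluation from the mover's point of view
    best = -INFINITY
    for move in game.legal_moves():
        game.make(move)
        score = -alphabeta(game, depth - 1, -beta, -alpha)
        game.unmake(move)
        if score > best:
            best = score
            if best > alpha:
                alpha = best
        if alpha >= beta:                   # refutation found: prune the remaining moves
            break
    return best

Real engines layer move ordering, quiescence search, transposition tables and reductions such as LMR on top of this core loop, but the pruning logic itself is no more than the sketch above.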
Heuristic Programming
Remembering Kronrod
Quote from Remembering A.S. Kronrod by Evgenii Landis and Isaak Yaglom [14]:
Only in 1955 did a real opportunity arise for A.S. Kronrod to work with an electronic computer. It was the M2 computer constructed by I.S. Bruk, M.A. Kartsev, and N.Ya. Matyukhin in the laboratory of the Institute of Energy named after Krzhizhanovsky and directed by I.S. Bruk. This laboratory later became the Institute for Electronic Control Machines. The mathematics/machine interface was developed by A.L. Brudno, a great personal and like-minded friend of A.S. Kronrod.
When he started with enthusiasm to program the M2 machine, A.S. Kronrod quickly came to the conclusion that computing is not the main application of computers. The main goal is to teach the computer to think, i.e., what is now called "artificial intelligence" and in those days "heuristic programming".
A.S. Kronrod captivated a large group of mathematicians and physicists (G.M. Adelson Velsky, A.L. Brudno, M.M. Bongard, E.M. Landis, N.N. Konstantinov, and others). Although some of them had arrived at this kind of problems on their own, they unconditionally accepted his leadership. In the room next to the one housing the M2 machine the work of the new Kronrod seminar started. At the gatherings there were heated discussions on pattern recognition problems (this work was led by M.M. Bongard; versions of his program "Kora" are still functioning), transportation problems (the problem was introduced to the seminar and actively worked on by A.L. Brudno), problems of automata theory, and many other problems.
Intellectual Foundations
Quote from Biography A.S. Kronrod by Alexander Yershov [15]:
In 1958, Kronrod, Adelson-Velsky, and Landis chose the card game Durak ("подкидной дурак") as the intellectual basis for developing heuristic game programming. The program itself was a fiasco, but the basic principles (board games, search techniques, and limited depth) were formulated. The laboratory's further research in the field of game playing culminated in the first ever chess match between the Soviet institute's program and the best American program, developed at Stanford University under the direction of John McCarthy. The match, played by telegraph over four games, ended 3-1 in favor of our institute. At the time, chess became a guinea pig for all programmers interested in artificial intelligence.
The 12th IJCAI
In a panel discussion at the 12th International Joint Conference on Artificial Intelligence, Robert Levinson, Feng-hsiung Hsu, Tony Marsland, Jonathan Schaeffer, and David Wilkins commented on the relationship between computer chess and AI research [16]. As pointed out by Peter W. Frey, many emphasized the discrepancy between both domains and seemed to lament the inferior status of computer chess work - some excerpts are quoted by Frey in his Computer Chess vs. AI paper [17].
Jonathan Schaeffer
Sadly, most of the work currently being done on computer chess programs is engineering, not science. For example, the engineering of special-purpose VLSI chips to increase the speed of a chess program only underlines the importance chess programmers attach to speed. In my opinion, conventional computer-chess methods will yield little of further interest to the AI community.
Tony Marsland
Pruning by analogy is a powerful general-purpose tool and if developed satisfactorily for a perfect information game like chess would almost certainly be applicable to related decision-tree searches...
It is remarkable that no significant improvement has been made to that method, despite the passage of 15 years. Not even attempts to implement simple forms of the idea in serious chess programs.
Robert Levinson
Psychological evidence indicates that human chess players search very few positions, and base their positional assessments on structural/perceptual patterns learned through experience.
The main objectives of the project are to demonstrate capacity of the system to learn, to deepen our understanding of the interaction of knowledge and search, and to build bridges in this area between AI and cognitive science.
David Wilkins
Hardware advances have made chess a less fertile ground for addressing the basic issues of AI. The game is small enough that brute-force search techniques have dominated competitive computer chess, and I see little AI interest in squeezing out the last few hundred points on the chess ratings.
AI as Sport
John McCarthy from AI as Sport, 1997 [18], in a review of Monty Newborn's Kasparov versus Deep Blue [19]:
Now that computers have reached world-champion level, it is time for chess to become a Drosophila again. Champion-level play is possible with enormously less computation than Deep Blue and its recent competitors use. Tournaments should admit programs only with severe limits on computation. This would concentrate attention on scientific advances. Perhaps a personal computer manufacturer would sponsor a tournament with one second allowed per move on a machine of a single design. Tournaments in which players use computers to check out lines of play would be man-machine collaboration rather than just competition.
Besides AI work aimed at tournament play, particular aspects of the game have illuminated the intellectual mechanisms involved. Barbara Liskov demonstrated that what chess books teach about how to win certain endgames is not a program but more like a predicate comparing two positions to see if one is an improvement on the other. Such qualitative comparisons are an important feature of human intelligence and are needed for AI. Donald Michie, Ivan Bratko, Alen Shapiro, David Wilkins, and others have also used chess as a Drosophila to study intelligence. Newborn ignores this work, because it is not oriented to tournament play.
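To make the idea of such a comparison predicate concrete, here is a deliberately simplified sketch (an illustration only, not Liskov's construction): for a hypothetical king-and-rook versus king ending, one position counts as an improvement over another if the defending king has been pushed closer to the edge, or, failing that, if the attacking king has closed in. The Position fields (attacking_king, defending_king as 0..7 file/rank pairs) are invented for the example.

# Hedged toy sketch of an "improvement predicate" for a KRK-style ending.
# The Position fields are hypothetical; a real predicate would also have to
# exclude stalemates and rook-loss positions.

def edge_distance(square):
    """Distance of a square from the nearest board edge (0 = on the edge)."""
    file, rank = square
    return min(file, 7 - file, rank, 7 - rank)

def king_distance(pos):
    """Chebyshev distance between the two kings."""
    (f1, r1), (f2, r2) = pos.attacking_king, pos.defending_king
    return max(abs(f1 - f2), abs(r1 - r2))

def is_improvement(p, q):
    """True if position q looks like progress relative to p (illustrative heuristic only)."""
    if edge_distance(q.defending_king) < edge_distance(p.defending_king):
        return True
    if edge_distance(q.defending_king) > edge_distance(p.defending_king):
        return False
    return king_distance(q) < king_distance(p)

Chaining moves that satisfy such a predicate is roughly what the advice in chess endgame books amounts to, without ever spelling out a complete program.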
Making Computer Chess Scientific
A further note by John McCarthy from Making Computer Chess Scientific [20]:
AI has two tools for tackling problems. One is to use methods observed in humans, often observed only by introspection, and the other is to invent methods using ideas of computer science without worrying about whether humans do it this way. Chess programming employs both. Introspection is an unreliable way of determining how humans think, but introspectively suggested methods are valid as AI if they work.
Much of the mental computation done by chess players is invisible to the player and to outside observers. Patterns in the position suggest what lines of play to look at, and the pattern recognition processes in the human mind seem to be invisible to that mind. However, the parts of the move tree that are examined are consciously accessible.
It is an important advantage of chess as a Drosophila for AI that so much of the thought that goes into human chess play is visible to the player and even to spectators. When chess players argue about what is the right move in a position, they follow out lines of play, i.e. argue explicitly about parts of the move tree. Moreover, when a player is found to have made a mistake, it is almost always a failure to follow out a certain line of play rather than a misevaluation of a final position.
Go, the new Drosophila of AI
A quote by Gian-Carlo Pascutto on AI in Go and Chess [21]:
There is no significant difference between an alpha-beta search with heavy LMR and a static evaluator (current state of the art in chess) and an UCT searcher with a small exploration constant that does playouts (state of the art in go).
The shape of the tree they search is very similar. The main breakthrough in Go the last few years was how to backup an uncertain Monte Carlo score. This was solved. For chess this same problem was solved around the time quiescent search was developed.
Both are producing strong programs and we've proven for both the methods that they scale in strength as hardware speed goes up.
So I would say that we've successfully adopted the simple, brute force methods for chess to Go and they already work without increases in computer speed. The increases will make them progressively stronger though, and with further software tweaks they will eventually surpass humans.
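For comparison with the alpha-beta sketch earlier on this page, here is a minimal sketch of the UCT selection and backup steps Pascutto alludes to; the Node structure and the playout that produces the result are assumptions for this example, not the internals of any particular Go program.

import math

# Illustrative UCT (Upper Confidence bounds applied to Trees) sketch: selection
# and backup only. Node fields and the playout policy are hypothetical.

EXPLORATION = 0.7          # a "small exploration constant", as in the quote above

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []     # expanded children (assumed visited at least once)
        self.visits = 0        # number of playouts routed through this node
        self.value_sum = 0.0   # sum of playout results from this node's point of view

def uct_select(node):
    """Select the child maximizing mean playout result plus the UCB exploration term."""
    log_parent = math.log(node.visits)
    return max(node.children,
               key=lambda c: c.value_sum / c.visits
                             + EXPLORATION * math.sqrt(log_parent / c.visits))

def backup(node, result):
    """Back a playout result in [0, 1] up the tree, flipping sides at every ply."""
    while node is not None:
        node.visits += 1
        node.value_sum += result
        result = 1.0 - result      # a win for one side is a loss for the other
        node = node.parent

The backup loop is exactly the point Pascutto highlights: the noisy Monte Carlo result is averaged into every node on the path, with the score flipped at each ply because the players alternate.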
Poker, the next Challenge
Graham Kendall and Jonathan Schaeffer on Poker [22]:
For many years Chess (and perhaps more recently Go) has served as the Drosophila of AI research. Decades of research culminated in the defeat of Garry Kasparov by DEEP BLUE in May 1997. There is still an active research community that uses Chess as a test-bed for AI research (as seen in this journal), but the game is limited in the types of challenges that it can offer to the AI researcher. Being a game of perfect information (both players know the full state of the game at any given point) with a relatively small branching factor, researchers have reduced the challenge of building a strong AI for Chess to merely one of deep brute-force search. The research challenges are to create a good evaluation function, and to design an effective search algorithm. This “solution” to Chess is unappealing to many AI purists. Nevertheless, alternative AI approaches have been largely ineffective.
Poker, as an experimental test-bed for exploring AI, is a much richer domain than Chess (and Go).
Imperfect information. Parts of the game state (opponent hands) are not known.
Multiple players. Many popular poker variants can be played with up to 10 players.
Stochastic. The dealing of the cards adds a random element to the game.
Deception. Predictable play can be exploited by an opponent. Hence, deceptive play is an essential ingredient of strong play (e.g., bluffing).
Opponent modelling. Observing your opponent(s) and adjusting your play to exploit (perceived) opponent tendencies is necessary to maximize poker winnings.
Information sparsity. Many poker hands end in the players not revealing their cards. This limits the amount of data available to learn from.
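As a toy numerical illustration of the first three points in the list above (a sketch of my own, not taken from Kendall and Schaeffer), a poker decision has to be evaluated against a distribution of hidden opponent holdings rather than a single known position; the crude high-card "evaluator" below is purely hypothetical.

import random

# Toy illustration of imperfect information and stochasticity: the opponent's
# two cards are hidden, so the estimate averages over sampled deals. The hand
# strength metric is a deliberately crude stand-in for a real evaluator.

RANKS = "23456789TJQKA"
DECK = [rank + suit for rank in RANKS for suit in "cdhs"]

def high_card_strength(hand):
    """Crude hypothetical strength: sum of the rank indices of the cards."""
    return sum(RANKS.index(card[0]) for card in hand)

def estimate_win_probability(my_hand, trials=10000, seed=0):
    """Estimate P(win) by sampling the unknown two-card opponent holding."""
    rng = random.Random(seed)
    unseen = [card for card in DECK if card not in my_hand]
    wins = 0
    for _ in range(trials):
        opponent = rng.sample(unseen, 2)    # hidden information, sampled uniformly
        wins += high_card_strength(my_hand) > high_card_strength(opponent)
    return wins / trials

print(estimate_win_probability(["Ah", "7d"]))   # how often ace-seven beats a random hand here

Opponent modelling and deception enter once the uniform sampling over unseen cards is replaced by a model of how a particular opponent actually plays and bets.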
Bill Gates on AI
Remark by Bill Gates at the 17th IJCAI 2001, Seattle, Washington, USA, August 7, 2001 [23]:
Microsoft was founded about 25 years ago, and I can remember at the time thinking, "Well, if I go out and do this really commercial stuff, I’m going to miss these big advances in AI that will be coming very soon." (Laughter.) And so I come from the school of AI optimist. You know, I can remember being at Harvard and back then AI was the Greenblatt Chess Program and Maxima and Eliza and people literally felt that within five to ten years that some of these tough problems would be solved.
Subfields
Deep Learning
See also
Selected Publications
1945 ...
1950 ...
1955 ...
- Allen Newell (1955). The Chess Machine: An Example of Dealing with a Complex Task by Adaptation. Proceedings Western Joint Computer Conference, pp. 101-108. Reprinted (1988) in Computer Chess Compendium
- Allen Newell, Cliff Shaw, Herbert Simon (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, pp. 256-264 [25]
1960 ...
1965 ...
1970 ...
1975 ...
- John Henry Holland (1975). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. amazon.com
1980 ...
- A. Harry Klopf (1982). The Hedonistic Neuron: A Theory of Memory, Learning, and Intelligence. Hemisphere Publishing Corporation, University of Michigan
- Danny Kopec, Donald Michie (1983). Mismatch between machine representations and human concepts: dangers and remedies. FAST series No. 9 report, European Community, Brussels
1985 ...
1990 ...
- John McCarthy (1990). Chess as the Drosophila of AI. Computers, Chess, and Cognition, pp. 227-237 [31]
- Mikhail Donskoy, Jonathan Schaeffer (1990). Perspectives on Falling from Grace. Computers, Chess, and Cognition
- Yves Kodratoff, Ryszard Michalski (1990). Machine Learning: An Artificial Intelligence Approach, Volume III. Morgan Kaufmann, google books
- Richard Fikes (1990). AI and Software Engineering - Managing Exploratory Programming. AAAI 1990
- Nicholas V. Findler (1990). Contributions to a Computer-Based Theory of Strategies. Springer
1991
- Herbert Simon (1991). Artificial Intelligence: Where Has It Been, Where is it Going? IEEE Transactions on Knowledge and Data Engineering, Vol. 3, No. 2
- Robert Levinson, Feng-hsiung Hsu, Tony Marsland, Jonathan Schaeffer, David Wilkins (1991). The Role of Chess in Artificial Intelligence Research. IJCAI 1991, pdf, also in ICCA Journal, Vol. 14, No. 3, pdf
- Peter W. Frey (1991). Memory-Based Expertise: Computer Chess vs. AI. ICCA Journal, Vol. 14, No. 4
- Dinesh Gadwal, Jim Greer, Gordon McCalla (1991). UMRAO: A Chess Endgame Tutor. ARIES Laboratory, Department of Computational Science, University of Saskatchewan, IJCAI-91, pdf
- Robert W. Howard (1991). All about Intelligence: Human, Animal, and Artificial. New South Wales University Press Ltd, amazon.com
1992
- Patrick Winston (1992). Artificial Intelligence. Third Edition
- Peter Norvig (1992). Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. Morgan Kaufmann
1993
- Helmut Horacek (1993). Computer Chess, its Impact on Artificial Intelligence. ICCA Journal, Vol. 16, No. 1 » WCCC 1992 - Workshop
- Matthew L. Ginsberg (1993). Essentials of artificial intelligence. Morgan Kaufmann Publishers
- Paul S. Rosenbloom, John E. Laird, Allen Newell (1993). The SOAR Papers: Research on Integrated Intelligence. MIT Press, amazon
1994
- Boris Stilman (1994). A Linguistic Geometry for Space Applications. Proc. of the 1994 Goddard Conference on Space Applications of Artificial Intelligence, pp. 87-101, NASA Goddard Space Flight Center, Greenbelt, MD, USA
1995 ...
- Herbert Simon (1995). Explaining the Ineffable: AI on the Topics of Intuition, Insight and Inspiration. IJCAI 1995, pdf
- Herbert Simon (1995). Artificial Intelligence: An Empirical Science. Artificial Intelligence, Vol. 77, No. 1
- Jacques Pitrat (1995). AI Systems Are Dumb Because AI Researchers Are Too Clever. ACM Computing Surveys, Vol. 27, No. 3
- Edward Feigenbaum, Julian Feldman (eds.) (1995). Computers and Thought. MIT Press
- Jay Burmeister, Janet Wiles (1995). The Challenge of Go as a Domain for AI Research: A Comparison Between Go and Chess. In Proceedings of the Third Australian and New Zealand Conference on Intelligent Information Systems, IEEE Western Australia Section, pdf
1996
- Herbert Simon (1996). The Sciences of the Artificial. MIT Press, 3rd Edition 1996, amazon
- Edward A. Feigenbaum (1996). How the “What” Becomes the “How”. Communications of the ACM, Vol. 39, No. 5, pdf hosted by The Computer History Museum
1997
- Tony Marsland, Yngvi Björnsson (1997). From MiniMax to Manhattan. Deep Blue Versus Kasparov: The Significance for Artificial Intelligence, AAAI Workshop, pp. 31-36, pdf
- John McCarthy (1997). Chess as the Drosophila of AI. Computer Science Department, Stanford University, condensed version of the 1990 paper, pdf
- Richard Korf (1997). Does DEEP BLUE use Artificial Intelligence? ICCA Journal, Vol. 20, No. 4 [32]
- John McCarthy (1997). AI as Sport. Science, Vol. 276
- Santos Gerardo Lazzeri, Rachelle Heller (1997). Application of Fuzzy Logic and Case-Based Reasoning to the Generation of High-Level Advice in Chess. Advances in Computer Chess 8
1998
- Franz-Günter Winkler, Johannes Fürnkranz (1998). A Hypothesis on the Divergence of AI Research. ICCA Journal, Vol. 21, No. 1, pdf
- Toshinori Munakata (1998). Fundamentals of the New Artificial Intelligence: Beyond Traditional Paradigms. 1st edition, Springer, 2nd edition 2008
1999
2000 ...
2005 ...
- Dap Hartmann (2005). The True Holy Grail of Artificial Intelligence. ICGA Journal, Vol. 28, No. 1
- David Levy (2005). Robots Unlimited: Life in a Virtual Age. AK Peters
- Marcus Hutter (2005). Universal Artificial Intelligence. Sequential Decisions based on Algorithmic Probability, Springer
- Pieter Spronck (2005). Adaptive Game AI. Ph.D. thesis, Maastricht University, pdf
2006
- Azlan Iqbal (2006). Is Aesthetics Computable? ICGA Journal, Vol. 29, No. 1, pdf
2007
- Edward Feigenbaum (2007). Happy Silver Anniversary, AI! AI Magazine, Vol. 27, No. 4
- Marvin Minsky (2007). The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon & Schuster [34]
- David Levy (2007). Love and Sex With Robots: The Evolution of Human-Robot Relationships. Harper Collins, amazon.com
2008
- Guillaume Chaslot, Sander Bakkes, István Szita, Pieter Spronck (2008). Monte-Carlo Tree Search: A New Framework for Game AI. pdf
- István Szita, Marc Ponsen, Pieter Spronck (2008). Keeping Adaptive Game AI interesting. CGames 2008, pdf draft, pdf
- Brian Schwab (2008). AI Game Engine Programming. Second Edition, amazon » Faile
- Mark Watson (2008). Practical Artificial Intelligence Programming With Java. Third Edition, pdf [35] » Java
- Michael Thielscher (2008). Artificial Intelligence and General Game Playing. Workshop Chess and Mathematics » General Game Playing
- Toshinori Munakata (2008). Fundamentals of the New Artificial Intelligence: Neural, Evolutionary, Fuzzy and More. 2nd edition, Springer, 1st edition 1998
- Brian Bloomfield, Theodore Vurdubakis (2008). IBM's Chess Players: On AI and Its Supplements. The Information Society, Vol. 24, No. 2 » Deep Blue
2009
2010 ...
- Sander Bakkes (2010). Rapid Adaptation of Video Game AI. Ph.D. thesis, Tilburg University, pdf [38]
- Pieter Spronck (2010). Adaptive Game AI. Tilburg University, pdf
- Paul S. Rosenbloom (2010). An Architectural Approach to Statistical Relational AI. Statistical Relational Artificial Intelligence 2010
2011
- Joel Veness (2011). Approximate Universal Artificial Intelligence and Self-Play Learning for Games. Ph.D. thesis, University of New South Wales, supervisors: Kee Siong Ng, Marcus Hutter, Alan Blair, William Uther, John Lloyd; pdf
- Stephen Lucci, Danny Kopec (2011). Artificial Intelligence in the 21st Century. Mercury Learning and Information
- Kevin Warwick (2011). Artificial Intelligence: The Basics. Taylor & Francis
2013
- Kieran Greer (2013). Is Intelligence Artificial? arXiv:1403.1076
2014
2015 ...
- Susan L. Epstein (2015). Wanted: Collaborative Intelligence. Artificial Intelligence, Vol. 221
- Jaap van den Herik (2015). Computers and Intuition. ICGA Journal, Vol. 38, No. 4
2016
- Jaap van den Herik (2016). Intuition is Programmable. Valedictory Address from Tilburg University
- Azlan Iqbal, Matej Guid, Simon Colton, Jana Krivec, Shazril Azman, Boshra Haghighi (2016). The Digital Synaptic Neural Substrate: A New Approach to Computational Creativity. arXiv:1507.07058 » Machine Creativity
2017
AI Game Programming Wisdom
Forum Posts
External Links
Wikipedia
The abandonment of connectionism in 1969 [41] [42]
Physical symbol system from Wikipedia
Digital organism from Wikipedia
Virtual world from Wikipedia
Second Life from Wikipedia
AI in Media
BBC - Future - The cyborg chess players that can’t be beaten by Chris Baraniuk, December 04, 2015 » David Levy, Boris Alterman, Shay Bushinsky, Mark Lefler
Famous AI Programs
Machine Creativity
Associations
AITopics / HomePage
AITopics / AINews
Journals
ScienceDirect - Artificial Intelligence - Online Access
Artificial Intelligence (journal) from Wikipedia
Online Courses
6.034 Artificial Intelligence, Spring 2005
6.034 Artificial Intelligence, Fall 2010, Video Lectures by Patrick Winston
John McCarthy
Misc
AI on the Web
References