
The game of Go has attracted game researchers and programmers as an ambitious AI challenge. Albert Zobrist was a pioneer who wrote the first Go program in 1968 as part of his Ph.D. thesis on pattern recognition [1]. Chess programmers, among others Rémi Coulom and Gian-Carlo Pascutto, became successful Go programmers with their programs Crazy Stone and Leela, respectively. Competitive computer Go, as organized by the ICGA [2], is played on 9x9 boards as well as on the default 19x19 grid.
Since Go lacks a simple evaluation function based mainly on counting material, attempts to apply techniques and algorithms similar to those used in chess were less successful. The breakthrough in computer Go was accomplished by Monte-Carlo tree search and deep learning.
19x19 Go board [3]


Monte-Carlo Go

After early trials by Bernd Brügmann, who applied Monte Carlo methods to a Go playing program in 1993 [4], developments since the mid-2000s by Bruno Bouzy [5] and by Rémi Coulom, who coined the term Monte-Carlo Tree Search [6], in conjunction with UCT (Upper Confidence bounds applied to Trees) introduced by Levente Kocsis and Csaba Szepesvári [7], led to a breakthrough in computer Go [8].
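The UCT selection rule can be stated compactly: from a node whose children have been visited N times in total, descend into the child maximizing its mean reward plus an exploration bonus c * sqrt(ln N / n). A minimal sketch in Python; the exploration constant and the toy statistics are illustrative, not taken from any particular program:

```python
import math

def uct_select(children, c=1.4):
    """Pick the index of the child maximizing the UCB1 score used by UCT:
    exploitation (mean reward) plus an exploration bonus that shrinks
    as a child is visited more often.  `children` is a list of
    (wins, visits) tuples; every child has been visited at least once."""
    total = sum(visits for _, visits in children)

    def score(child):
        wins, visits = child
        return wins / visits + c * math.sqrt(math.log(total) / visits)

    return max(range(len(children)), key=lambda i: score(children[i]))

# The rarely visited second child gets a large exploration bonus
# and is selected despite its lower win rate:
print(uct_select([(6, 10), (1, 2), (4, 8)]))  # -> 1
```

As the visit count of a child grows, its bonus shrinks and the search concentrates on the empirically best moves, which is what makes the backed-up Monte Carlo scores converge.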


As mentioned by Ilya Sutskever and Vinod Nair in 2008 [9], convolutional neural networks are well suited for problems with a natural translation invariance, such as object recognition. Go has some translation invariance, because if all the stones on a hypothetical Go board are shifted to the left, then the best move also shifts (with the exception of stones on the boundary of the board). Many applications of neural networks to Go have used convolutional neural networks, such as those of Nicol N. Schraudolph et al. [10], Erik van der Werf et al. [11], and Markus Enzenberger [12], among others.
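This translation property is exactly what a convolutional filter provides: because one small kernel is slid across the whole board, a shifted input yields a correspondingly shifted feature map. A minimal pure-Python sketch; the 5x5 board and the "adjacency" kernel are illustrative, not taken from any of the cited systems:

```python
def conv2d(board, kernel):
    """Valid 2D cross-correlation of a board (list of lists) with a
    small kernel -- a stand-in for a single convolutional filter."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(board), len(board[0])
    return [[sum(board[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

# A lone stone in the middle of an empty 5x5 board ...
board = [[0] * 5 for _ in range(5)]
board[2][2] = 1
kernel = [[0, 1, 0],
          [1, 0, 1],
          [0, 1, 0]]  # fires on points adjacent to a stone
a = conv2d(board, kernel)

# ... and the same position shifted one point to the left:
shifted = [[0] * 5 for _ in range(5)]
shifted[2][1] = 1
b = conv2d(shifted, kernel)

# The feature map shifts with the input, except at the boundary --
# mirroring the caveat about stones on the edge of the board.
assert all(b[i][j] == a[i][j + 1] for i in range(3) for j in range(2))
```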

In 2014, two teams independently investigated whether deep convolutional neural networks [13] could be used to directly represent and learn a move evaluation function for the game of Go. Christopher Clark and Amos Storkey trained an 8-layer convolutional neural network by supervised learning from a database of human professional games, which, without any search, defeated the traditional search program GNU Go in 86% of the games [14] [15] [16] [17] [18]. In their paper Move Evaluation in Go Using Deep Convolutional Neural Networks [19], Chris J. Maddison, Aja Huang, Ilya Sutskever, and David Silver report that they trained a large 12-layer convolutional neural network in a similar way to beat GNU Go in 97% of the games, and to match the performance of a state-of-the-art Monte-Carlo Tree Search that simulates a million positions per move [20].
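Supervised move prediction of this kind is typically trained with a softmax cross-entropy loss: the network emits one logit per board point and is pushed to assign high probability to the move the human expert actually played. A dependency-free sketch; the four-point "board" and the logit values are purely illustrative:

```python
import math

def move_prediction_loss(logits, target_index):
    """Softmax cross-entropy for move prediction: convert per-point
    logits into a probability distribution over moves, then penalize
    low probability on the expert's actual move (target_index)."""
    m = max(logits)                              # stabilize the exponentials
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[target_index]), probs

# The expert played point 0, where the network already puts most mass,
# so the loss is small; training lowers it further.
loss, probs = move_prediction_loss([2.0, 0.5, 0.1, 0.1], target_index=0)
print(round(loss, 3), probs[0] == max(probs))
```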


In 2015, a team affiliated with Google DeepMind around David Silver, Aja Huang, Chris J. Maddison, and Demis Hassabis, supported by Google researchers John Nham and Ilya Sutskever, built a Go playing program dubbed AlphaGo, combining Monte-Carlo tree search with their 12-layer networks [21]: the “policy network” to select the next move, and the “value network” to predict the winner of the game. The neural networks were trained on 30 million moves from games played by human experts, until the policy network could predict the human move 57 percent of the time. AlphaGo achieved a high winning rate against other Go programs, and defeated European Go champion Fan Hui [22] in October 2015 with a 5 - 0 score [23] [24]. From March 9 to 15, 2016, AlphaGo won a $1M five-game challenge match in Seoul versus Lee Sedol 4 - 1 [25] [26] [27].
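Inside AlphaGo's tree search, the policy network's move probabilities act as priors steering the search toward promising moves, while value estimates supply the exploitation term. The following is a schematic of a PUCT-style selection rule in the spirit of the published search [21], with made-up statistics; the constant `c_puct` and the numbers are illustrative, not AlphaGo's actual values:

```python
import math

def puct_select(stats, c_puct=1.0):
    """PUCT-style move selection: each candidate holds
    (prior P from a policy network, visit count n, total value W).
    Score = Q + c_puct * P * sqrt(total_visits) / (1 + n), so moves
    with a high prior are explored even before they accumulate visits."""
    total_visits = sum(n for _, n, _ in stats)

    def score(entry):
        p, n, w = entry
        q = w / n if n else 0.0
        u = c_puct * p * math.sqrt(total_visits) / (1 + n)
        return q + u

    return max(range(len(stats)), key=lambda i: score(stats[i]))

# (prior, visits, total value): the barely explored move with a strong
# policy prior is selected over the well-explored alternatives.
print(puct_select([(0.1, 50, 30.0), (0.6, 2, 1.0), (0.3, 10, 5.0)]))  # -> 1
```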

During the Future of Go Summit from May 23 to 27, 2017, in Wuzhen, China, AlphaGo won a three-game match versus Ke Jie, then the world No. 1 ranked player. After the Summit, AlphaGo retired from competitive play, while DeepMind continues AI research in other areas [28].

AlphaGo Zero & AlphaZero

In October 2017, AlphaGo Zero, an evolution of AlphaGo, was introduced. While previous versions were initially trained on thousands of human amateur and professional games to learn how to play Go, AlphaGo Zero learned exclusively by playing games against itself, starting from completely random play. It quickly surpassed human level of play and defeated the previously published, champion-defeating version of AlphaGo by 100 games to 0 [29] [30]. AlphaGo Zero was further improved and generalized to other games as AlphaZero, published in December 2017 [31].
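The self-play idea can be sketched as a loop: play a game against yourself, record each position and the move chosen, then label every recorded position with the final outcome z from the side-to-move's perspective; these (state, move, outcome) triples become the training targets for the next network. A toy sketch with the network-guided search abstracted into a `play_move` callback; the four-move game and its "last mover wins" rule are made up for illustration:

```python
import random

def self_play_episode(play_move, game_over, initial_state):
    """One AlphaGo Zero-style self-play episode, schematically:
    record (state, move, player) while playing, then relabel every
    position with the final outcome z from the perspective of the
    player to move.  All search/network machinery lives in play_move."""
    history, state, player = [], initial_state, +1
    while not game_over(state):
        move = play_move(state)
        history.append((state, move, player))
        state = state + (move,)       # states are tuples of moves so far
        player = -player
    z = +1 if len(state) % 2 == 1 else -1   # toy rule: last mover wins
    return [(s, m, z * p) for s, m, p in history]

# Toy game: players alternately pick a point; the game ends after 4 moves.
random.seed(0)
examples = self_play_episode(
    play_move=lambda s: random.randint(0, 8),
    game_over=lambda s: len(s) >= 4,
    initial_state=())
print(len(examples))  # -> 4 training examples, each labelled with the outcome
```

Each iteration of the real training loop then fits the network to these targets and uses the improved network in the next round of self-play, which is how play improves from random beginnings without any human games.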

Fine Art

Fine Art is a Go playing program developed since 2016 under the patronage of the Chinese media company Tencent by a team around Liu Yongsheng, along with Ma Bo, Tang Shanmin, Wu Guangyu, and Zhang Kaixu. It won the Computer Go UEC Cup at the University of Electro-Communications, Chōfu, Tokyo, Japan, in March 2017, against a field of 27 other programs including DeepZenGo and Crazy Stone [32]. In January 2018, it defeated Ke Jie 9P in 77 moves after giving a two-stone handicap [33] on the Fox Weiqi [34] server [35] [36].


Quote by Gian-Carlo Pascutto in 2010 [37]:
There is no significant difference between an alpha-beta search with heavy LMR and a static evaluator (current state of the art in chess) and an UCT searcher with a small exploration constant that does playouts (state of the art in go).

The shape of the tree they search is very similar. The main breakthrough in Go the last few years was how to backup an uncertain Monte Carlo score. This was solved. For chess this same problem was solved around the time quiescent search was developed.

Both are producing strong programs and we've proven for both the methods that they scale in strength as hardware speed goes up.

So I would say that we've successfully adopted the simple, brute force methods for chess to Go and they already work without increases in computer speed. The increases will make them progressively stronger though, and with further software tweaks they will eventually surpass humans.

Computer Olympiads

See also




Videos on Go

Selected Publications


1960 ...

1970 ...

1980 ...



1995 ...


2000 ...


2005 ...


2010 ...


2015 ...


Forum Posts

2005 ...

2010 ...

2015 ...


External Links


Computer Go Archives

Computer Go Pages

Open Source


Go Challenge


Fine Art


  1. ^ Albert Zobrist (1970). Feature Extraction and Representation for Pattern Recognition and the Game of Go. Ph.D. thesis, University of Wisconsin, also published as technical report, pdf
  2. ^ Go at the Computer Olympiad
  3. ^ Go (Spiel) from Wikipedia.de
  4. ^ Bernd Brügmann (1993). Monte Carlo Go. pdf
  5. ^ Bruno Bouzy (2005). Associating domain-dependent knowledge and Monte Carlo approaches within a go program. Information Sciences, Heuristic Search and Computer Game Playing IV
  6. ^ Rémi Coulom (2006). Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search. CG 2006, pdf
  7. ^ Levente Kocsis, Csaba Szepesvári (2006). Bandit based Monte-Carlo Planning. ECML-06, LNCS/LNAI 4212, pdf
  8. ^ Sylvain Gelly, Marc Schoenauer, Michèle Sebag, Olivier Teytaud, Levente Kocsis, David Silver, Csaba Szepesvári (2012). The Grand Challenge of Computer Go: Monte Carlo Tree Search and Extensions. Communications of the ACM, Vol. 55, No. 3, pdf preprint
  9. ^ Ilya Sutskever, Vinod Nair (2008). Mimicking Go Experts with Convolutional Neural Networks. ICANN 2008, pdf
  10. ^ Nicol N. Schraudolph, Peter Dayan, Terrence J. Sejnowski (1994). Temporal Difference Learning of Position Evaluation in the Game of Go. Advances in Neural Information Processing Systems 6
  11. ^ Erik van der Werf, Jos Uiterwijk, Eric Postma, Jaap van den Herik (2002). Local Move Prediction in Go. CG 2002
  12. ^ Markus Enzenberger (2003). Evaluation in Go by a Neural Network using Soft Segmentation. Advances in Computer Games 10, pdf
  13. ^ Convolutional neural network from Wikipedia
  14. ^ Christopher Clark, Amos Storkey (2014). Teaching Deep Convolutional Neural Networks to Play Go. arXiv:1412.3409
  15. ^ Deep learning for… Go by Erik Bernhardsson, December 11, 2014
  16. ^ Teaching Deep Convolutional Neural Networks to Play Go by Hiroshi Yamashita, The Computer-go Archives, December 14, 2014
  17. ^ Why Neural Networks Look Set to Thrash the Best Human Go Players for the First Time | MIT Technology Review, December 15, 2014
  18. ^ Teaching Deep Convolutional Neural Networks to Play Go by Michel Van den Bergh, CCC, December 16, 2014
  19. ^ Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver (2014). Move Evaluation in Go Using Deep Convolutional Neural Networks. arXiv:1412.6564v1
  20. ^ Move Evaluation in Go Using Deep Convolutional Neural Networks by Aja Huang, The Computer-go Archives, December 19, 2014
  21. ^ David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis (2016). Mastering the game of Go with deep neural networks and tree search. Nature, Vol. 529
  22. ^ Fan Hui at Sensei's Library
  23. ^ Game Over? AlphaGo Beats Pro 5-0 in Major AI Advance « American Go E-Journal, January 27, 2016
  24. ^ Official Google Blog: AlphaGo: using machine learning to master the ancient game of Go by Demis Hassabis, January 27, 2016
    Google DeepMind: Ground-breaking AlphaGo masters the game of Go, YouTube Video
  25. ^ DeepMind - YouTube Channel
  26. ^ Video Interview with Rémi Coulom on AlphaGo, February 2016
  27. ^ Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol, BBC News, March 12, 2016
  28. ^ AlphaGo’s Designers Explore New AI After Winning Big in China by Cade Metz, Wired, May 27, 2017
  29. ^ AlphaGo Zero: Learning from scratch by Demis Hassabis and David Silver, DeepMind, October 18, 2017
  30. ^ David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis (2017). Mastering the game of Go without human knowledge. Nature, Vol. 550
  31. ^ David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815
  32. ^ Fine Art (software) from Wikipedia
  33. ^ Two stones! Fine Art defeated Ke Jie 9P after giving two stones handicap. – Website of The International Go Federation, January 19, 2018
  34. ^ Go Servers at Sensei's Library - Fox Weiqi
  35. ^ Breakthrough: Fine Art beating Ke Jie with 2 Handicap Stones by Ingo Althöfer, Computer Go Archive, January 20, 2018
  36. ^ 7% Documentary: Behind the scenes of Fine Art AI - 纪录片《7%》:揭秘人工智能“绝艺”夺冠幕后 腾讯网 - English subtitles, YouTube Video
    featuring Fine Art team Liu Yongsheng, Ma Bo, Tang Shanmin, Wu Guangyu and Zhang Kaixu
    further Rémi Coulom, Simon Viennot, I-Chen Wu, Hideki Kato, David Fotland, Shun-Chin Hsu et al. at the Computer Go UEC Cup 2017
  37. ^ Re: Chess vs Go // AI vs IA by Gian-Carlo Pascutto, June 02, 2010
  38. ^ Computer Go Bibliography, University of Alberta
  39. ^ GoTools - TsumeGo Solving Software
  40. ^ Gobble
  41. ^ Mathematical Go from Sensei's Library
  42. ^ Nici Schraudolph’s go networks, review by Jay Scott
  43. ^ EZ-GO at Sensei's Library
  44. ^ Tsumego at Sensei's Library
  45. ^ steganography from Wikipedia
  46. ^ The Shodan Go Bet
  47. ^ Re: Teaching Deep Convolutional Neural Networks to Play Go by Erik van der Werf, The Computer-go Archives, December 15, 2014
  48. ^ Capturing race from Wikipedia
  49. ^ Franz-Josef Dickhut from Wikipedia, Rémi Coulom
  50. ^ codecentric go challenge 2014: Interviews with Franz-Josef Dickhut and Rémi Coulom - codecentric Blog by Raymond Georg Snatzke, October 1, 2014
  51. ^ codecentric go challenge 2014: Final Interviews - codecentric Blog by Raymond Georg Snatzke, November 27, 2014 (German)
  52. ^ How Facebook’s AI Researchers Built a Game-Changing Go Engine | MIT Technology Review, December 04, 2015
  53. ^ Combining Neural Networks and Search techniques (GO) by Michael Babigian, CCC, December 08, 2015
  54. ^ Re: Minmax backup operator for MCTS by Brahim Hamadicharef, CCC, December 30, 2017
  55. ^ The Mystery of Go, the Ancient Game That Computers Still Can’t Win by Alan Levinovitz, Wired, May 12, 2014
  56. ^ Wired Article on Computer GO by Edmund Moshammer, CCC, May 13, 2014
  57. ^ World #1 Go Player Ke Jie accepts Google Alpha Go Match.. by AA Ross, CCC, June 07, 2016
  58. ^ Ke Jie from Wikipedia
