RuyTune

RuyTune is an open source framework for tuning evaluation function parameters, written by Álvaro Begué in C++ and released on Bitbucket [1], as introduced in November 2016 [2]. RuyTune performs a logistic regression using limited-memory BFGS (L-BFGS), a quasi-Newton method that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm using a limited amount of memory. It relies on the libLBFGS library [3] along with reverse-mode automatic differentiation, and requires that the evaluation function be converted to a C++ template function whose score type is a template parameter, as well as a database of quiescent positions with associated game results [4].


The function to minimize is the mean squared error of the prediction:

  E = 1/N * Σ_{i=1}^{N} (R_i - tanh(q_i))^2

where:
  • N is the number of test positions.
  • R_i is the result of the game corresponding to position i: -1 for a black win, 0 for a draw, and +1 for a white win.
  • q_i is the value returned by the chess engine's evaluation function for position i. (Computing the gradient through the quiescence search is a waste of time; it is much faster to run the quiescence search saving the PV, then compute the gradient using the evaluation function of the end-of-PV position, and not worry too much about the fact that tweaking the evaluation function could result in a different position being picked [5].)
  • The sigmoid is implemented by the hyperbolic tangent, converting centipawn scores into an expected result in [-1,1] [6].

See also

Forum Posts

External Links


References
  1. ^ alonamaloh / ruy_tune — Bitbucket
  2. ^ C++ code for tuning evaluation function parameters by Álvaro Begué, CCC, November 10, 2016
  3. ^ libLBFGS: L-BFGS library written in C
  4. ^ A database for learning evaluation functions by Álvaro Begué, CCC, October 28, 2016
  5. ^ Re: Texel tuning method question by Álvaro Begué, CCC, June 07, 2017
  6. ^ tanh(0.43s), s = -10 to 10 pawn unit plot by Wolfram Alpha
