Stefan Pohl Computer Chess

Private website for chess engine tests


Latest Website-News (2019/11/14): AB-Testrun finished: BrainFish-2 191007 (same patch-level as the best Stockfish in my testing so far) with the brand-new Cerebellums (Leela-Cerebellum Release 3 and Stockfish-Cerebellum Release 175). +50 Elo compared to Stockfish 191007. Download the new Cerebellums here

Next AB-Testrun: Fritz 17.

NN-Testruns of Fat Fritz 1.0 and Lc0 0.22.0 with the J20-40 net (T40 training-games, but smaller size 192x16) are still running.

All results will follow not before the beginning of next week...

 

 

Stay tuned.

 


Stockfish testing

 

Playing conditions:

 

Hardware: i7-6700HQ 2.6GHz Notebook (Skylake CPU), Windows 10 64bit, 8GB RAM

Fritzmark: singlethread: 5.3 / 2521 (all engines run on one thread only), average meganodes/s displayed by LittleBlitzerGUI: Houdini: 2.6 mn/s, Stockfish: 2.2 mn/s, Komodo: 2.0 mn/s

Hash: 512MB per engine

GUI: since 2019/09/11: cutechess-cli (the GUI ends the game when a 5-piece endgame is on the board); before that: LittleBlitzerGUI (draw at 170 moves, resign at -700 cp)

Tablebases: none for the engines, 5-piece Syzygy for cutechess-cli

Openings: HERT_500 testset (by Thomas Zipproth) (download the file at the "Download & Links"-section or here)

Ponder, Large Memory Pages & learning: Off

Thinking time: 180''+1000ms (= 3'+1'') per game/engine (average game duration: around 7.5 minutes). One 5000-games testrun takes about 7 days (a sketch of a matching cutechess-cli call is shown below). The version-numbers of the Stockfish engines are the date of the latest patch included in the Stockfish sourcecode, written backwards as year, month, day (example: 170526 = May 26, 2017), not the release-date of the engine-file. Since July 2018 I use the abrok-compiles of Stockfish again (http://abrok.eu/stockfish), because they are now much faster than before - only 1.3% slower than BrainFish-compiles. So there is no reason anymore not to use these "official" development-compiles.
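For illustration, a minimal sketch (Python driving cutechess-cli) of how one 1000-game match under these conditions could be started. The engine paths, the opening-file name and format, and the concurrency of 4 parallel games are assumptions; the time control, hash size and 5-piece Syzygy adjudication are the conditions listed above.

import subprocess

# Hypothetical sketch: 1000 games of one Stockfish version vs. one opponent
# under the playing conditions listed above (paths and file names are assumptions).
cmd = [
    "cutechess-cli",
    "-engine", "cmd=./stockfish_191007", "name=Stockfish 191007",
    "-engine", "cmd=./komodo-13.1", "name=Komodo 13.1",
    "-each", "proto=uci", "tc=180+1", "option.Hash=512",   # 3'+1'' per engine, 512MB hash
    "-openings", "file=HERT_500.pgn", "format=pgn",        # HERT_500 testset (file name assumed)
    "-repeat", "-rounds", "500", "-games", "2",            # 500 openings x 2 colors = 1000 games
    "-tb", "./syzygy",                                     # 5-piece Syzygy, used to end the games
    "-concurrency", "4",                                   # assumption: 4 games in parallel
    "-pgnout", "games.pgn",
]
subprocess.run(cmd, check=True)

# Rough duration check: 5 opponents x 1000 games x ~7.5 minutes = 37500 minutes;
# at 4 concurrent games that is roughly 6.5 days, which matches the
# "about 7 days" per 5000-games testrun quoted above.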

Download BrainFish (and the Cerebellum-Libraries) here

 

Each Stockfish-version plays 1000 games against each of Komodo 13.1, Houdini 6, Fire 7.1, Xiphos 0.5.6 and Ethereal 11.53 (5000 games in total). All engines run with default-settings.

To avoid distortions in the Ordo Elo-calculation, from now on only 2x Stockfish (the latest official release plus the latest tested version), 1x asmFish and 1x BrainFish are stored in the gamebase (the games of all older engine-versions are deleted each time a new version has been tested). The older Elo-results of Stockfish, asmFish and BrainFish can still be seen in the Elo-diagrams below. BrainFish always plays with the latest Cerebellum-Libraries, of course, because otherwise BrainFish = Stockfish.

 

Latest update: 2019/11/14: BrainFish-2 191007 (+50 Elo to Stockfish 191007)

 

(Ordo-calculation anchored to Stockfish 10 = 3508 Elo)
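A minimal sketch of such an anchored Ordo run (the PGN and output file names are assumptions; the anchor engine and its 3508 Elo value are as stated above):

import subprocess

# Hypothetical sketch: rate the gamebase with Ordo, fixing Stockfish 10 at 3508 Elo.
subprocess.run([
    "ordo",
    "-p", "gamebase.pgn",          # input games (file name is an assumption)
    "-A", "Stockfish 10 181129",   # anchor engine, as named in the rating list
    "-a", "3508",                  # anchor rating: 3508 Elo
    "-s", "1000",                  # number of simulations for the +/- error margins
    "-o", "ratings.txt",           # plain-text rating list output
], check=True)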

 

See the individual statistics of engine-results here

See the ORDO-rating of the archive-gamebase since 2019 here

Download the current gamebase here

Download the archive-gamebase since 2019 here

 

     Program                      Elo    +    -   Games   Score   Av.Op.  Draws

   1 BrainFish-2 191007 bmi2    : 3607    9    9  5000    82.1 %   3331   34.2 %
   2 Stockfish 191007 bmi2      : 3557    8    8  5000    77.6 %   3331   39.7 %
   3 Stockfish 191020 bmi2      : 3552    8    8  5000    77.2 %   3331   40.9 %
   4 Stockfish 10 181129        : 3508    5    5 16000    77.6 %   3279   38.8 %
   5 Stockfish 9 180201         : 3457    8    8  5000    74.9 %   3254   41.7 %
   6 Houdini 6 pext             : 3426    4    4 22000    63.1 %   3322   48.3 %
   7 Komodo 13.1 bmi2           : 3408    5    5 12000    55.0 %   3370   49.4 %
   8 Komodo 13.01 bmi2          : 3404    7    7  6000    59.6 %   3332   52.2 %
   9 Komodo 12.3 bmi2           : 3393    6    6  8000    63.2 %   3291   49.8 %
  10 Komodo 13.2 MCTS           : 3315    6    6  6000    44.1 %   3360   54.2 %
  11 Fire 7.1 popc              : 3278    3    3 22000    44.0 %   3329   51.2 %
  12 Xiphos 0.5.6 bmi2          : 3275    6    6  9000    30.4 %   3435   45.5 %
  13 Ethereal 11.53 pext        : 3267    5    5 13000    36.0 %   3383   48.2 %
  14 Xiphos 0.5.3 bmi2          : 3265    6    6  9000    38.2 %   3355   52.3 %
  15 Komodo 12.3 MCTS           : 3258    6    6  8000    43.7 %   3308   47.1 %
  16 Ethereal 11.25 pext        : 3249    6    6  8000    38.1 %   3341   51.4 %
  17 rofChade 2.2 bmi2          : 3200    8    8  5000    27.8 %   3377   43.9 %
  18 Laser 1.7 bmi2             : 3199    7    7  6000    30.8 %   3352   45.8 %
  19 Fizbo 2 bmi2               : 3194    8    8  5000    36.0 %   3306   39.0 %
  20 Shredder 13 x64            : 3191    8    8  6000    31.9 %   3341   42.6 %
  21 Defenchess 2.2 popc        : 3188    8    8  5000    26.6 %   3377   41.8 %
  22 Booot 6.3.1 popc           : 3181    8    8  5000    34.0 %   3309   44.1 %
  23 Andscacs 0.95 popc         : 3149    9    9  5000    23.1 %   3373   35.4 %

The version-numbers (180622 for example) of the engines are the date of the latest patch included in the Stockfish sourcecode, not the release-date of the engine-file. The asmFish-engines in particular are often released much later!

Below you will find a diagram of the progress of Stockfish in my tests since the end of 2018.

And below that diagram, the older diagrams.

 

You can save the diagrams (as JPG-pictures in original size) on your PC with a right mouseclick and choosing "save image"...

The Elo-ratings of older Stockfish dev-versions in the Ordo-calculation can differ a little from the Elo-"dots" in the diagram: when the games of a new Stockfish dev-version become part of the Ordo-calculation, they can change the Elo-ratings of the opponent engines, which in turn can change the Elo-ratings of older Stockfish dev-versions in the Ordo-calculation / ratinglist. The diagram is not affected, because each Elo-"dot" shows the rating of a Stockfish dev-version at the moment its testrun was finished.

 

 

 

