Stefan Pohl Computer Chess

private website for chess engine tests


Latest Website-News (2019/12/09): The NN-testrun of Lc0 0.23.0 with the Leelenstein 12.1 net is finished (the new best result of an NN-engine, but very close to Lc0 0.22.0 T40B.4-160; however, LS 12.1 scored 51.5% vs. Stockfish 190622, which is clearly the best score vs. Stockfish so far). See the result and download the games in the "Lc0 / NN testing" section.

Next NN-testrun: Lc0 0.23.0 Net 58573+ ("+" means 3 CLOP-tuned parameters by J.Burwitz: CPuct=2.70, FpuValue=0.50, PolicyTemperature=1.53)

2nd NN-testrun: Lc0 0.22.0 LD2+ ("+" means 3 CLOP-tuned parameters by J.Burwitz: CPuct=2.78, FpuValue=0.43, PolicyTemperature=1.87) is still running. A short sketch of how such UCI parameters can be set is shown below the news items.

The AB-testrun of Xiphos 0.6 is still running.
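
For anyone who wants to try such CLOP-tuned values themselves, here is a minimal sketch using the python-chess library. The engine path and the weights file name are placeholders, and the exact UCI option names can differ between Lc0 versions, so check the output of the "uci" command of your build first.

import chess
import chess.engine

# Minimal sketch: pass CLOP-tuned UCI parameters to Lc0 before playing a move.
# "lc0" and "network.pb.gz" are placeholder paths; the option names are the
# ones quoted in the news items above and may differ in other Lc0 versions.
engine = chess.engine.SimpleEngine.popen_uci("lc0")
engine.configure({
    "WeightsFile": "network.pb.gz",   # the tested net (placeholder file name)
    "CPuct": 2.70,
    "FpuValue": 0.50,
    "PolicyTemperature": 1.53,
})

board = chess.Board()
result = engine.play(board, chess.engine.Limit(time=1.0))  # one move, 1 second
print(result.move)

engine.quit()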

 

 

Blackmageddon Openings released!!! Learn more about them in the "Blackmageddon Openings" section or download them right here

 

Stay tuned.

 


Stockfish testing

 

Playing conditions:

 

Hardware: i7-6700HQ 2.6GHz Notebook (Skylake CPU), Windows 10 64bit, 8GB RAM

Fritzmark: singlethread: 5.3 / 2521 (all engines run on one thread only); average meganodes/s displayed by LittleBlitzerGUI: Houdini: 2.6 mn/s, Stockfish: 2.2 mn/s, Komodo: 2.0 mn/s

Hash: 512MB per engine

GUI: since 2019/09/11: cutechess-cli (the GUI ends the game when a 5-piece endgame is on the board); before that: LittleBlitzerGUI (draw at 170 moves, resign at -700cp)

Tablebases: none for the engines; 5-piece Syzygy tablebases for cutechess-cli (game adjudication only)

Openings: HERT_500 testset (by Thomas Zipproth) (download the file in the "Download & Links" section or here)

Ponder, Large Memory Pages & learning: Off

Thinking time: 180''+1000ms (= 3'+1'') per game/engine (average game duration: around 7.5 minutes). One 5000-games testrun takes about 7 days. A command-line sketch of these playing conditions is shown below.

The version numbers of the Stockfish engines are the date of the latest patch included in the Stockfish sourcecode, written backwards (year, month, day), not the release date of the engine file (example: 170526 = May 26, 2017). Since July 2018 I use the abrok-compiles of Stockfish again (http://abrok.eu/stockfish), because they are now much faster than before - only 1.3% slower than the BrainFish-compiles. So there is no reason anymore not to use these "official" development compiles.

Download BrainFish (and the Cerebellum-Libraries) here
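
A minimal sketch of one such testrun, started from Python, is shown below. It assumes a PGN version of the HERT_500 file (use format=epd for an EPD version); the engine paths, the openings file name and the Syzygy path are placeholders, and the flag names follow recent cutechess-cli versions (check cutechess-cli --help for your build). Ponder is off by default in cutechess-cli.

import subprocess

# Sketch of one 1000-game match under the playing conditions above:
# 3'+1'' per game/engine, 512MB hash, HERT_500 openings played with both
# colors, 5-piece Syzygy used only by the GUI for adjudication.
# All paths and file names are placeholders.
cmd = [
    "cutechess-cli",
    "-engine", "cmd=./stockfish-dev", "name=Stockfish 191121",
    "-engine", "cmd=./komodo", "name=Komodo 13.1",
    "-each", "proto=uci", "tc=180+1", "option.Hash=512",
    "-openings", "file=HERT_500.pgn", "format=pgn", "order=sequential",
    "-rounds", "500", "-games", "2", "-repeat",   # 500 openings x 2 colors = 1000 games
    "-tb", "/path/to/syzygy",                     # 5-piece Syzygy for adjudication only
    "-concurrency", "1",
    "-pgnout", "gamebase.pgn",
]
subprocess.run(cmd, check=True)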

 

Each Stockfish version plays 1000 games versus each of Komodo 13.1, Houdini 6, Fire 7.1, Xiphos 0.5.6 and Ethereal 11.53. All engines run with default settings.

To avoid distortions in the Ordo Elo calculation, from now on only 2x Stockfish (the latest official release + the latest tested version), 1x asmFish and 1x BrainFish are stored in the gamebase (the games of all older engine versions are deleted every time a new version has been tested). The older Elo results of Stockfish, asmFish and BrainFish can still be seen in the Elo diagrams below. BrainFish always plays with the latest Cerebellum-Libraries, of course, because otherwise BrainFish = Stockfish.

 

Latest update: 2019/12/06: Ethereal 11.75 (+25 Elo compared to Ethereal 11.53)

 

(Ordo calculation anchored to Stockfish 10 = 3508 Elo)
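
For reference, such an anchored calculation can be done with the Ordo tool roughly as sketched below. The gamebase and output file names are placeholders, and the anchor name must match the engine name used in the PGN tags.

import subprocess

# Sketch: calculate the ratinglist from the gamebase and fix ("anchor")
# Stockfish 10 181129 at 3508 Elo. File names are placeholders.
subprocess.run([
    "ordo",
    "-p", "gamebase.pgn",          # input games
    "-A", "Stockfish 10 181129",   # anchor engine (name as in the PGN tags)
    "-a", "3508",                  # rating assigned to the anchor
    "-o", "ratinglist.txt",        # output ratinglist
], check=True)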

 

See the individual statistics of engine-results here

See the ORDO-rating of the archive-gamebase since 2019 here

Download the current gamebase here

Download the archive-gamebase since 2019 here

 

     Program                      Elo    +    -   Games   Score   Av.Op.  Draws

   1 BrainFish-2 191007 bmi2    : 3607    9    9  5000    82.1 %   3331   34.2 %
   2 Stockfish 191007 bmi2      : 3557    8    8  5000    77.6 %   3331   39.7 %
   3 Stockfish 191121 bmi2      : 3551    8    8  5000    77.1 %   3331   41.1 %
   4 Stockfish 10 181129        : 3508    4    4 18000    78.0 %   3275   38.1 %
   5 Stockfish 9 180201         : 3457    9    9  5000    74.9 %   3254   41.7 %
   6 Houdini 6 pext             : 3427    3    3 24000    64.1 %   3316   47.7 %
   7 Komodo 13.1 bmi2           : 3410    4    4 14000    57.6 %   3352   48.5 %
   8 Komodo 13.01 bmi2          : 3404    6    6  6000    59.6 %   3332   52.2 %
   9 Komodo 12.3 bmi2           : 3393    6    6  8000    63.2 %   3292   49.8 %
  10 Komodo 13.2 MCTS           : 3315    7    7  6000    44.1 %   3360   54.2 %
  11 Ethereal 11.75 pext        : 3291    7    7  5000    38.4 %   3379   52.9 %
  12 Fire 7.1 popc              : 3278    3    3 24000    44.9 %   3322   51.9 %
  13 Xiphos 0.5.6 bmi2          : 3273    5    5 11000    34.6 %   3400   48.2 %
  14 Ethereal 11.53 pext        : 3266    5    5 14000    37.6 %   3370   48.9 %
  15 Xiphos 0.5.3 bmi2          : 3265    5    5  9000    38.2 %   3355   52.3 %
  16 Komodo 12.3 MCTS           : 3258    6    6  8000    43.7 %   3309   47.1 %
  17 Ethereal 11.25 pext        : 3249    6    6  8000    38.1 %   3342   51.4 %
  18 rofChade 2.2 bmi2          : 3200    8    8  5000    27.8 %   3378   43.9 %
  19 Laser 1.7 bmi2             : 3199    7    7  6000    30.8 %   3352   45.8 %
  20 Fritz 17                   : 3196    7    7  6000    29.4 %   3360   44.2 %
  21 Fizbo 2 bmi2               : 3195    8    8  5000    36.0 %   3307   39.0 %
  22 Shredder 13 x64            : 3191    8    8  6000    31.9 %   3341   42.6 %
  23 Defenchess 2.2 popc        : 3189    8    8  5000    26.6 %   3378   41.8 %
  24 Booot 6.3.1 popc           : 3181    8    8  5000    34.0 %   3310   44.1 %
  25 Andscacs 0.95 popc         : 3149    9    9  5000    23.1 %   3373   35.4 %

The version numbers of the engines (180622, for example) are the date of the latest patch included in the Stockfish sourcecode, not the release date of the engine file. The asmFish engines in particular are often released much later!
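
In other words, the number is simply the patch date written as yymmdd. A tiny Python example of reading such a version number (the value 180622 is taken from the paragraph above):

from datetime import datetime

# 180622 = patch date 2018/06/22, read as yymmdd
patch_date = datetime.strptime("180622", "%y%m%d").date()
print(patch_date)   # prints 2018-06-22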

Below you find a diagram of the progress of Stockfish in my tests since the end of 2018.

Below that diagram, the older diagrams follow.

 

You can save the diagrams on your PC as a JPG picture in original size: right-click on a diagram and choose "save image".

The Elo ratings of older Stockfish dev-versions in the Ordo calculation can differ a little from the Elo "dots" in the diagram. When the games of a new Stockfish dev-version become part of the Ordo calculation, they can change the Elo ratings of the opponent engines, and that in turn can change the Elo ratings of older Stockfish dev-versions in the Ordo ratinglist. The diagram is not affected, because each Elo "dot" is the rating of a Stockfish dev-version at the moment its testrun was finished.

 

 

 

