Stefan Pohl Computer Chess

Private website for chess-engine tests


Latest Website-News (2018/06/15): The long thinking-time testrun of LC Zero 180602, Net 374, is finished. Results and games are in the Long thinking-time section.

Testrun of Stockfish 180606 finished. Next testrun: BrainFish 180613 (playing with Cerebellum Light Release 170). Results not before next Saturday.

 

The new default settings of LC Zero since 180604 are the CLOP-tuned settings by Albert Silver. They should be stronger than the default settings of LC Zero 180602, which I believe to be the strongest settings found so far.

To be sure about that, I will repeat the latest testrun (LC Zero 180602, Net 374) with LC Zero 180609, Net 374:

LC0 180602 played its default with the settings FPU=0.2, Cpuct=3.1, Policy Softmax=1.0; LC0 180609 plays its default with Albert Silver's CLOP-tuned settings (FPU=0.9, Cpuct=3.4, Policy Softmax=2.2), which are supposed to be stronger. I doubt that, but when the testrun is finished, we will know for sure (300 games at 12'+5'' vs. 10 opponents).
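
For illustration, engine settings like these are passed to a UCI engine via setoption commands. Below is a minimal Python sketch, assuming an lc0 binary on the PATH; the exact UCI option names changed between 2018 LC0 builds, so the names used here are placeholders to be checked against the engine's own "uci" output:

```python
import subprocess

# Placeholder option names -- check the "uci" output of your LC0 build;
# the exact spelling of these options varied between 2018 versions.
SETTINGS = {
    "Cpuct": "3.4",
    "FPU Reduction": "0.9",
    "Policy softmax temperature": "2.2",
}

# Assumes the lc0 binary is on the PATH.
engine = subprocess.Popen(
    ["lc0"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    universal_newlines=True,
)

def send(command):
    engine.stdin.write(command + "\n")
    engine.stdin.flush()

send("uci")  # prints the options this build actually supports
for name, value in SETTINGS.items():
    send("setoption name {} value {}".format(name, value))
send("isready")
```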

 

My new SALC V5 openings and books are ready for download. Check out the "SALC openings"-section on this website for further information. Download SALC V5.02 here

 

Stay tuned.


Stockfish testing

 

Playing conditions:

 

Hardware: i7-6700HQ 2.6GHz Notebook (Skylake CPU), Windows 10 64bit, 8GB RAM

Fritzmark (single core): 5.3 / 2521 (all engines run on one core only). Average meganodes/second displayed by the LittleBlitzerGUI: Houdini 2.6 mn/s, Stockfish 2.2 mn/s, Komodo 2.0 mn/s

Hash: 512MB per engine

GUI: LittleBlitzerGUI (draw adjudication at 130 moves, resign at 400cp for 4 consecutive moves; see the sketch after this list)

Tablebases: None

Openings: HERT testset (by Thomas Zipproth) (download the file in the "Download & Links" section or here). I use a version of HERT in which the positions are ordered differently; this makes no difference for the testing results, so don't be confused if the game sequence in my gamebase file doesn't match the sequence of your HERT set.

Ponder, Large Memory Pages & learning: Off

Thinking time: 180''+1000ms (= 3'+1'') per game/engine (average game duration: around 7.5 minutes). One 5000-games testrun takes about 7 days.

The version numbers of the Stockfish development engines are the release date written backwards (year, month, day); example: 170526 = May 26, 2017 (see the parsing sketch after this list). I use BrainFish compiles (bmi2) by Thomas Zipproth: without the Cerebellum-Library, BrainFish is identical to Stockfish, and the BrainFish compiles are currently the fastest compiles of the Stockfish C++ code, around +10% faster than the abrok.eu compiles and around 4% faster than the ultimaiq compiles.

Download BrainFish (and the Cerebellum-Library): here
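
Two of the conditions above translate directly into code. The following Python sketch is purely illustrative (it is not the LittleBlitzerGUI or the actual testing setup): parse_version() decodes the backwards-written version dates explained above, and adjudicate() mirrors the draw/resign rules; the score-list input format is an assumption made for the example.

```python
from datetime import datetime

def parse_version(version):
    """Decode a dev version number like '170526' into May 26, 2017."""
    return datetime.strptime(version, "%y%m%d").date()

def adjudicate(move_number, recent_scores_cp):
    """Mirror the adjudication rules above: draw at move 130, resign once
    a side has been at least 400cp behind for 4 consecutive moves.
    recent_scores_cp holds that side's last evaluations in centipawns,
    from its own point of view (an assumed input format)."""
    if move_number >= 130:
        return "draw"
    last_four = recent_scores_cp[-4:]
    if len(last_four) == 4 and all(score <= -400 for score in last_four):
        return "resign"
    return None

print(parse_version("170526"))                   # 2017-05-26
print(adjudicate(130, []))                       # draw
print(adjudicate(60, [-420, -450, -500, -480]))  # resign
```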

 

Each Stockfish version plays 1000 games each versus Komodo 12, Houdini 6, Fire 7.1, Shredder 13 and Fizbo 2. All engines run with default settings, except that Move Overhead is set to 300ms where an engine offers that option.

To avoid distortions in the Ordo Elo calculation, from now on only 2x Stockfish (latest official release + latest dev version), 1x asmFish and 1x BrainFish are stored in the gamebase (the games of all older engine versions are deleted whenever a new version has been tested). The older Elo results of Stockfish, asmFish and BrainFish can still be seen in the Elo diagrams below. BrainFish always plays with the latest Cerebellum-Library, of course, because without it BrainFish = Stockfish.

 

Latest update: 2018/06/15: Stockfish 180606

 

(Ordo-calculation fixed to Stockfish 9 = 3450 Elo)
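
For readers unfamiliar with Ordo: the ratings it computes are only determined up to an additive constant, so one engine is pinned to a fixed value. A minimal sketch of that anchoring step (the rating dictionary is illustrative; Ordo itself fits all ratings simultaneously from the game results):

```python
def anchor_ratings(ratings, anchor="Stockfish 9 180201", anchor_elo=3450):
    """Shift every rating by the same constant so the anchor engine lands
    exactly on anchor_elo; rating differences, and therefore expected
    scores, are unchanged by this shift."""
    offset = anchor_elo - ratings[anchor]
    return {name: elo + offset for name, elo in ratings.items()}

# Illustrative relative ratings, as a fit might return them:
raw = {"Stockfish 9 180201": 0.0, "Houdini 6 pext": -27.0}
print(anchor_ratings(raw))
# {'Stockfish 9 180201': 3450.0, 'Houdini 6 pext': 3423.0}
```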

 

See the individual statistics of engine-results here

Download the current gamebase here

Download the archive of all games played with HERT (200000 games) here

See an Ordo rating of the complete HERT archive base here

 

     Program                    Elo    +    -   Games   Score   Av.Op.  Draws

   1 asmBrainFish 9           : 3516    8    8  5000    77.7 %   3282   37.9 %
   2 BrainFish 180423 bmi2    : 3501    8    8  5000    76.2 %   3282   42.2 %
   3 Stockfish 180606 bmi2    : 3473    7    7  5000    71.9 %   3297   46.2 % (new)
   4 asmFish 9 bmi2           : 3467    8    8  5000    72.8 %   3282   43.3 %
   5 Stockfish 9 180201       : 3450    6    6  7000    69.3 %   3297   48.0 %
   6 Houdini 6 pext           : 3423    5    5 11000    56.6 %   3369   54.8 %
   7 Komodo 12 bmi2           : 3393    6    6  7000    59.6 %   3317   52.7 %
   8 Komodo 11.3.1 bmi2       : 3390    6    6  8000    52.2 %   3369   52.5 %
   9 Fire 7.1 popc            : 3280    7    7  6000    40.3 %   3354   50.3 %
  10 Fire 6.1 popc            : 3206    6    6  9000    27.3 %   3392   40.0 %
  11 Fizbo 2 bmi2             : 3197    5    5 11000    26.6 %   3390   34.8 %
  12 Shredder 13 x64          : 3191    5    5 11000    25.8 %   3391   39.6 %
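
The Elo, Score and Av.Op. columns are tied together by the logistic Elo model. As a rough cross-check, here is the standard performance-rating formula (a sketch only: Ordo fits all engines simultaneously, so its numbers differ somewhat from this simple estimate):

```python
import math

def performance_elo(score, avg_opponent_elo):
    """Performance rating from a score fraction (0..1) and the average
    opponent Elo, using the logistic Elo model."""
    return avg_opponent_elo - 400.0 * math.log10(1.0 / score - 1.0)

# First table row: 77.7 % against an average opposition of 3282 Elo.
print(round(performance_elo(0.777, 3282)))  # ~3499, near the listed 3516
```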

 

The 4 different Fishes in this Elo-list:

 

- Stockfish

- asmFish = Stockfish manually rewritten in assembler (look here)

- BrainFish = Stockfish playing with Cerebellum-Library by Thomas Zipproth (look here)

- asmBrainFish = asmFish playing with Cerebellum-Library

Below you find a diagram of the progress of Stockfish in my tests since the end of 2016, and below that diagram, the older diagrams.

 

You can save the diagrams (as JPG pictures in original size) on your PC: right-click and then choose "save image"...

The Elo ratings of older Stockfish dev versions in the Ordo calculation can differ a little from the Elo "dots" in the diagram. When the games of a new Stockfish dev version become part of the Ordo calculation, they can change the Elo ratings of the opponent engines, which in turn can change the Elo ratings of older Stockfish dev versions in the Ordo ratinglist. The diagram is not recalculated: each Elo "dot" is the rating of one Stockfish dev version at the moment its testrun was finished.

