Stefan Pohl Computer Chess

A private website for chess engine testing


Latest website news (2017/09/13): The test run of Stockfish 8 (the latest official release, retested under the new testing conditions) has finished. Next test run: Stockfish 170909 (I use the BrainFish 170910 bmi2 compile, because it is the fastest compile of the Stockfish 170909 C++ program code; see my speed measurements below this text). Result not before next Wednesday.

 

The long thinking-time tournament has been updated.

 

I measured the speed of several Stockfish compiles (abrok, ultimaiq and BrainFish; without the Cerebellum library, BrainFish is identical to Stockfish). All compiles use the Stockfish C++ code from 170905, measured with fishbench (10 runs per version) on an i7-6700HQ 2.6 GHz Skylake CPU. These are the results:

abrok modern    : 1.557 mn/s
abrok bmi2      : 1.611 mn/s

ultimaiq modern : 1.660 mn/s
ultimaiq bmi2   : 1.702 mn/s

brainfish modern: 1.729 mn/s
brainfish bmi2  : 1.764 mn/s


modern:
abrok -> ultimaiq = +6.6% speedup
ultimaiq -> brainfish = +4.2% speedup

 

bmi2:
abrok -> ultimaiq = +5.6% speedup
ultimaiq -> brainfish = +3.6% speedup

 

So the ultimaiq compiles are around 6% faster than the abrok compiles, and the BrainFish compiles are around 10% faster than abrok! From now on, I will use the BrainFish compiles (without the Cerebellum library) for my Stockfish test runs, because they are currently the fastest compiles, and the results are more comparable with the BrainFish test runs, where BrainFish uses the Cerebellum library.
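The percentage figures above follow directly from the measured node rates. A minimal sketch (Python, using the mn/s values from the lists above):

```python
# Measured fishbench node rates in meganodes/s (from the lists above).
rates = {
    "modern": {"abrok": 1.557, "ultimaiq": 1.660, "brainfish": 1.729},
    "bmi2":   {"abrok": 1.611, "ultimaiq": 1.702, "brainfish": 1.764},
}

def speedup(slow, fast):
    """Percentage gain of the faster compile over the slower one."""
    return (fast / slow - 1.0) * 100.0

for kind, r in rates.items():
    print(f"{kind}: abrok -> ultimaiq  = +{speedup(r['abrok'], r['ultimaiq']):.1f}%")
    print(f"{kind}: abrok -> brainfish = +{speedup(r['abrok'], r['brainfish']):.1f}%")
```

For the bmi2 compiles this gives abrok -> brainfish = +9.5%, and +11.0% for the modern compiles, i.e. "around 10%" as stated above.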

 

Stay tuned.


Stockfish testing

 

Playing conditions:

 

Hardware: i7-6700HQ 2.6GHz Notebook (Skylake CPU), Windows 10 64bit, 8GB RAM

Fritzmark (single core): 5.3 / 2521 (all engines run on one core only). Average meganodes/s displayed by LittleBlitzerGUI: Houdini 2.6 mn/s, Stockfish 2.2 mn/s, Komodo 2.0 mn/s

Hash: 512MB per engine

GUI: LittleBlitzerGUI (draw at move 130, resign at 400 cp for 4 moves)

Tablebases: None

Openings: HERT test set (by Thomas Zipproth). Download the file in the "Download & Links" section or here. (I use a version of HERT in which the positions are ordered differently; this makes no difference to the testing results. So don't be confused if you download my gamebase file and the game sequence doesn't match the sequence of your HERT set.)

Ponder, Large Memory Pages & learning: Off

Thinking time: 180''+1000 ms (= 3'+1'') per game and engine (average game duration: around 7.5 minutes). One 5000-game test run takes about 7 days. The version numbers of the Stockfish development engines are the release date written backwards (year, month, day); for example, 170526 = May 26, 2017. I use BrainFish compiles (bmi2) by Thomas Zipproth (without the Cerebellum library, BrainFish is identical to Stockfish, and the BrainFish compiles are currently the fastest compiles of the Stockfish C++ code, around 10% faster than the abrok.eu compiles and around 4% faster than the ultimaiq compiles). Download BrainFish (and the additional Cerebellum library): here
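The version-number scheme can be decoded mechanically. A small sketch (Python standard library only; the function name is mine, not part of any tool mentioned here):

```python
from datetime import datetime

def decode_version(version: str) -> str:
    """Decode a Stockfish dev version number (YYMMDD) into a readable date."""
    return datetime.strptime(version, "%y%m%d").strftime("%B %d, %Y")

print(decode_version("170526"))  # May 26, 2017
```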

 

Each Stockfish version plays 1000 games against each of Komodo 11.2.2, Houdini 5, Shredder 13, Fizbo 1.9 and Andscacs 0.91b. All engines run with default settings.

To avoid distortions in the Ordo Elo calculation, from now on only two Stockfish versions (the latest official release plus the latest development version), one asmFish and one BrainFish are stored in the gamebase (the games of all older engine versions are deleted each time a new version has been tested). The older Elo results of Stockfish, asmFish and BrainFish can still be seen in the Elo diagrams below. BrainFish always plays with the latest Cerebellum library, of course, because otherwise BrainFish = Stockfish.

 

Latest update: 2017/09/13: Stockfish 8 (retested with the new testing conditions)

 

(The Ordo calculation is anchored at Stockfish 8 = 3396 Elo. This value was chosen so that Stockfish 170526 gets the same Elo result under the new testing conditions as it had under the old conditions; so there is no "break" in the Elo progress in the diagram below.)

 

See the individual statistics of the engine results here

Download the current gamebase here

Download the gamebase archive (all played games with the HERT set) here

 

     Program                      Elo    +    -   Games   Score   Av.Op.  Draws

   1 BrainFish 170826 bmi2      : 3467    7    7  5000    76.1 %   3246   41.8 %
   2 asmFish 170819 bmi2        : 3427    7    7  5000    72.1 %   3246   46.4 %
   3 Stockfish 170831 bmi2      : 3419    7    7  5000    71.2 %   3246   47.4 %
   4 Stockfish 8 161101 bmi2    : 3396    7    7  5000    68.7 %   3246   49.9 % (retest)
   5 Komodo 11.2.2 x64          : 3381    5    5  8000    57.3 %   3320   52.7 %
   6 Houdini 5 pext             : 3368    5    5  8000    55.6 %   3321   55.0 %
   7 Shredder 13 x64            : 3198    6    6  8000    31.9 %   3342   42.5 %
   8 Fizbo 1.9 bmi2             : 3177    6    6  8000    29.2 %   3345   37.1 %
   9 Andscacs 0.91b bmi2        : 3105    6    6  8000    20.9 %   3354   32.0 %
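As a plausibility check on the table above, the score percentages roughly follow the standard logistic Elo expectation. A sketch (Ordo's internal model is not exactly this formula, so the numbers will not match the table precisely):

```python
def expected_score(elo: float, opp_elo: float) -> float:
    """Standard logistic Elo expectation of `elo` scoring against `opp_elo`."""
    return 1.0 / (1.0 + 10.0 ** ((opp_elo - elo) / 400.0))

# BrainFish 170826 (3467) against its average opponent strength of 3246:
print(f"{expected_score(3467, 3246):.1%}")  # about 78%, close to the 76.1% in the table
```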

Below you find a diagram of the progress of Stockfish in my tests since the end of 2016, and below that diagram, the older diagrams.

 

You can save the diagrams (as JPG pictures, in original size) on your PC by right-clicking them and choosing "save image".

The Elo ratings of older Stockfish dev versions in the Ordo calculation can differ slightly from the Elo "dots" in the diagram: when the games of a new Stockfish dev version become part of the Ordo calculation, they can change the Elo ratings of the opponent engines, which in turn can change the Elo ratings of older Stockfish dev versions in the Ordo rating list. The diagram is not affected, because each Elo "dot" shows the rating of one Stockfish dev version at the moment its test run was finished.

