Stefan Pohl Computer Chess

Private website for chess engine tests


Latest Website-News (2018/07/15): The long thinking-time testrun of LC0 Net 473 is finished: +16 Elo compared to Net 452 (which is only 8 days older!) - wow, Leela is climbing very fast! Check out the results and download the games in the "Long thinking-time LC Zero"-section. And enjoy 20 new short and spectacular wins of Leela online with the pgn-replayer in the "View LC Zero games"-section...

 

The testrun of asmFish 180503 (= the Stockfish patch date; the release date of asmFish.exe was 180705) is still running. Results not before Saturday.

 

My SALC V5 openings and books are ready for download. Check out the "SALC openings"-section on this website for further information. Download SALC V5.02 here

 

Stay tuned.


Stockfish testing

 

Playing conditions:

 

Hardware: i7-6700HQ 2.6GHz Notebook (Skylake CPU), Windows 10 64bit, 8GB RAM

Fritzmark (singlecore): 5.3 / 2521 (all engines run on one core only); average meganodes/s displayed by LittleBlitzerGUI: Houdini 2.6 mn/s, Stockfish 2.2 mn/s, Komodo 2.0 mn/s

Hash: 512MB per engine

GUI: LittleBlitzerGUI (adjudication: draw at 130 moves, resign at 400cp for 4 moves - see the sketch below the playing conditions)

Tablebases: None

Openings: HERT testset (by Thomas Zipproth). Download the file in the "Download & Links"-section or here. (I use a version of HERT in which the positions are ordered differently - this makes no difference for the testing results, so don't be confused if the game sequence in my gamebase file doesn't match the sequence of your HERT set...)

Ponder, Large Memory Pages & learning: Off

Thinking time: 180''+1000ms (= 3'+1'') per game/engine (average game duration: around 7.5 minutes). One 5000-games testrun takes about 7 days. The version numbers of the Stockfish engines are the date of the latest patch included in the Stockfish sourcecode, written backwards (year, month, day), not the release date of the engine file (example: 170526 = May 26, 2017). I use BrainFish compiles (bmi2) by Thomas Zipproth: without the Cerebellum-Library, BrainFish is identical to Stockfish, and the BrainFish compiles are the fastest compiles of the Stockfish C++ code at the moment, around +10% faster than the abrok.eu compiles and around 4% faster than the ultimaiq compiles.

Download BrainFish (and the Cerebellum-Library): here
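
To make the adjudication settings above concrete, here is a minimal illustrative sketch in Python of the rule "draw at 130 moves, resign at 400cp for 4 moves". This is only my reading of the settings, not LittleBlitzerGUI's actual code; the function name and inputs are hypothetical:

    def adjudicate(move_number, recent_evals_cp):
        # Draw adjudication: the game is declared drawn once move 130 is reached.
        if move_number >= 130:
            return "draw"
        # Resign adjudication: a side loses if its evaluation has been at least
        # 400 centipawns against it for 4 consecutive moves.
        if len(recent_evals_cp) >= 4 and all(e <= -400 for e in recent_evals_cp[-4:]):
            return "loss for the side to move"
        return None

    # Example: four evaluations of -400 cp or worse in a row trigger the resign rule.
    print(adjudicate(72, [-380, -450, -460, -500, -520]))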

 

Each Stockfish version plays 1000 games against each of Komodo 12, Houdini 6, Fire 7.1, Shredder 13 and Fizbo 2. All engines run with their default settings, except that Move Overhead is set to 300ms if an engine offers that option (see the sketch below).
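
As an illustration of what these two settings (512MB hash, 300ms Move Overhead) mean at the UCI level, here is a minimal sketch using the python-chess library. The engine path "stockfish" is an assumption, and in my tests LittleBlitzerGUI sets these options itself - this is only for illustration:

    import chess.engine

    # Open a UCI engine and send the option values used in these tests.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    engine.configure({"Hash": 512, "Move Overhead": 300})  # 512 MB hash, 300 ms move overhead
    engine.quit()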

To avoid distortions in the Ordo Elo calculation, from now on only two Stockfish versions (the latest official release and the latest dev version), one asmFish and one BrainFish are kept in the gamebase (the games of all older engine versions are deleted each time a new version has been tested). The older Elo results of Stockfish, asmFish and BrainFish can still be seen in the Elo diagrams below. BrainFish always plays with the latest Cerebellum-Library, of course, because otherwise BrainFish = Stockfish.

 

Latest update: 2018/07/13: asmFish 180503 (Release-date: 180705)

 

(Ordo-calculation fixed to Stockfish 9 = 3450 Elo)

 

See the individual statistics of engine-results here

Download the current gamebase here

Download the archive (all played games with HERT (215000 games)) here

See an Ordo-rating of the complete HERT-archive-base here

 

     Program                    Elo    +    -   Games   Score   Av.Op.  Draws

   1 BrainFish 180613 bmi2    : 3523    8    8  5000    77.1 %   3296   40.6 %
   2 asmBrainFish 9           : 3517    8    8  5000    77.7 %   3282   37.9 %
   3 asmFish 180503 bmi2      : 3482    8    8  5000    73.0 %   3296   44.1 % (new)
   4 Stockfish 180622 bmi2    : 3478    8    8  5000    72.5 %   3296   45.9 %
   5 Stockfish 9 180201       : 3450    6    6  7000    69.3 %   3297   48.0 %
   6 Houdini 6 pext           : 3424    5    5 11000    56.1 %   3373   54.2 %
   7 Komodo 11.3.1 bmi2       : 3394    6    6  6000    57.8 %   3331   50.9 %
   8 Komodo 12 bmi2           : 3392    5    5  9000    54.0 %   3358   53.6 %
   9 Fire 7.1 popc            : 3277    6    6  8000    35.3 %   3392   47.4 %
  10 Fire 6.1 popc            : 3207    6    6  7000    30.4 %   3366   42.5 %
  11 Fizbo 2 bmi2             : 3198    5    5 11000    26.4 %   3394   34.8 %
  12 Shredder 13 x64          : 3188    5    5 11000    25.3 %   3395   38.6 %
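
For readers who want to relate the Score and Av.Op. columns to the Elo column, here is a rough sanity check in Python using the standard logistic Elo model. This is not how Ordo works - Ordo computes all ratings simultaneously from the individual game results - so the numbers only roughly agree with the list above:

    import math

    def elo_from_score(score, avg_opponent):
        # Standard logistic model: Elo difference implied by a score fraction.
        diff = -400.0 * math.log10(1.0 / score - 1.0)
        return avg_opponent + diff

    # BrainFish 180613: 77.1 % score against an average opponent of 3296 Elo.
    print(round(elo_from_score(0.771, 3296)))  # about 3507; the Ordo list above says 3523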

 

The 4 different Fishes in this Elo-list:

 

- Stockfish

- asmFish = Stockfish manually rewritten in assembly language (look here)

- BrainFish = Stockfish playing with Cerebellum-Library by Thomas Zipproth (look here)

- asmBrainFish = asmFish playing with Cerebellum-Library

 

The version numbers of the engines (180622, for example) are the date of the latest patch included in the Stockfish sourcecode, not the release date of the engine file. The asmFish engines in particular are often released much later!
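
A minimal sketch in Python (the helper name is my own) that converts such a yymmdd version number into a readable date:

    from datetime import datetime

    def patch_date(version):
        # "180622" -> "June 22, 2018" (year, month, day, as described above)
        return datetime.strptime(version, "%y%m%d").strftime("%B %d, %Y")

    print(patch_date("170526"))  # May 26, 2017
    print(patch_date("180622"))  # June 22, 2018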

Below you will find a diagram of the progress of Stockfish in my tests since the end of 2016.

And below that diagram, the older diagrams.

 

You can save the diagrams (as JPG pictures in original size) on your PC by right-clicking and choosing "save image"...

The Elo ratings of older Stockfish dev versions in the Ordo calculation can differ a little from the Elo "dots" in the diagram: when the results/games of a new Stockfish dev version become part of the Ordo calculation, they can change the Elo ratings of the opponent engines, and that in turn can change the Elo ratings of older Stockfish dev versions in the Ordo calculation/ratinglist. In the diagram, however, each Elo "dot" is the rating of one Stockfish dev version at the moment its testrun was finished.

