Stefan Pohl Computer Chess

Private website for chess engine tests


Latest Website-News (2022/09/25): Ratinglist-testrun of Caissa 1.0: +61 Elo compared to Caissa 0.9. Good progress. A small improvement in my EAS-Ratinglist, too.

 

NN-testrun of Lc0 0.30dev 784968 finished (the Lc0 binary and net are the same ones that were sent to TCEC for playing the premier division in TCEC 23): see the result and download the games in the NN vs Dragon testing section.

 

 

Stay tuned.


Stockfish VLTC UHO Regression testing (2000 games (10min+3sec) vs Stockfish 15)

Latest testrun:

Stockfish 220917:  (+571,=1016,-413)= 54.0% = +28 Elo (+11 Elo to previous test)

Best testrun so far:

Stockfish 220917:  (+571,=1016,-413)= 54.0% = +28 Elo (+6 Elo to previous best)

See all results, get more information and download the games: Click on the yellow link above...
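The Elo figures above follow directly from the raw scores via the standard logistic Elo model. A minimal sketch in Python (the function name is mine, not part of the SPCC tooling):

```python
import math

def score_and_elo(wins, draws, losses):
    """Score and Elo difference implied by a W/D/L result,
    using the standard logistic model: diff = 400 * log10(s / (1 - s))."""
    games = wins + draws + losses
    score = (wins + 0.5 * draws) / games
    elo = 400 * math.log10(score / (1 - score))
    return score, elo

# Latest VLTC regression result: +571 =1016 -413 against Stockfish 15
score, elo = score_and_elo(571, 1016, 413)
print(f"score {score:.4f}, Elo {elo:+.0f}")  # score 0.5395, Elo +28
```

This reproduces the 54.0% / +28 Elo figures quoted above.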


SPCC Top Engines Ratinglist (+ regular testing of Stockfish Dev-versions)

 

Playing conditions:

 

Hardware: Since 20/07/21, an AMD Ryzen 3900 12-core (24 threads) notebook with 32GB RAM.

Speed: Stockfish 14.1 reaches 750 kn/s with 20 games running simultaneously (single thread, TurboBoost mode switched off, chess starting position)

Hash: 256MB per engine

GUI: Cutechess-cli (the GUI ends a game when a 5-piece endgame is on the board; all other games are played until mate or a draw by the rules of chess (threefold repetition, 50-move rule, stalemate, insufficient material))

Tablebases: None for the engines, 5-piece Syzygy for Cutechess-cli

Openings: HERT_500 testset (by Thomas Zipproth) (download the file in the "Download & Links" section or here). Note that the HERT set is not an anti-draw (UHO-style) opening set, but a classical, balanced opening set.

Ponder, Large Memory Pages & learning: Off

Thinking time: 3min+1sec per game/engine (average game duration: around 7.5 minutes). One 7000-game testrun takes about 2 days. The version numbers of the Stockfish engines are the date of the latest patch included in the Stockfish source code, not the release date of the engine file, written as year, month, day (example: 200807 = August 7, 2020). The Stockfish compile used is the AVX2 compile, which is the fastest on my AMD Ryzen CPU. Stockfish binaries are taken from abrok.eu (except the official release versions, which are taken from the official Stockfish website).
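The YYMMDD version scheme described above can be decoded mechanically. A small sketch (the helper name is mine, and it assumes the two-digit year belongs to the 2000s):

```python
from datetime import date

def patch_date(version: str) -> date:
    """Decode a dev-version number (YYMMDD) into the patch date.

    Example from the text: '200807' encodes August 7, 2020.
    Assumes the two-digit year means 20xx.
    """
    yy, mm, dd = int(version[0:2]), int(version[2:4]), int(version[4:6])
    return date(2000 + yy, mm, dd)

print(patch_date("200807"))  # 2020-08-07
print(patch_date("220917"))  # 2022-09-17
```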

 

To avoid distortions in the Ordo Elo calculation, from now on only the latest official Stockfish release plus the two latest dev-versions are included (the games of all older Stockfish versions are deleted each time a new version has been tested). The Elo results of older Stockfish versions can still be seen in the Elo diagrams below.

 

Latest update: 2022/09/25: Caissa 1.0 (+61 Elo to Caissa 0.9)

(best Stockfish Elo so far: Stockfish 220817 3814 SPCC-Elo)

 

(Ordo-calculation fixed to Stockfish 15 = 3802 Elo)

 

See the individual statistics of engine-results here

See the Engines Aggressiveness Score Ratinglist here

Download the current gamebase here

Download the complete game-archive here

See the full SPCC-Ratinglist (without Stockfish dev-versions) from 2020 until today here

(calculating the EAS-Ratings of the full list takes a lot of effort and is done only from time to time, not after each test)

     Program                    Elo    +    -  Games    Score   Av.Op. Draws

   1 Stockfish 220907 avx2    : 3811    7    7  7000    69.5%   3664   60.3%
   2 Stockfish 220917 avx2    : 3806    8    8  7000    68.8%   3664   62.0%
   3 Stockfish 15 220418      : 3802    7    7  8000    69.5%   3655   60.7%
   4 KomodoDragon 3.1 avx2    : 3769    6    6 11000    63.5%   3667   66.1%
   5 KomodoDragon 3.1 MCTS    : 3706    6    6 11000    55.3%   3667   69.9%
   6 Berserk 9 avx2           : 3646    6    6 11000    44.6%   3687   67.6%
   7 Revenge 3.0 avx2         : 3645    6    6 12000    46.0%   3676   69.8%
   8 Koivisto 8.13 avx2       : 3643    6    6 10000    42.8%   3697   69.7%
   9 Ethereal 13.75 nnue      : 3624    6    6 12000    43.0%   3678   66.3%
  10 Fire 8.NN avx2           : 3618    6    6 10000    44.5%   3661   61.3%
  11 Slow Chess 2.9 avx2      : 3589    5    5 11000    43.5%   3638   66.4%
  12 Stockfish final HCE      : 3582    6    6 11000    48.6%   3593   59.3%
  13 RubiChess 220813 avx2    : 3579    6    6  9000    56.7%   3531   69.2%
  14 Fire 8.NN MCTS avx2      : 3576    6    6 10000    47.8%   3594   67.4%
  15 rofChade 3.0 avx2        : 3555    5    5 10000    49.9%   3555   68.4%
  16 Seer 2.5.0 avx2          : 3531    5    5 11000    51.0%   3524   67.4%
  17 Minic 3.30 znver3        : 3529    6    6  9000    50.3%   3527   69.9%
  18 Uralochka 3.38c avx2     : 3512    6    6  8000    47.8%   3528   69.3%
  19 Rebel 15.1a avx2         : 3500    6    6  9000    49.4%   3505   63.0%
  20 PowerFritz 18 avx2       : 3479    6    6  7000    52.6%   3460   64.3%
  21 Arasan 23.4 avx2         : 3473    6    6  9000    46.5%   3499   62.7%
  22 Black Marlin 7.0 avx2    : 3467    6    6  9000    51.0%   3459   62.6%
  23 Nemorino 6.00 avx2       : 3459    7    7  7000    53.1%   3435   53.2%
  24 Igel 3.1.0 popavx2       : 3456    6    6  9000    42.2%   3512   67.1%
  25 Devre 4.0 avx2           : 3442    6    6  9000    47.5%   3460   66.0%
  26 Wasp 6.00 avx            : 3441    5    5 10000    49.7%   3443   60.0%
  27 Halogen 10.23.11 avx2    : 3410    6    6  8000    46.5%   3435   63.2%
  28 Clover 3.1 avx2          : 3404    6    6  8000    48.6%   3414   58.2%
  29 Tucano 10.00 avx2        : 3384    6    6  8000    53.4%   3360   57.3%
  30 Velvet 4.1.0 avx2        : 3377    7    7  8000    55.9%   3335   50.8%
  31 Caissa 1.0 avx2          : 3360    6    6  7000    47.0%   3381   55.4%
  32 Coiled 1.1 avx2          : 3352    6    6  9000    48.8%   3361   57.4%
  33 Scorpio 3.0.14d cpu      : 3345    7    7  8000    53.3%   3322   52.1%
  34 Weiss 220905 popc        : 3340    7    7  8000    54.8%   3305   48.9%
  35 Dragon 3 aggressive      : 3317    7    7  8000    49.3%   3322   43.5%
  36 Zahak 10.0 avx           : 3317    7    7  7000    47.1%   3338   48.8%
  37 Gogobello 3 avx2         : 3310    7    7  8000    49.1%   3317   52.2%
  38 Marvin 6.0.0 avx2        : 3308    6    6  9000    50.0%   3308   50.3%
  39 Caissa 0.9 avx2          : 3299    7    7  8000    44.4%   3339   49.0%
  40 Lc0 0.29 dnll 791921     : 3299    7    7  7000    51.6%   3287   46.3%
  41 Combusken 2.0.0 amd64    : 3296    7    7  8000    51.3%   3286   48.6%
  42 Mantissa 3.7.2 avx2      : 3288    7    7  7000    46.9%   3310   53.4%
  43 Stash 33.0 popc          : 3256    7    7  7000    44.7%   3293   46.8%
  44 Chiron 5 x64             : 3250    7    7  9000    41.6%   3311   42.3%
  45 Danasah 9.0 avx2         : 3243    8    8  7000    42.5%   3296   46.0%


Games        : 190000 (finished)

White Wins   : 50004 (26.3 %)
Black Wins   : 25201 (13.3 %)
Draws        : 114795 (60.4 %)
Unfinished   : 0

White Score  : 56.5 %
Black Score  : 43.5 %
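The side scores above are derived from the win/draw counts in the usual way (a win counts 1 point, a draw 0.5). A quick check in Python (the function name is mine):

```python
def side_scores(white_wins, black_wins, draws):
    """Overall score for each color: win = 1 point, draw = 0.5 points."""
    games = white_wins + black_wins + draws
    white = (white_wins + 0.5 * draws) / games
    return white, 1.0 - white

# Totals from the statistics above
white, black = side_scores(50004, 25201, 114795)
print(f"White {white:.1%}, Black {black:.1%}")  # White 56.5%, Black 43.5%
```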

The version numbers of the engines (180622, for example) are the date of the latest patch included in the Stockfish source code, not the release date of the engine file. Especially the asmFish engines are often released much later!! (Stockfish final HCE is Stockfish 200731, the latest version without a neural net, i.e. with HCE (Hand-Crafted Evaluation). This engine is (and perhaps will stay forever?) the strongest HCE engine on the planet. IMHO, this makes it very interesting for comparison.)

Some engines use an NNUE net based on the evaluations of other engines. I decided to test these engines, too. As far as I know, the following engines use NNUE nets based on the evals of other engines (if I missed an engine, please contact me):

Fire 8.NN, Nemorino 6.00, Gogobello 3 and Coiled 1.1 (using Stockfish-eval-based NNUE nets or nets taken directly from the Stockfish website); Stockfish since 210615 and Devre 4 (using Lc0-based NNUE nets); Halogen 10.23.11 (using a Koivisto-eval-based net).

Some engine testruns were aborted because the engine was too weak (below 3200 SPCC-Elo): LittleGoliath 3.15.3

Some engine testruns were aborted because the new version was clearly weaker than the engine version already listed: Fire 220827

Below you find a diagram of the progress of Stockfish in my tests since April 2022.

And below that diagram, the older diagrams.

 

You can save the diagrams (as JPG pictures in original size) on your PC by right-clicking and choosing "save image"...

The Elo ratings of older Stockfish dev-versions in the Ordo calculation can differ slightly from the Elo "dots" in the diagram: when the games of a new Stockfish dev-version enter the Ordo calculation, they can change the Elo ratings of the opponent engines, which in turn can shift the ratings of older Stockfish dev-versions in the ratinglist. The diagram is not affected, because each Elo "dot" shows the rating of a Stockfish dev-version at the moment its testrun was finished.
