Stefan Pohl Computer Chess

private website for chess engine tests


 

The Drawkiller Openings Project - the future of Computerchess

 

Idea: Stefan Pohl; groundwork: Stefan Pohl and Hauke Lutz; development and testing: Stefan Pohl

 

Current version: 1.5 (added a small set with 500 opening lines, which gave the lowest draw-rate of all Drawkiller sets)

 

 

Download the Drawkiller-openings here

 

Download the testgames (Drawkiller, SALC, FEOBOS Noomen) here

 

 

The Drawkiller openings are based on the main ideas of my SALC openings, combined with a very good idea by Hauke Lutz: very short opening lines that contain only pawn moves.

The Drawkiller openings are not meant for playing against other books; all engines in a tournament or testrun must use the Drawkiller openings!

 

The original SALC openings were filtered out of human chess games (the BigDatabase).

SALC stands for "Short And Long Castling": White and Black castle to opposite sides (if White played 0-0, Black played 0-0-0; if White played 0-0-0, Black played 0-0), with both queens still on the board, and no duplicate games. With SALC openings, the chance of attacks against the opponent's king is much higher than with normal opening books, so computer chess with SALC openings brings more action and fun to watch (and a measurably lower draw-rate). This matters because the faster computers get, the higher the quality of engine play and the higher the draw-rate in engine-engine matches, so computer chess is in danger of dying the "draw-death" in the near future. Using SALC openings gives computer chess a future beyond playing only draws or resorting to strange, incorrect gambit openings for the sake of a lower draw-rate!
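The opposite-side-castling criterion can be sketched in a few lines of Python. This is my own illustration, not the author's filtering tool, and it checks only the castling condition from a SAN move list (the "both queens still on board" condition would additionally require tracking the board state):

```python
def castling_sides(san_moves):
    """Return (white_side, black_side), each 'short', 'long' or None,
    from the first castling move each color plays in a SAN move list.
    White moves sit at even indices, Black at odd indices."""
    sides = [None, None]  # index 0 = White, 1 = Black
    for i, mv in enumerate(san_moves):
        color = i % 2
        if sides[color] is None:
            if mv.startswith("O-O-O"):   # test long castling first
                sides[color] = "long"
            elif mv.startswith("O-O"):
                sides[color] = "short"
    return tuple(sides)

def is_salc_castling(san_moves):
    """True if White and Black castled to opposite sides."""
    return set(castling_sides(san_moves)) == {"short", "long"}
```

A game where White castles short and Black castles long (or vice versa) passes the filter; same-side castling or an uncastled king fails it.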

The SALC openings were a huge success: the draw-rates in my testruns dropped from 63%-64% (with standard openings) to around 48%, without pushing the scores of asmFish and Komodo closer to 50%. For more information, please check the Readme file in the SALC V5.02 folder, which can also be downloaded from my website.

 

The problem with filtering SALC opening lines out of a database of human games is that such SALC positions are rare. To obtain a large number of them, the opening lines had to be quite long (8 moves for the small 500-openings set, 10 moves for the big 25000-openings set and opening book). And after 10 human moves, the opening is over and the middlegame has already begun.

 

The idea of the new Drawkiller openings is not to finish the opening in the opening book, but to let the engines play their own opening moves. This is especially important when testing the new neural-net-based engines (LC Zero, for example), which play very strongly and creatively in the opening.

 

How can this be done?

 

Some months ago, Hauke Lutz had the great idea of building opening sets that contain pawn moves only (4 pawn-plies, 8 pawn-plies), so the engines have to develop all other pieces by themselves. These pawn-ply files avoid strange pawn moves (a4, b4, g4, h4), which would lead to really strange positions on the chessboard.
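As a rough illustration (the exact generation procedure is not published here), the candidate first-ply pawn moves under this filter can be enumerated as follows, assuming the excluded "strange" moves are exactly the double pushes a4, b4, g4 and h4 listed above:

```python
# Sketch: White's candidate first pawn plies from the starting position.
# All single pushes are kept; double pushes on the a/b/g/h files are dropped.
# Black's case is symmetric (ranks 6 and 5 instead of 3 and 4).
FILES = "abcdefgh"
EXCLUDED_DOUBLE_PUSHES = {"a4", "b4", "g4", "h4"}

single_pushes = [f + "3" for f in FILES]
double_pushes = [f + "4" for f in FILES if f + "4" not in EXCLUDED_DOUBLE_PUSHES]
allowed_first_plies = single_pushes + double_pushes
# 8 single pushes + 4 central double pushes = 12 candidates per color
```

Deeper plies would need legality checks (a pawn may advance again from rank 3, captures are impossible this early, etc.), which this sketch deliberately leaves out.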

 

Of course, these pawn-move opening lines cannot be SALC positions, because all other pieces remain on their starting squares. But then I had the idea that these pawn-move openings can be combined with a sequence of moves which

 

a) moves queen, bishop and knight "out of the way", then

 

b) (moves the white king to a1, the a1-rook to d1 and the queen to e1) or (moves the white king to h1 and the h1-rook to e1). The same is done for Black, with the king going to the opposite side of the board (a8 or h8): when the white king is on a1, the black king is on h8; when the white king is on h1, the black king is on a8. That creates "SALC"-like starting positions with the two queens never on the same file (one queen on the e-file, one on the d-file, which prevents early queen exchanges when the d-file is opened!). I took this idea from my SALC half-closed positions, which measurably lowered the draw-rate (around -5%) compared to non-half-closed SALC positions.

 

c) moves bishop and knight back to their normal starting squares.

 

And when this is done, the pawn moves of Hauke Lutz are played.


 

These are the two move sequences which create the "SALC"-like king and rook positions on the chessboard:

 

1. e3 d6 2. Nh3 Na6 3. Bc4 Bf5 4. Ke2 Qd7 5. Na3 Qe6 6. Nb1 Kd7 7. Re1 Rd8 8. Kf1 Kc8 9. Kg1 Kb8 10. Kh1 Ka8 11. Ng1 Nb8 12. Bf1 Qd7 13. Nh3 Qe8 14. Ng1 Bc8

1. d3 e6 2. Na3 Nh6 3. Bf4 Bc5 4. Qd2 Qf6 5. Qe3 Na6 6. Kd2 Nb8 7. Rd1 Ke7 8. Kc1 Re8 9. Kb1 Kf8 10. Ka1 Kg8 11. Qd2 Kh8 12. Qe1 Qd8 13. Nb1 Ng8 14. Bc1 Bf8


 

As you can see, only one pawn move per side (1.e3 d6 or 1.d3 e6) is needed to open a path for the bishop and the queen. These pawn moves were filtered out of Hauke Lutz's pawn-move openings; the pawn-move opening lines could then be appended to the move sequences above without generating impossible (illegal) move sequences. The result is "SALC"-like openings (kings on different sides of the board) in which, apart from the pawns, only the kings, one rook per side and one of the two queens have left their normal starting squares.
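The linking step amounts to concatenating two SAN ply lists and renumbering the moves. A minimal sketch of my own (the real files were produced with the author's tools), using the first shuffle sequence from above plus the example pawn continuation from the drawkiller_tournament.pgn file:

```python
def to_movetext(san_plies):
    """Render a flat list of SAN plies as numbered PGN movetext."""
    parts = []
    for i, mv in enumerate(san_plies):
        if i % 2 == 0:                      # White to move: emit move number
            parts.append(f"{i // 2 + 1}.")
        parts.append(mv)
    return " ".join(parts)

# Drawkiller line = fixed shuffle prefix + a pawn-only continuation
prefix = ("e3 d6 Nh3 Na6 Bc4 Bf5 Ke2 Qd7 Na3 Qe6 Nb1 Kd7 Re1 Rd8 "
          "Kf1 Kc8 Kg1 Kb8 Kh1 Ka8 Ng1 Nb8 Bf1 Qd7 Nh3 Qe8 Ng1 Bc8").split()
pawn_line = "h3 c6 e4 f6 d4 e5".split()     # example 6-ply pawn continuation
line = to_movetext(prefix + pawn_line)
```

Because the shuffle prefix ends with every square the pawn line needs still reachable, the concatenation never produces an illegal sequence, which is exactly why the matching first pawn moves had to be filtered beforehand.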

 

Here is an example of a "complete" artificial SALC opening line from the drawkiller_tournament.pgn file:

 

1. e3 d6 2. Nh3 Na6 3. Bc4 Bf5 4. Ke2 Qd7 5. Na3 Qe6 6. Nb1 Kd7 7. Re1 Rd8 8. Kf1 Kc8 9. Kg1 Kb8 10. Kh1 Ka8 11. Ng1 Nb8 12. Bf1 Qd7 13. Nh3 Qe8 14. Ng1 Bc8 15. h3 c6 16. e4 f6 17. d4 e5

 

 

(The line is 17 moves deep, but beyond the piece shuffle only 3 pawn moves were added (while the kings "traveled" to the edge of the board), so for engine play this opening line is effectively only 3 moves deep: the engines have to play the whole opening themselves and develop all non-pawn pieces from the back rank.)
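The effective depth of such a line can be estimated by counting pawn plies, since in SAN only pawn moves begin with a lowercase file letter. A small helper of my own, not part of the project's tooling:

```python
def pawn_plies(san_plies):
    """Count plies that are pawn moves: in SAN a pawn move starts with a
    file letter a-h (piece moves start with K, Q, R, B or N; castling
    with 'O')."""
    return sum(1 for mv in san_plies if mv[0] in "abcdefgh")

example = ("e3 d6 Nh3 Na6 Bc4 Bf5 Ke2 Qd7 Na3 Qe6 Nb1 Kd7 Re1 Rd8 "
           "Kf1 Kc8 Kg1 Kb8 Kh1 Ka8 Ng1 Nb8 Bf1 Qd7 Nh3 Qe8 Ng1 Bc8 "
           "h3 c6 e4 f6 d4 e5").split()
# 8 pawn plies = 4 full pawn moves; the first pair (1.e3 d6) only opens
# lines for the shuffle, leaving the 3 "real" book moves mentioned above.
```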

 

These Drawkiller openings combine the advantages of very short opening lines with the much more spectacular and much less drawish chess that my SALC idea brings to computer chess.

 

Important: Note that the Drawkiller openings contain only normal chess moves; each line starts from the normal starting position of classical chess. Drawkiller openings are not any kind of chess variant like Shuffle Chess or Chess960! Because of this, it was possible to build opening books for the chess GUIs (Fritz, Arena, Shredder) out of the Drawkiller openings, which can be used for engine tournaments in those GUIs. And every chess engine on the planet can play with Drawkiller openings, because they are normal, classical chess!!!

 

 

Test results:

 

(asmFish 170426 vs. Komodo 10.4, 5'+3'' time control, single core, no ponder, no endgame bases, LittleBlitzerGUI, 1000 games per testrun(!), except the Noomen Gambit lines (only 246 positions, so 492 games) and the Noomen TCEC Superfinal (only 100 positions, so 200 games))

 

Stockfish Framework standard 8-move openings: Score 60.3% - 39.7%, draws: 63.4%

FEOBOS v20 contempt 5 top 500 openings: Score 58.7% - 41.3%, draws: 64.1%

HERT 500 set: Score 60.6% - 39.4%, draws: 60.4%

Noomen Gambit lines: Score 59.1% - 40.9%, draws: 59.3%

4 GM-moves short book: Score 60.5% - 39.5%, draws: 57.1%

Noomen TCEC Superfinal (Seasons 9+10): Score 62.5% - 37.5%, draws: 50.0%

SALC V5 half-closed: Score 61.6% - 38.4%, draws: 49.2%

SALC V5 full-closed 500 positions: Score 66.5% - 33.5%, draws: 47.7%

 

NEW:

 

Drawkiller (Big set): Score 63.8% - 36.2%, draws: 39.5%

 

Drawkiller (Normal set): Score: 65.3% - 34.7%, draws: 33.5%

Drawkiller (Tournament set): Score: 65.3% - 34.7%, draws: 33.5%

 

New in V1.5:

Drawkiller (small 500 set): Score: 66.4% - 33.6%, draws: 30.5% (!!!)

 

 

(No mistake on my part: the results of the normal set and the tournament set were exactly identical after 1000 games in my testruns.)
 

As you can see, the Drawkiller openings are not just an improvement over my SALC openings, they are a breakthrough into another dimension! Never before has any openings set given such low draw-rates without crunching the engines' scores towards 50%; instead, the scores move away from 50%. The Drawkiller Normal and Tournament sets nearly halve the draw-rate compared to FEOBOS or the Stockfish Framework 8-move openings, and the small 500 set more than halves it.

I would never have expected that this was possible. The Drawkiller project really is a breakthrough into another dimension, and it can hold off the draw-death of computer chess for the next decades. Imagine a TCEC tournament with a nearly halved draw-rate... how awesome would that be?!

 

 

 

Enjoy Drawkiller-Chess


 

As you can see in the test results, the draw-rate of the big Drawkiller set is a little higher than that of the tournament and normal files. So I recommend always using the smallest Drawkiller set/book that is feasible for your engine tournament or testrun; that will give the best results. For most tournaments and testruns, the normal files (13318 different end-positions) or the tournament files (6848 different end-positions) should be big enough. Use the big files only for very big tournaments with many, many games; I would recommend them only for engine developers who want to measure the small Elo gains of single patches. The very best results come from the Drawkiller_small_500 openings, which contain 500 opening lines with only 4 pawn-plies plus the Drawkiller move sequences. See the test results above.

 

The Drawkiller download also contains the raw data: the unfiltered, unchecked and unmixed pgn files. Do not use these files for engine play; they contain end-positions that are very bad for White or Black. The raw data is included to make it possible to filter it again in the future (with stronger engines or faster machines), so the Drawkiller openings can be rebuilt!

 

 

The Drawkiller openings were filtered out of this raw data with Komodo 11.2.2. Komodo checked all end-positions (using the pgnscanner tool), running on an i7-6700HQ 2.6GHz notebook (Skylake CPU) with all 4 cores, 2048 MB hash and Contempt=0.

 

For the big files and the normal files, the Komodo evaluation had to lie in this interval, otherwise the position was deleted:

eval: [-0.49;-0.10] or [+0.10;+0.49]

 

For the tournament files, the Komodo evaluation had to lie in a smaller interval:

eval: [-0.39;-0.20] or [+0.20;+0.39]

 

For the small 500 file, the Komodo evaluation had to lie in an even smaller interval:

eval: [-0.36;-0.25] or [+0.25;+0.36]
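Each interval pair is symmetric, so the filter reduces to a check on the absolute evaluation. A minimal sketch in Python (illustrative only; the actual filtering was done with Komodo via the pgnscanner tool):

```python
def keep_position(eval_pawns, lo, hi):
    """Keep an end-position only if |eval| lies inside [lo, hi]:
    neither too equal (drawish) nor too one-sided."""
    return lo <= abs(eval_pawns) <= hi

# Interval bounds per set, as given above (evaluations in pawn units)
INTERVALS = {
    "big/normal": (0.10, 0.49),
    "tournament": (0.20, 0.39),
    "small_500":  (0.25, 0.36),
}
```

For example, an end-position evaluated at -0.25 survives the tournament filter, while a dead-equal 0.05 or a lopsided 0.60 is deleted in every set.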

 

 

As you can see, the eval intervals are quite small: no end-position of any Drawkiller opening gives a huge advantage to White or Black!

 

Thinking time for each end-position was:

small 500 set: 60''

Normal/Tournament set: 45''

Big set: 30''


 

 

Copyright (C) 2018 by Stefan Pohl, except for the pawn-ply sets used (4pp and 8pp; the 6pp set was built by Stefan Pohl out of the 4pp set): (C) Hauke Lutz


 

 

2019/01/06 One of the biggest opening-set tests of all time!

 

8 opening sets were tested: Drawkiller tournament, SALC V5, Noomen (TCEC Superfinal openings of Seasons 9-13 and Gambit openings), Stockfish Framework 2-move and 8-move openings, 4 GM moves (from MegaBase 2018, checked with Komodo), the HERT set by Thomas Zipproth, and FEOBOS v20.1 contempt 3 (using the contempt 3 openings is recommended by the author, Frank Quisinsky). With each opening set, 7 engines played a 2100-game round-robin tournament (no opening set played against another opening set!). For each game, one opening line was chosen at random by the GUI.

The 7 engines in each round-robin: Stockfish 10, Houdini 6, Komodo 12, Fire 7.1, Ethereal 11.12, Komodo 12.2.2 MCTS and Shredder 13. 100 games were played in each head-to-head pairing, so each engine played 600 games per round-robin.

Single core, 3'+1'', LittleBlitzerGUI, no ponder, no bases, 256 MB hash, i7-6700HQ 2.6GHz notebook (Skylake CPU), Windows 10 64-bit. 3 games ran in parallel; each testrun took 3-4 days, depending on the average game duration. Draws were adjudicated after 130 moves played by the engines (counted after the opening line was finished).

 

Download all 8 x 2100 = 16800 played games here


Conclusions (all data and results below!):

 

First of all, the main question: why are low draw-rates and wide Elo-spreadings of engine testing results better? You find the answer here

 

This excellent experiment by Andreas Strangmueller shows beyond any doubt that:

The more thinking time (or faster hardware, which amounts to the same thing) computer chess gets, the higher the draw-rates climb and the more the Elo-spreadings shrink. So it is only a question of time until the draw-rates get so high, and the Elo-spreading of testing results so small, that engine testing and engine tournaments no longer give any valuable results, because the Elo differences will always stay inside the error bars, even with thousands of played games. So it is absolutely necessary to lower the draw-rates and raise the Elo-spreadings if computer chess is to survive the next decades!
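Why a wide Elo-spreading helps against the error bars can be illustrated with a back-of-envelope calculation. This is my own sketch using the standard logistic Elo model, with the Stockfish 10 numbers from the tables further down:

```python
import math

def elo_diff(score):
    """Elo difference implied by an average score (logistic Elo model)."""
    return 400 * math.log10(score / (1 - score))

def score_stderr(win, draw, loss, games):
    """Standard error of the mean score for a win/draw/loss profile."""
    s = win + 0.5 * draw
    var = win * (1 - s) ** 2 + draw * (0.5 - s) ** 2 + loss * s ** 2
    return math.sqrt(var / games)

# Stockfish 10 in the Drawkiller testrun: 82.6% score, 21.8% draws
dk_gap = elo_diff(0.826)    # roughly 271 Elo above its opponents' average
# Stockfish 10 with the 8-move book: 73.9% score, 44.8% draws
sf8_gap = elo_diff(0.739)   # roughly 181 Elo
```

The error bars shrink only slowly with more games (with the square root of the game count), while high draw-rates compress every score toward 50% and thus every Elo gap toward zero, so the gaps between engines sink into the error bars first.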

Hence the following conclusions from this huge experiment with different opening sets:

 

1) The Drawkiller openings are a breakthrough into another dimension of engine testing: the overall draw-rate (27%) is nearly halved compared to classical opening sets (FEOBOS: 51.3%, Stockfish Framework 8-move openings: 51.9%) AND the Elo-spreading is around 150 Elo wider (!!), so the rankings are much more stable and reliable, because the error bars are nearly the same in all testruns. On top of that, the average game duration with Drawkiller was 11.5% lower than with a classical opening set, so in the same time you can play more than 10% more games on the same machine, which also improves the quality of the results, because the error bars shrink with more played games. Download the future of computer chess (the Drawkiller openings) here

 

2) The ranking order of the engines is exactly the same in all mini-ratinglists generated by ORDO from these testruns. So what we learn here is that it does not matter whether an opening set covers all ECO codes (FEOBOS does!) or not (Drawkiller and SALC V5 definitely do not!): the ranking order of the engines is exactly the same. The oft-repeated claim that covering all (or the most-played) human ECO codes in an opening set is important for engine testing, because the results would otherwise be distorted, is a FAIRY TALE and nothing else!!!

 

3) At the bottom, I added the CEGT and CCRL ratinglists with the same engines that were used for this project (in nearly the same versions, e.g. Ethereal 11 instead of Ethereal 11.12). There you can see that the rankings in those ratinglists are exactly the same, too. So the oft-repeated claim that it is necessary to test engines against a large number of opponents for a valid rating/ranking is a FAIRY TALE as well: 6 opponents gave the same ranking in all testruns of this project as CEGT and CCRL do with many, many more opponents.

 

Long summary (with ratinglists):


Drawkiller tournament:

 

Avg game length = 389.777 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3459   23   23   600    82.6 %   3157   21.8 %
   2 Houdini 6 pext         : 3356   20   20   600    71.1 %   3174   29.2 %
   3 Komodo 12 bmi2         : 3294   20   20   600    63.3 %   3184   30.8 %
   4 Fire 7.1 popc          : 3145   19   19   600    42.8 %   3209   30.3 %
   5 Ethereal 11.12 pext    : 3076   19   19   600    33.5 %   3221   28.0 %
   6 Komodo 12.2.2 MCTS     : 3060   20   20   600    31.4 %   3223   25.2 %
   7 Shredder 13 x64        : 3011   20   20   600    25.3 %   3231   23.3 %

 

Elo-spreading: from first to last: 448 Elo

Number of early draws:
first 10 moves played by engines: 0 draws= 0%
first 20 moves played by engines: 10 draws= 0.48%
first 30 moves played by engines: 46 draws= 2.19%

 

Games        : 2100 (finished)
White Wins   : 822 (39.1 %)
Black Wins   : 712 (33.9 %)
Draws        : 566 (27.0 %)
White Score  : 52.6 %
Black Score  : 47.4 %


 

SALC V5:

 

Avg game length = 399.781 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3404   21   21   600    78.3 %   3166   32.8 %
   2 Houdini 6 pext         : 3304   19   19   600    65.4 %   3183   39.8 %
   3 Komodo 12 bmi2         : 3266   18   18   600    60.1 %   3189   44.5 %
   4 Fire 7.1 popc          : 3166   18   18   600    45.3 %   3206   46.2 %
   5 Ethereal 11.12 pext    : 3120   18   18   600    38.4 %   3213   43.2 %
   6 Komodo 12.2.2 MCTS     : 3076   19   19   600    32.2 %   3221   34.3 %
   7 Shredder 13 x64        : 3063   19   19   600    30.4 %   3223   35.5 %

 

Elo-spreading: from first to last: 341 Elo

Number of early draws:
first 10 moves played by engines: 5 draws= 0.24%
first 20 moves played by engines: 39 draws= 1.86%
first 30 moves played by engines: 81 draws= 3.86%

 

Games        : 2100 (finished)
White Wins   : 689 (32.8 %)
Black Wins   : 582 (27.7 %)
Draws        : 829 (39.5 %)
White Score  : 52.5 %
Black Score  : 47.5 %

 

 

Noomen (TCEC openings Season 9-13 Superfinal and Gambit-openings (477 lines)):

 

Avg game length = 405.223 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3388   20   20   600    76.8 %   3169   39.7 %
   2 Houdini 6 pext         : 3289   19   19   600    63.6 %   3185   42.8 %
   3 Komodo 12 bmi2         : 3257   18   18   600    58.8 %   3191   42.0 %
   4 Fire 7.1 popc          : 3170   17   17   600    45.6 %   3205   45.8 %
   5 Ethereal 11.12 pext    : 3129   18   18   600    39.5 %   3212   46.3 %
   6 Komodo 12.2.2 MCTS     : 3091   18   18   600    33.9 %   3218   40.2 %
   7 Shredder 13 x64        : 3076   18   18   600    31.8 %   3221   43.8 %

 

Elo-spreading: from first to last: 312 Elo

Number of early draws:
first 10 moves played by engines: 7 draws= 0.33%
first 20 moves played by engines: 32 draws= 1.52%
first 30 moves played by engines: 90 draws= 4.29%

 

Games        : 2100 (finished)
White Wins   : 691 (32.9 %)
Black Wins   : 507 (24.1 %)
Draws        : 902 (43.0 %)
White Score  : 54.4 %
Black Score  : 45.6 %

 

 

Stockfish Framework 2moves openings:

 

Avg game length = 430.108 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3395   20   20   600    77.5 %   3168   35.0 %
   2 Houdini 6 pext         : 3291   18   18   600    63.8 %   3185   46.8 %
   3 Komodo 12 bmi2         : 3254   18   18   600    58.4 %   3191   48.8 %
   4 Fire 7.1 popc          : 3164   17   17   600    44.8 %   3206   46.2 %
   5 Ethereal 11.12 pext    : 3142   18   18   600    41.5 %   3210   48.0 %
   6 Komodo 12.2.2 MCTS     : 3092   19   19   600    34.1 %   3218   44.8 %
   7 Shredder 13 x64        : 3062   19   19   600    30.0 %   3223   40.0 %

 

Elo-spreading: from first to last: 333 Elo

Number of early draws:
first 10 moves played by engines: 1 draws= 0.05%
first 20 moves played by engines: 12 draws= 0.57%
first 30 moves played by engines: 31 draws= 1.48%

 

Games        : 2100 (finished)
White Wins   : 689 (32.8 %)
Black Wins   : 482 (23.0 %)
Draws        : 929 (44.2 %)
White Score  : 54.9 %
Black Score  : 45.1 %

 

 

4 GM moves (out of MegaBase 2018, checked with Komodo):

 

Avg game length = 449.414 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3396   20   20   600    77.5 %   3167   37.3 %
   2 Houdini 6 pext         : 3307   19   19   600    65.9 %   3182   48.2 %
   3 Komodo 12 bmi2         : 3262   18   18   600    59.5 %   3190   52.0 %
   4 Fire 7.1 popc          : 3151   17   17   600    42.9 %   3208   48.5 %
   5 Ethereal 11.12 pext    : 3119   18   18   600    38.2 %   3213   52.0 %
   6 Komodo 12.2.2 MCTS     : 3099   18   18   600    35.3 %   3217   45.3 %
   7 Shredder 13 x64        : 3066   19   19   600    30.7 %   3222   41.7 %

 

Elo-spreading: from first to last: 330 Elo

Number of early draws:
first 10 moves played by engines: 1 draws= 0.05%
first 20 moves played by engines: 7 draws= 0.33%
first 30 moves played by engines: 25 draws= 1.19%

 

Games        : 2100 (finished)
White Wins   : 679 (32.3 %)
Black Wins   : 446 (21.2 %)
Draws        : 975 (46.4 %)
White Score  : 55.5 %
Black Score  : 44.5 %

 

 

HERT set (500 pos):

 

Avg game length = 442.339 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3384   20   20   600    76.3 %   3169   42.2 %
   2 Houdini 6 pext         : 3300   19   19   600    65.1 %   3183   48.2 %
   3 Komodo 12 bmi2         : 3270   19   19   600    60.7 %   3188   53.3 %
   4 Fire 7.1 popc          : 3139   18   18   600    41.0 %   3210   52.3 %
   5 Ethereal 11.12 pext    : 3131   18   18   600    39.8 %   3212   50.8 %
   6 Komodo 12.2.2 MCTS     : 3108   18   18   600    36.4 %   3215   46.5 %
   7 Shredder 13 x64        : 3068   19   19   600    30.8 %   3222   44.3 %

 

Elo-spreading: from first to last: 316 Elo

Number of early draws:
first 10 moves played by engines: 4 draws= 0.19%
first 20 moves played by engines: 19 draws= 0.90%
first 30 moves played by engines: 46 draws= 2.19%

 

Games        : 2100 (finished)
White Wins   : 661 (31.5 %)
Black Wins   : 426 (20.3 %)
Draws        : 1013 (48.2 %)
White Score  : 55.6 %
Black Score  : 44.4 %

 

 

FEOBOS v20.1 contempt 3:

 

Avg game length = 437.481 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3365   19   19   600    73.9 %   3173   45.5 %
   2 Houdini 6 pext         : 3301   19   19   600    65.3 %   3183   51.8 %
   3 Komodo 12 bmi2         : 3265   18   18   600    60.0 %   3189   55.0 %
   4 Fire 7.1 popc          : 3161   17   17   600    44.2 %   3206   59.7 %
   5 Ethereal 11.12 pext    : 3151   18   18   600    42.6 %   3208   53.8 %
   6 Komodo 12.2.2 MCTS     : 3094   18   18   600    34.3 %   3218   47.5 %
   7 Shredder 13 x64        : 3063   19   19   600    29.8 %   3223   45.7 %

 

Elo-spreading: from first to last: 302 Elo

Number of early draws:
first 10 moves played by engines: 0 draws= 0%
first 20 moves played by engines: 22 draws= 1.05%
first 30 moves played by engines: 61 draws= 2.90%

 

Games        : 2100 (finished)
White Wins   : 638 (30.4 %)
Black Wins   : 385 (18.3 %)
Draws        : 1077 (51.3 %)
White Score  : 56.0 %
Black Score  : 44.0 %

 

 

Stockfish Framework 8moves openings:

 

Avg game length = 438.899 sec

 

     Program                  Elo    +    -   Games   Score   Av.Op.  Draws

   1 Stockfish 10 bmi2      : 3363   19   19   600    73.9 %   3173   44.8 %
   2 Houdini 6 pext         : 3276   18   18   600    61.8 %   3187   52.8 %
   3 Komodo 12 bmi2         : 3267   18   18   600    60.3 %   3189   55.7 %
   4 Fire 7.1 popc          : 3167   17   17   600    45.0 %   3206   54.0 %
   5 Ethereal 11.12 pext    : 3140   17   17   600    40.8 %   3210   52.3 %
   6 Komodo 12.2.2 MCTS     : 3106   18   18   600    35.8 %   3216   52.0 %
   7 Shredder 13 x64        : 3082   18   18   600    32.3 %   3220   51.7 %

 

Elo-spreading: from first to last: 281 Elo

Number of early draws:
first 10 moves played by engines: 6 draws= 0.29%
first 20 moves played by engines: 20 draws= 0.95%
first 30 moves played by engines: 53 draws= 2.52%

 

Games        : 2100 (finished)
White Wins   : 610 (29.0 %)
Black Wins   : 400 (19.0 %)
Draws        : 1090 (51.9 %)
White Score  : 55.0 %
Black Score  : 45.0 %

 

 

For comparison:

 

CEGT 40/4 ratinglist (singlecore):

 

1     Stockfish 10.0 x64 1CPU     3450
2     Houdini 6.0 x64 1CPU        3372
3     Komodo 12.1.1 x64 1CPU      3337
4     Fire 7.1 x64 1CPU           3242
5     Ethereal 11.00 x64 1CPU     3186
6     Komodo 12.2 x64 1CPU (MCTS) 3182
7     Shredder 13 x64 1CPU        3152

 

Elo-spreading: from first to last: 298 Elo

 

 

CCRL 40/4 ratinglist (singlecore):

 

1    Stockfish 10 64-bit          3498
2    Houdini 6 64-bit             3446
3    Komodo 12 64-bit             3410
4    Fire 7.1 64-bit              3333
5    Ethereal 11.00 64-bit        3301
6    Komodo 12.2.2 MCTS 64-bit    3288
7    Shredder 13 64-bit           3269

 

Elo-spreading: from first to last: 229 Elo by bayeselo. (With ORDO: 276 Elo)