
Update default main net to nn-1ceb1ade0001.nnue #5090

Closed

Commits on Mar 5, 2024

  1. Update default main net to nn-1ceb1ade0001.nnue

    Created by retraining the previous main net `nn-b1a57edbea57.nnue` with:
    - some of the same options as before:
      - ranger21, more WDL skipping, 15% more loss when Q is too high
    - removal of the huge 514G pre-interleaved binpack
    - removal of SF-generated dfrc data (dfrc99-16tb7p-filt-v2.min.binpack)
    - interleaving many binpacks at training time
    - training with some bestmove capture positions where SEE < 0
    - increased usage of torch.compile to speed up training by up to 40%
    
    ```yaml
    experiment-name: 2560--S10-dfrc0-to-dec2023-skip-more-wdl-15p-more-loss-high-q-see-ge0-sk28
    nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more
    start-from-engine-test-net: True
    
    early-fen-skipping: 28
    training-dataset:
  # similar to, but not exactly the same as:
      # official-stockfish#4635
      - /data/S5-5af/leela96.v2.min.binpack
      - /data/S5-5af/test60-2021-11-12-novdec-12tb7p.v6-dd.min.binpack
      - /data/S5-5af/test77-2021-12-dec-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test78-2022-01-to-05-jantomay-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test78-2022-06-to-09-juntosep-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test79-2022-04-apr-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test79-2022-05-may-16tb7p.v6-dd.min.binpack
    
      - /data/S5-5af/test80-2022-06-jun-16tb7p.v6-dd.min.unmin.binpack
      - /data/S5-5af/test80-2022-07-jul-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test80-2022-08-aug-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test80-2022-09-sep-16tb7p.v6-dd.min.unmin.binpack
      - /data/S5-5af/test80-2022-10-oct-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test80-2022-11-nov-16tb7p.v6-dd.min.binpack
    
      - /data/S5-5af/test80-2023-01-jan-16tb7p.v6-sk20.min.binpack
      - /data/S5-5af/test80-2023-02-feb-16tb7p.v6-dd.min.binpack
      - /data/S5-5af/test80-2023-03-mar-2tb7p.min.unmin.binpack
      - /data/S5-5af/test80-2023-04-apr-2tb7p.binpack
      - /data/S5-5af/test80-2023-05-may-2tb7p.min.dd.binpack
    
      # official-stockfish#4782
      - /data/S6-1ee1aba5ed/test80-2023-06-jun-2tb7p.binpack
      - /data/S6-1ee1aba5ed/test80-2023-07-jul-2tb7p.min.binpack
    
      # official-stockfish#4972
      - /data/S8-baff1edbea57/test80-2023-08-aug-2tb7p.v6.min.binpack
      - /data/S8-baff1edbea57/test80-2023-09-sep-2tb7p.binpack
      - /data/S8-baff1edbea57/test80-2023-10-oct-2tb7p.binpack
    
      # official-stockfish#5056
      - /data/S9-b1a57edbea57/test80-2023-11-nov-2tb7p.binpack
      - /data/S9-b1a57edbea57/test80-2023-12-dec-2tb7p.binpack
    
    num-epochs: 800
    lr: 4.375e-4
    gamma: 0.995
    start-lambda: 1.0
    end-lambda: 0.7
    ```
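    As a rough sketch (assuming the usual nnue-pytorch conventions: per-epoch exponential LR decay and a linear ramp of lambda from `start-lambda` to `end-lambda`), the last few options above imply a schedule like:

    ```python
    # Sketch only; lr, gamma, and num-epochs are taken from the config above.
    def lr_at(epoch, lr0=4.375e-4, gamma=0.995):
        # learning rate decays multiplicatively each epoch
        return lr0 * gamma ** epoch

    def lambda_at(epoch, num_epochs=800, start=1.0, end=0.7):
        # lambda weights engine-eval loss vs. game-result (WDL) loss;
        # assumed here to ramp linearly over the run
        t = epoch / max(num_epochs - 1, 1)
        return start + (end - start) * t
    ```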
    
    This particular net was reached at epoch 759. Using more torch.compile decorators
    in nnue-pytorch model.py than in the previous main net training run sped up training
    by up to 40% on Tesla GPUs with a recent PyTorch build compiled with CUDA 12:
    https://github.com/linrock/nnue-tools/blob/7fb9831/Dockerfile
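    A minimal sketch of the pattern (a toy module for illustration only; the real model is in nnue-pytorch model.py, and the actual speedup depends on GPU and PyTorch build):

    ```python
    import torch

    # Toy stand-in for the network, not the actual NNUE architecture.
    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.ft = torch.nn.Linear(768, 256)
            self.out = torch.nn.Linear(256, 1)

        def forward(self, x):
            # clipped-ReLU-style activation, common in NNUE nets
            return self.out(torch.clamp(self.ft(x), 0.0, 1.0))

    # torch.compile wraps the module so TorchDynamo/Inductor can fuse kernels;
    # compilation happens lazily on the first forward pass.
    model = torch.compile(TinyNet())
    ```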
    
    Skipping positions with bestmove captures where static exchange evaluation is >= 0
    is based on the implementation from Sopel's NNUE training & experimentation log:
    https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
    Experiment 293 - only skip captures with see>=0
    
    Positions with bestmove captures where score == 0 are always skipped for
    compatibility with minimized binpacks, since the original minimizer sets
    scores to 0 for slight improvements in compression.
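    In spirit, the combined skip rule reads like the sketch below (all names are illustrative; the real implementation lives in the trainer's data loader):

    ```python
    # Sketch of the position-skip rule described above; hypothetical names.
    def skip_position(bestmove_is_capture, see_value, score):
        if not bestmove_is_capture:
            return False
        # skip captures with SEE >= 0 (Experiment 293); always skip
        # score == 0 captures for minimized-binpack compatibility
        return see_value >= 0 or score == 0
    ```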
    
    The trainer branch used was:
    https://github.com/linrock/nnue-pytorch/tree/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more
    
    Binpacks were renamed so that sorting them by name orders them chronologically.
    The binpack data are otherwise the same as the similarly named binpacks under the
    prior naming convention.
    
    Training data can be found at:
    https://robotmoon.com/nnue-training-data/
    
    Passed STC:
    https://tests.stockfishchess.org/tests/view/65e3ddd1f2ef6c733362ae5c
    LLR: 2.92 (-2.94,2.94) <0.00,2.00>
    Total: 149792 W: 39153 L: 38661 D: 71978
    Ptnml(0-2): 675, 17586, 37905, 18032, 698
    
    Passed LTC:
    https://tests.stockfishchess.org/tests/view/65e4d91c416ecd92c162a69b
    LLR: 2.94 (-2.94,2.94) <0.50,2.50>
    Total: 64416 W: 16517 L: 16135 D: 31764
    Ptnml(0-2): 38, 7218, 17313, 7602, 37
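    For reference, pentanomial counts like these can be converted into a rough logistic-Elo point estimate (a back-of-the-envelope sketch that ignores the SPRT bounds and error bars):

    ```python
    import math

    def elo_from_ptnml(ptnml):
        # ptnml[k] counts game pairs scoring k * 0.5 points (0, 0.5, 1, 1.5, 2)
        pairs = sum(ptnml)
        points = sum(k * 0.5 * n for k, n in enumerate(ptnml))
        score = points / (2 * pairs)  # per-game score rate
        return -400.0 * math.log10(1.0 / score - 1.0)
    ```

    For example, the LTC counts above work out to roughly +2 Elo.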
    
    Bench: 1373183
    linrock committed Mar 5, 2024
    Commit ba7586c