Releases: lightvector/KataGo
New Human-like Play and Analysis + more bugfixes
This is a bugfix release for v1.15.0 / v1.15.1 that fixes a few oddities and unexpected behaviors with human SL settings. It fixes one further bug on top of v1.15.2. Thanks again to everyone reporting issues!
Compared to v1.15.0 / v1.15.1 it includes a new example config gtp_human9d_search_example.cfg that demonstrates how to get very strong (possibly mildly-superhuman) play while still getting a nontrivial amount of human style bias from the human SL model.
This release also includes executables compiled for CUDA 12.5, CUDNN 8.9.7, and TensorRT 10.2.0, giving a more recent alternative for users who may have had issues with the earlier releases compiled for older CUDA versions.
See the v1.15.0 release page for help and documentation and pretty pictures about the latest-released features, notably the new Human SL model released with v1.15.0.
Changes in v1.15.3
- Fixed a bug where GTP genmove might fail to return a legal move when using a high proportion of humanSL "weightless" visits combined with a low maxVisits.
- Slight adjustment to default values in gtp_human9d_search_example.cfg.
Changes in v1.15.2
- Fixed an issue where weightless visits (e.g. from `humanSLRootExploreProbWeightless`) would not be counted towards the visits or playouts limit for a search.
- Fixed an issue where KataGo would ignore human SL exploration parameters on the first visit or two to any node.
- Fixed a compile error when compiling KataGo for testing without any backend (i.e. with the dummy backend).
- Fixed an issue with CUDA backend compilation types on some systems.
- Clarified various docs about human SL model usage, including some minor improved comments in gtp_human5k_example.cfg.
- Added a new example config, gtp_human9d_search_example.cfg.
New Human-like Play and Analysis + more bugfixes
This is not the latest release - see the release v1.15.3 for various bugfixes and the versions that you should actually use.
Still, see the v1.15.0 release page for help and documentation and pretty pictures about the major features released with v1.15.x, although the executables there are outdated and/or buggier than those in v1.15.3.
This is a bugfix release for v1.15.0 / v1.15.1 that fixes a few oddities and unexpected behaviors with human SL settings.
It also includes a new example config gtp_human9d_search_example.cfg that demonstrates how to get very strong (possibly mildly-superhuman) play while still getting a nontrivial amount of human style bias from the human SL model.
This release also includes executables compiled for CUDA 12.5, CUDNN 8.9.7, and TensorRT 10.2.0, giving a more recent alternative for users who may have had issues with the earlier releases compiled for older CUDA versions.
See the v1.15.0 release page for help and documentation and pretty pictures about the latest-released features, notably the new Human SL model released with v1.15.0.
Changes
- Fixed an issue where weightless visits (e.g. from `humanSLRootExploreProbWeightless`) would not be counted towards the visits or playouts limit for a search.
- Fixed an issue where KataGo would ignore human SL exploration parameters on the first visit or two to any node.
- Fixed a compile error when compiling KataGo for testing without any backend (i.e. with the dummy backend).
- Fixed an issue with CUDA backend compilation types on some systems.
- Clarified various docs about human SL model usage, including some minor improved comments in gtp_human5k_example.cfg.
- Added a new example config, gtp_human9d_search_example.cfg.
New Human-like Play and Analysis + quick bugfix
This is not the latest release - see the release v1.15.3 for various bugfixes and the versions that you should actually use.
Still, see the v1.15.0 release page for help and documentation and pretty pictures about the major features released with v1.15.x, although the executables there are outdated and/or buggier than those in v1.15.3.
This is a quick bugfix for v1.15.0 that fixes a minor issue with the Analysis Engine where it would report an error when querying the version. This release also slightly clarifies the documentation in gtp_human5k_example.cfg on how to use the new Human SL model released with v1.15.0. Please continue to report any issues and we will fix them. :)
New Human-like Play and Analysis
This is not the latest release - see v1.15.3 for various bugfixes and use the code and/or executables there rather than here.
But stay on this page and read on below for info about human-like play and analysis introduced in v1.15.x!
If you're a new user, this section has tips for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), see here. Also, download the latest neural nets to use with this engine release at https://katagotraining.org/.
KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!
As a reminder, for 9x9 boards, see here for a special neural net better than any other net on 9x9, which was used to generate the 9x9 opening books at katagobooks.org.
Available below are both the standard and "bs29" versions of KataGo. The "bs29" versions are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so use them only when you really want to try large boards.
The Linux executables were compiled on a 20.04 Ubuntu machine. Some users have encountered libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux; see the "TLDR" instructions for Linux here.
Known issues (fixed in v1.15.1)
- Analysis engine erroneously reports an error when sending a `query_version` action.
New Human-trained Model
This release adds a new human supervised learning ("Human SL") model trained on a large number of human games to predict human moves across players of different ranks and time periods! Not much experimentation with it has been done yet and there is probably low-hanging fruit on ways to use and visualize it, open for interested devs and enthusiasts to try.
Download the model linked here or listed in the downloads below: `b18c384nbt-humanv0.bin.gz`. Casual users should NOT download `b18c384nbt-humanv0.ckpt` - this is an alternate format for devs interested in the raw pytorch checkpoint, for experimentation or for finetuning using the python scripts.
Basic usage:
./katago.exe gtp -model <your favorite usual model for KataGo>.bin.gz -human-model b18c384nbt-humanv0.bin.gz -config gtp_human5k_example.cfg
The human model is passed in as an extra model via `-human-model`. It is NOT a replacement for the default model (actually, it can be if you know what you are doing! See the config and the Human SL analysis guide for more details).
Additionally, you need a config specifically designed to use it. The `gtp_human5k_example.cfg` config configures KataGo to imitate 5-kyu-level players. You can change it to imitate other ranks too, as well as to do many more things, including making KataGo play in a human style but still at a strong level, or analyze in interesting ways. Read the config file itself for documentation on some of these possibilities!
And for advanced users or devs, see also this guide to using the human SL model, which is written from the perspective of the JSON-based Analysis Engine but is applicable to GTP as well:
Human SL analysis guide
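As a concrete starting point, here is a minimal Python sketch of driving the human-SL-configured engine over GTP, matching the basic usage command above. The binary and model paths are assumptions to adjust for your own setup; the protocol details (newline-delimited commands, blank-line-terminated responses) are standard GTP.

```python
import subprocess

# Assumed paths - point these at your katago binary, your usual model,
# the human SL model from this release, and the example config.
proc = subprocess.Popen(
    [
        "./katago", "gtp",
        "-model", "your-usual-model.bin.gz",
        "-human-model", "b18c384nbt-humanv0.bin.gz",
        "-config", "gtp_human5k_example.cfg",
    ],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def gtp(command: str) -> str:
    """Send one GTP command and collect its blank-line-terminated response."""
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if line.strip() == "":
            break
        lines.append(line.strip())
    return "\n".join(lines)

print(gtp("boardsize 19"))
print(gtp("komi 7.5"))
print(gtp("genmove b"))  # the move choice now reflects the configured human profile
gtp("quit")
```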
Pretty Pictures
Just to show off how the model has learned how differently-ranked players might play, here are example screenshots from a less-trained version of the Human SL model, taken from a debug visualization during development. When guessing what 20-kyu players are likely to play, the predicted move for Black is simply to follow White, attaching at J17:
At 1 dan, the model guesses that players are likely to play the tiger mouth spoil or wedge at H17/H16, showing an awareness of local good shape, as well as some likelihood of various pokes at White's loose shape:
At 9 dan, the model guesses that the most likely move is to strike the very specific weak point at G14, which analysis confirms is one of the best moves.
As usual, since this is a raw neural net without any search, its predictions are most analogous to a top player's "first instinct with no reading" and at high dan levels won't be as accurate in guessing what such players, with the ability to read sharply, would likely play.
Another user/dev in the Computer Go discord shared this interesting visualization, where the size of each square is based on the total probability mass of the move summed across all player ranks, and the color and label give the average rank of player that the model predicts would play that move:
Hopefully some of these inspire possibilities for game review and analysis in GUIs or tools downstream of the baseline functionality added by KataGo. If you have a cool idea for experimenting with these kinds of predictions and stats, or think of useful ways to visualize them, feel free to try it!
Other Changes This Release
GTP and Analysis Engine changes
(Updated GTP doc, Updated Analysis Engine Doc)
- Various changes to both GTP and Analysis Engine to support the human SL model, see docs.
- GTP `version` command now reports information about the neural net(s) used, not just the KataGo executable version.
- GTP `kata-set-param` now supports changing the large majority of search parameters dynamically, instead of only a few.
- GTP `kata-analyze` command now supports a new `rootInfo` property for reporting root node stats.
- GTP added `resignMinMovesPerBoardArea` as a way to prevent early resignation.
- GTP added `delayMoveScale` and `delayMoveMax` as a way to add a randomized delay to moves, to prevent the bot from responding instantly to players. The delay will on average be shorter on "obvious" moves, hopefully giving a more natural-feeling pacing.
- Analysis Engine now by default will report a warning in response to queries that contain unused fields, to help alert about typos.
- Analysis Engine now reports various raw neural net outputs in rootInfo.
- GTP and Analysis Engine both have changed "visits" to mean the child node visit count (i.e. the number of playouts that the child node after a move received) instead of the edge visit count (i.e. the number of playouts that the root MCTS formula "wanted" to invest in the move). The child visit count is more indicative of evaluation depth and quality. A new key "edgeVisits" has been added to report the original edge visit count, which is partly indicative of how much the search "likes" the move.
- These two values used to be almost identical in practical cases, although graph search could make them differ sometimes. With some humanSL config settings in this new version, they can now differ greatly.
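To make the distinction concrete, here is a small Python sketch of pulling both counts out of one `kata-analyze` info line. The sample line is illustrative rather than captured from a real run, and real output contains one `info` segment per candidate move; see the GTP extensions doc for the authoritative format.

```python
# kata-analyze reports space-separated key/value pairs, with "pv" consuming
# the rest of its segment. "visits" is now the child node's visit count;
# "edgeVisits" is the edge visit count described above.
sample = ("info move Q16 visits 512 edgeVisits 520 winrate 0.4731 "
          "order 0 pv Q16 D4 Q4")

def parse_info(segment: str) -> dict:
    tokens = segment.split()
    out = {}
    i = 1  # skip the leading "info" token
    while i < len(tokens):
        if tokens[i] == "pv":
            out["pv"] = tokens[i + 1:]
            break
        out[tokens[i]] = tokens[i + 1]
        i += 2
    return out

info = parse_info(sample)
print(info["move"], info["visits"], info["edgeVisits"])  # Q16 512 520
```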
Misc improvements
- Better error handling in TensorRT, should catch more cases where there are issues querying the GPU hardware and avoid buggy or broken play.
Training Scripts Changes
- Many changes and updates to training scripts to support human SL model training and architecture. Upgrade with caution if you are actively training things.
- Added an experimental SGF-to-training-data command (`./katago writetrainingdata`) on KataGo's C++ side that was used to produce data for human SL net training. There is no particular documentation offered for this; run it with `-help` and/or be prepared to read and understand the source code.
- Configs for new models now default to model version 15, with a slightly different pass output head architecture.
- Many minor bugfixes and slight tweaks to training scripts.
- Added option to gatekeeper to configure the required winning proportion.
Minor fixes, restore support for TensorRT 8.5
This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.15.0 for a newer release!
Summary and Notes
This is primarily a bugfix release. If you're contributing to distributed training for KataGo, this release also includes a minor adjustment to the bonuses that incentivize KataGo to finish the game cleanly, which might slightly improve robustness of training.
Both this and the prior release support an upcoming larger and stronger "b28" neural net that is currently being trained and will likely be ready soon!
As a reminder, for 9x9 boards, see here for a special neural net better than any other net on 9x9, which was used to generate the 9x9 opening books at katagobooks.org.
Available below are both the standard and "bs29" versions of KataGo. The "bs29" versions are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so use them only when you really want to try large boards.
The Linux executables were compiled on a 20.04 Ubuntu machine. Some users have encountered libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux; see the "TLDR" instructions for Linux here.
Changes in v1.14.1
- Restores support for TensorRT 8.5. Although the precompiled executables are still for TensorRT 8.6 and CUDA 12.1, if you are building from source, TensorRT 8.5 along with a suitable CUDA version such as 11.8 should work as well. Thanks to @hyln9 - #879
- Changes ending score bonus to not discourage capture moves, encouraging selfplay to more frequently sample mild resistances and refute bad endgame cleanup.
- Python neural net training code now randomizes history masking, instead of using a static mask that is generated at data generation time. This should very slightly improve data diversity when reusing data rows.
- Python neural net training code now will clear out NaNs from running training statistics, so that the stats can remain useful if a neural net experiences an exploded gradient during training but still manages to recover from it.
- Various minor cleanups to code and documentation, including a new document about graph search.
Support upcoming larger "b28" nets and lots of bugfixes
This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.15.0 for a newer release!
Note for CUDA and TensorRT: starting with this release newer versions are required!
- The CUDA version requires CUDA 12.1.x and CUDNN 8.9.7. CUDA 12.1.1 in particular was used for compiling and testing. For CUDA, using a more recent version should work as well. Older versions might work too, but even if they do work, upgrading from a much older version might give a small performance improvement.
- The TensorRT version requires precisely CUDA 12.1.x and TensorRT 8.6.1 ("TensorRT 8.6 GA"). CUDA 12.1.1 in particular was used for compiling and testing.
- Note that CUDA 12.1.x is used even though it is not the latest CUDA version because TensorRT does not yet support CUDA 12.2 or later! So for TensorRT, the CUDA version must not be upgraded beyond that.
Summary and Notes
This release adds upcoming support for a larger and stronger "b28" neural net that is currently being trained and will likely be ready within the next couple of months! This release also fixes a lot of minor bugs and makes a lot of minor improvements.
As a reminder, see here for a special neural net better than any other net on 9x9, which was used to generate the 9x9 opening books at katagobooks.org.
Available below are both the standard and "bs29" versions of KataGo. The "bs29" versions are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so use them only when you really want to try large boards.
The Linux executables were compiled on a 20.04 Ubuntu machine. Some users have encountered libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux; see the "TLDR" instructions for Linux here.
Changes in v1.14.0
New features
- Added support for a new "v15" model format that adds a nonlinearity to the pass policy head. This change is required for the new larger b28c512nbt neural net that should be ready in the next few months and might become the strongest neural net to use for top-tier GPUs.
Engine improvements
- KataGo analysis mode now ignores history prior to the root (except still obeying ko/superko)! This means analysis will no longer be biased by placing stones in an unrealistic ordering when setting up an initial position, or by exploring game variations where both players play very bad moves. Pre-root history is still used when KataGo is playing rather than analyzing, because it is presumed that KataGo played the whole game as the current player and chose the moves it wanted - if this is not true, see `analysisIgnorePreRootHistory` and `ignorePreRootHistory` in the config.
- Eigen version of KataGo now shares the neural net weights across all threads instead of copying them - this should greatly reduce memory usage when running with multiple threads/cores.
- TensorRT version of KataGo now has a cmake option `USE_CACHE_TENSORRT_PLAN` for custom compiling that can give faster startup times for the TensorRT backend at the cost of some disk space (thanks to kinfkong). Do NOT use this for selfplay or training; it will use excessive disk space over time and increase the cost of each new neural net. The ideal use case is using only one or a few nets for analysis/play over and over.
Main engine bugfixes
- Fixed a bug where, when forced to analyze a position past the point where the game should have already ended under strict scoring rules, KataGo would not try to claim a win and would assume the opponent would not either.
- Fixed a bad memory access that might cause a mild bias in dame-filling behavior under Japanese rules.
- Fixed an issue where, when contributing selfplay games to distributed training, a failure of the very first web query to katagotraining.org would cause the entire program to fail, instead of the query being retried like all subsequent web queries.
- Fixed some multithreading races by avoiding any copying of child nodes between arrays during search.
- Fixed bug in parsing certain malformed configs with multiple GPUs specified.
- Fixed bug in determining the implicit player to move on the first turn of an SGF with setup stones.
- Fixed some bugs in recomputing root policy optimism when it differs from tree policy optimism in various cases, or when softmax temperature or other parameters differ after pondering.
- Fixed some inconsistencies in how Eigen backend number of threads was determined.
- Shrank the default batch size on the Eigen backend since batching doesn't help CPUs much; this should make more efficient use of cores with fewer threads now.
- Minor internal code cleanups involving turn numbers, search nodes, and other details. (thanks nerai)
Expert/dev tool improvements
- Tools
  - Added a `bSizesXY` option to control the exact board size distribution, including rectangles, for selfplay or match commands, instead of only an edge length distribution. See match_example.cfg.
  - Improved many aspects of the book generation code and added more parameters to it that were used for the 9x9 books at katagobooks.org.
  - The python `summarize_sgfs.py` tool now outputs stats that can identify rock-paper-scissors situations in the Elos.
  - Added experimental support for dynamic komi in internal test matches.
  - Various additional arguments and minor changes and bugfixes to startpos/hintpos commands.
- Selfplay and training
  - By default, training models will now use a cheaper version of the repvgg-linear architecture that doesn't actually instantiate the inner 1x1 convolution, but instead adjusts the weights and increases the LR on the central square of a 3x3 conv. This change only applies to newly initialized models - existing models will keep the old and slower-training architecture.
  - Modernized all the various outdated selfplay config parameters, and added a readme for them.
  - Minor (backwards-compatible) adjustments to the training data NPZ format, made to better support experimental conversion of human games to NPZ training data.
  - Improved shuffle.py and train.py defaults and -help documentation. E.g. `cd python; python shuffle.py -help`.
  - Various other minor updates to various docs.
  - Improved and slightly rearranged synchronous loop logic.
Expert/dev tool bugfixes
- Fixed a `wouldBeKoCapture` bug in the python board implementation.
- Fixed a bug where `trainingWeight` would be ignored on local selfplay hintposes.
- Now clears export cycle counter when migrating a pytorch model checkpoint to newer versions.
- Fixed minor bugs in updating selfplay file summarization and shuffle script args.
- Various other minor bugfixes to dev commands and python scripts for training.
Finetuned 9x9 Neural Net
Marking and leaving this as a 'prerelease' since this is NOT intended to be a release of a new version of KataGo's source code or binaries, but rather a release of a new neural net for KataGo!
For the latest binaries and code, see v1.14.0: https://github.com/lightvector/KataGo/releases/tag/v1.14.0
This is a release of a neural net specially trained for 9x9! On 9x9 boards specifically, this neural net is overall much stronger than KataGo's main distributed training nets on katagotraining.org.
Training
It was finetuned from KataGo's main run nets using data generated over several months from 3 strong personal GPUs, on 9x9 games starting in many diverse positions. A large number of 9x9 positions were sampled from various datasets to provide these starting positions:
- The move tree of a failed attempt earlier this year at generating a 9x9 opening book, which, despite not having good evaluations, extensively covered a wide variety of 9x9 openings.
- Manually-identified blind spot positions.
- Top-level bot games from CGOS.
- Human professional and amateur game collections on 9x9.
- Collections of match games won or lost (but not drawn) by KataGo on 9x9 against other versions of itself, to focus more learning on decisive positions.
- 9x9 games played between versions of KataGo where one side was heavily computationally advantaged against the other.
- Various manually specified openings and handicap stone patterns.
- A tiny number of 7x7 through 11x11 games so that the net didn't entirely forget a basic sense of scaling between board sizes.
Otherwise, the training proceeded mostly the same as KataGo's main run, with essentially the same settings.
Strength
In some fast-game tests with a few hundred playouts per move, this net has sometimes rated as much as 200 Elo stronger on 9x9 boards than KataGo's main run neural nets when using a diverse set of forced openings.
However, it's not easy to be precise because the exact amount can depend a lot on settings and the particular forced opening mixture used for games. For any particular opening or family of openings on 9x9, at top levels you can often get major nonlinearities or nontransitivities between various bots depending on what openings they just so happen to play optimally or not. This is especially the case when having bots just play from the empty board position rather than using a predetermined book or opening mixture, since the games will often severely lack opening diversity.
Also, because on 9x9 the bots are strong enough that the game is highly drawish (at fair komi), the Elo difference can depend heavily on the number of visits used, as both sides approach optimal play with more visits and draw increasingly frequently, leaving fewer decisive games.
Overall though, the net generally seems more accurate and efficient at judging common 9x9 positions.
Other Notes and Caveats
- Don't use this net on board sizes other than 9x9! At least, don't do so while expecting it to be good; you could still do so for fun. It will in fact run on 19x19, but its evaluations on 19x19 have degraded in quality and drifted far from fair, due to months of training forgetting about 19x19 while repurposing capacity for 9x9. It also seems to have forgotten some important joseki lines, and it probably has gotten worse at large-scale fights or big dragons as well.
- Since it is a different net with randomly different blind spots and quirks, even on 9x9, this finetuned net probably also has a small proportion of variations that it evaluates or plays worse than KataGo's main run nets. On average it should be much better, but of course it will not always be better.
- One fun feature is that this net also has a little bit of training for 9x9 handicap games, including the "game" where White has a 78.5 komi** while Black has 4 or 5 handicap stones, such that White wins by living basically anywhere. This training did not reach convergence, but got far enough that if you try searching with a few million playouts, the results are pretty suggestive that White can live if Black starts with all four 3-3 points, but not if Black gets a fifth stone anywhere reasonable.

(**Area scoring, with 0 bonus for handicap stones, as in New Zealand or Tromp-Taylor rules. If you use Chinese rules, you'll need a lower komi due to the extra compensation of N points for N handicap stones, and if you use Japanese rules you'll need a lower komi since the black stones themselves occupy space and reduce the territory. Also, leaving a buffer of a few points below 9x9 = 81, like choosing 78.5 instead of 80 or 80.5, is a good idea so that the net is solidly in the regime of "if I make a living group I win" and well separated from "actually I always win even if I lose the whole board".)
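As a worked check of that buffer logic under area scoring (an illustrative calculation, not from the original notes): the board has 81 points, so with komi 78.5 White wins exactly when White's area exceeds (81 - 78.5) / 2 = 1.25 points - that is, whenever White keeps any living group at all.

```python
def white_wins(white_area: int, komi: float = 78.5, board_points: int = 81) -> bool:
    # Area scoring with every point settled (ignoring sekis for simplicity):
    # White scores white_area + komi, Black scores the rest of the board.
    black_area = board_points - white_area
    return white_area + komi > black_area

assert not white_wins(0)  # White dead everywhere: loses by 2.5
assert white_wins(2)      # any living area of 2+ points already wins
```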
Minor test/build fixes
This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.14.0 for a newer release!
This release v1.13.2 fixes some automated tests for Homebrew or other automated builds. It doesn't involve relevant changes for ordinary users, so please see release v1.13.0 for getting started with KataGo and for info on the many changes and improvements in v1.13.x! And/or, for TensorRT users, see v1.13.1.
Although there are no new builds offered for download on this page, if you're building from source this tag v1.13.2 is still a fine version to build from.
TensorRT bugfix
This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.14.0 for a newer release!
This is a quick bugfix release specific to the TensorRT version of KataGo. It fixes the plan cache to avoid naming conflicts with older versions and improves error checking, which may affect some users who build the TensorRT version from source.
Better models and search and training, many improvements
This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.14.0 for a newer release!
For the TensorRT version, download it from v1.13.1 which is a quick bugfix release specific to the TensorRT version and which should only matter for users doing custom builds, but for clarity has been minted as a new release.
You can find the latest neural nets at https://katagotraining.org/. This release also features a somewhat outdated but stronger net using the new "optimistic policy" head introduced in v1.13.0, attached below; the latest nets at katagotraining.org will also start including this improvement soon.
Attached here are "bs29" versions of KataGo. These are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so you should use them only when you really want to try large boards.
The Linux executables were compiled on a 20.04 Ubuntu machine. Some users have encountered libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux; see the "TLDR" instructions for Linux here.
Changes in v1.13.0
Modeling improvements
- Optimistic policy - an improved policy head that is biased to look more for unexpectedly good moves. A one-off neural net using this policy head is attached below; KataGo's main nets at https://katagotraining.org/ will begin including the new head soon as well.
- Softplus error scaling - supports new squared softplus activations for value and score error predictions, as well as adjusted scaling of gradients and post-activation for those predictions, which should fix some rare outliers of overconfidence in these predictions as well as large prediction magnitudes that might result in less-stable training.
Search improvements
- Fixed a bug with determining the baseline top move at low playouts for policy target pruning, which could cause KataGo at low playouts on small boards to sometimes play extremely bad moves (e.g. the 1-1 point).
- For GTP and analysis, KataGo will automatically cap the number of threads at about 1/8th the number of playouts being performed, to prevent the worst cases where accidentally misconfiguring KataGo to use many threads is destructive to search quality when testing KataGo with low settings. To override this, you can set the config parameter `minPlayoutsPerThread`. A sketch of the capping rule follows below.
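Roughly, the capping rule behaves like the following sketch (an approximation of the described behavior, not the exact internal formula):

```python
def effective_search_threads(configured_threads: int, max_playouts: int) -> int:
    # Cap threads at about 1/8th of the playouts, never below 1 thread.
    # Setting minPlayoutsPerThread in the config adjusts this cap.
    return max(1, min(configured_threads, max_playouts // 8))

# E.g. testing with numSearchThreads = 32 but only 100 playouts per move
# effectively runs about 12 threads rather than all 32.
print(effective_search_threads(32, 100))  # -> 12
```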
KataGo GTP/match changes
These are changes relevant to users running bots online or katago internal test matches.
- Added support for automatically biasing KataGo to avoid moves it played recently in earlier games, giving more move variety for online bots. See the "Automatic avoid patterns" section in cpp/configs/gtp_example.cfg.
- Updated the behavior of `ogsChatToStderr=true` for gtp2ogs version 8.x.x (https://github.com/online-go/gtp2ogs), for running KataGo on OGS.
- Added a new config parameter `gtpForceMaxNNSize` that may reduce performance on small boards, but avoids a lengthy initialization time when changing board sizes, which is necessary for clients that may toggle the board size on every turn, such as gtp2ogs 8.x.x's pooling manager.
- Fixed a segfault with `extraPairs` when using `katago match` to run round-robin matches (#777), removed support for `blackPriority` pairing logic, and added `extraPairsAreOneSidedBW` to allow one-sided colors for matches.
Analysis engine
- Added an example script python/query_analysis_engine_example.py that demonstrates how to use KataGo's analysis engine easily from Python!
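For reference, the core of that pattern looks roughly like this (a sketch with assumed file paths; the query fields are the basic ones from the Analysis Engine doc, and the example script above is the fuller version):

```python
import json
import subprocess

# Assumed paths - substitute your own binary, model, and analysis config.
proc = subprocess.Popen(
    ["./katago", "analysis",
     "-model", "your-model.bin.gz",
     "-config", "analysis_example.cfg"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

query = {
    "id": "example",
    "moves": [["B", "Q16"], ["W", "D4"]],
    "rules": "tromp-taylor",
    "komi": 7.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [2],  # analyze the position after both moves
    "maxVisits": 100,
}
proc.stdin.write(json.dumps(query) + "\n")
proc.stdin.flush()

# One JSON response line comes back per analyzed turn.
response = json.loads(proc.stdout.readline())
best = response["moveInfos"][0]  # per-move search results
print(best["move"], best["visits"], best["winrate"])
```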
Python code and training script changes
These are relevant to users running training/selfplay. There are many minor changes to some of the python training scripts and bash scripts this release. Please make backups and test carefully if upgrading your training process in case anything breaks your use case!
- Model configs now support version "12" (corresponding to optimistic policy above) and "13" and "14" (corresponding to softplus error scaling above). Experimental scripts migrate_optimistic_policy.py, migrate_softplus_fix.py, and migrate_squared_softplus.py are provided in `python/` for upgrading an old version "11" model. You will also need to train more if you upgrade, to get the model to re-converge.
- The training python code python/train.py now defaults to using a lot of parameters that KataGo's main run was using and that were tested to be effective, but that were NOT default before. Be advised that upgrading to v1.13.0 with an existing training run may change various parameters due to using the new defaults, possibly improving them, but nonetheless changing them.
- Altered the format of the summary json file output by python/summarize_old_selfplay_files.py, which is called by the shuffler script in python/selfplay/shuffle_loop.sh to cache data and avoid searching every directory on every shuffle. The new format now tracks directory mtimes, avoiding some cases where it might miss new data. For existing training runs, the new scripts should seamlessly load the old format and upgrade it to the new format; however, after having done so, pre-v1.13.0 training code will no longer be able to read that new format if you then try to downgrade again.
- Rewrote python/selfplay/synchronous_loop.sh to copy and run everything out of a dated directory, to avoid concurrent changes to the git repo checkout affecting an ongoing run, and also improved it to use the flag `-max-train-bucket-per-new-data` and other flags to better prevent overfitting without having to so carefully balance games/training epoch sizes.
- Overhauled the documentation on selfplay training to be current with the new pytorch training introduced earlier in releases v1.12.x, and to also recommend use of `-max-train-bucket-per-new-data` and related parameters that were not previously highlighted, which give much easier control over the relative selfplay vs training speed.
- Removed confusing logic in the C++ code to split out part of its data as validation data (the `maxRowsPerValFile` and `validationProp` parameters in selfplay cfg files no longer exist). This was not actually used by the training scripts. Instead, the shuffle script python/selfplay/shuffle.sh continues to do this with a random 5% of files, at the level of whole npz data files. This can be a bit chunky if you have too few files; to disable this behavior and just train on all of the data, pass the environment variable `SKIP_VALIDATE=1` to shuffle.sh.
- Removed support for self-distillation in python/train.py.
- Significantly optimized shuffling performance for large numbers of files in python/shuffle.py.
- Fixed a bug in the shuffler's internal file naming that prevented it from shuffling .npz files that were themselves produced by another shuffling.
- Fixed a bug in python/train.py where `-no-repeat-files` didn't always prevent repeats.
- Selfplay process now accepts hintpos files that end in `.bookposes.txt` and `.startposes.txt` rather than only `.hintposes.txt`.
- Removed an unnecessary/unused and outdated copy of sgfmill from this repo. Install it via pip again if you need it.
- Standardized python indentation to 4 spaces.
- Various other flags and minor cleanups for various scripts.
Training logic changes
- KataGo now clamps komi less aggressively when initializing the rules for training games, allowing more games to teach the net about extreme komi.
- Added a few more bounds on recorded scores for training.
Book generation changes
These are relevant to users using `katago genbook` to build opening books or tsumego variation books. See cpp/configs/book/genbook7jp.cfg for an example config.
- Added some new config parameters `bonusPerUnexpandedBestWinLoss`, `earlyBookCostReductionFactor`, and `earlyBookCostReductionLambda` for exploring high-value unexplored moves, and for expanding more bad early moves to explore optimal play after deliberately bad openings.
- Added support for expanding multiple book nodes per search, which should be more efficient for generating large books. See new parameters `minTreeVisitsToRecord` etc. in the example config.
- Added some other minor book-specific search parameters.
- Fixed a bug where the book would report nonsensica...