v21.5.0 #561

Merged 543 commits into master on Apr 7, 2023

Conversation

bmaltais (Owner) commented Apr 7, 2023

  • 2023/04/07 (v21.5.0)
    • Update macOS and Linux install scripts. Thanks @jstayco
    • Update Windows upgrade ps1 and bat scripts
    • Update the kohya_ss sd-scripts code to the latest release... this is a big one, so it might cause some training issues. If you find that this release is causing issues for you, you can go back to the previous release with git checkout v21.4.2 and then run the upgrade script for your platform. Here is the list of changes in the new sd-scripts:
      • There may be bugs because a lot has changed. If you cannot revert to the previous version when a problem occurs, please wait a while before updating.

      • The per-block learning rate and dim (rank) settings may not work with other modules (LyCORIS, etc.), because those modules need to be updated to support them.

      • Fix some bugs and add some features.

        • Fix an issue where .json-format dataset config files could not be read. issue #351 Thanks to rockerBOO!
        • Raise an error when an invalid --lr_warmup_steps option is specified (when warmup is not valid for the specified scheduler). PR #364 Thanks to shirayu!
        • Add min_snr_gamma to metadata in train_network.py. PR #373 Thanks to rockerBOO!
        • Fix the data type handling in fine_tune.py. This may fix an error that occurs in some environments when using xformers, npz format cache, and mixed_precision.
      • Add options to train_network.py to specify block weights for learning rates. PR #355 Thanks to u-haru for the great contribution! (A command-line sketch of these options follows this changelog.)

        • Specify the weights of 25 blocks for the full model.
        • The first block has no corresponding LoRA module, but 25 values are specified for compatibility with 'LoRA block weight' etc. Likewise, if you do not expand to Conv2d 3x3, some blocks have no LoRA module, but please still specify 25 values for the argument for consistency.
        • Specify the following arguments with --network_args.
        • down_lr_weight : Specify the learning rate weight of the down blocks of U-Net. The following can be specified.
        • The weight for each block: Specify 12 numbers such as "down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1".
        • Specify from a preset: for example "down_lr_weight=sine" (weights follow a sine curve). sine, cosine, linear, reverse_linear, and zeros can be specified. If you append +number, such as "down_lr_weight=cosine+.25", the specified number is added to each weight (giving values such as 0.25 to 1.25).
        • mid_lr_weight : Specify the learning rate weight of the mid block of U-Net. Specify one number such as "mid_lr_weight=0.5".
        • up_lr_weight : Specify the learning rate weight of the up blocks of U-Net. The same as down_lr_weight.
        • If some arguments are omitted, 1.0 is used. Also, if you set a block's weight to 0, the LoRA modules of that block are not created.
        • block_lr_zero_threshold : If the weight is not more than this value, the LoRA module is not created. The default is 0.
      • Add options to train_network.py to specify block dims (ranks) for variable rank. (A second command-line sketch follows this changelog.)

        • Specify 25 values for the full model of 25 blocks. Some blocks do not have LoRA, but always specify 25 values.
        • Specify the following arguments with --network_args.
        • block_dims : Specify the dim (rank) of each block. Specify 25 numbers such as "block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2".
        • block_alphas : Specify the alpha of each block. Specify 25 numbers as with block_dims. If omitted, the value of network_alpha is used.
        • conv_block_dims : Expand LoRA to Conv2d 3x3 and specify the dim (rank) of each block.
        • conv_block_alphas : Specify the alpha of each block when expanding LoRA to Conv2d 3x3. If omitted, the value of conv_alpha is used.
    • Add GUI support for the new kohya_ss features introduced above. These will be visible only if the LoRA type is Standard or kohya LoCon. You will find the new parameters under the Advanced Configuration accordion in the Training parameters tab.
    • Various improvements to the Linux and macOS setup scripts. Thanks to @Oceanswave and @derVedro
    • Integrated sd-scripts commits into commit history. Thanks to @Cauldrath
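
Below are two hedged command-line sketches of the new --network_args options described above. They are illustrative only: the model path, dataset directory, output directory, and rank/alpha values are placeholders, and the usual dataset, optimizer, and precision arguments are omitted.

First, per-block learning rate weights (down_lr_weight / mid_lr_weight / up_lr_weight / block_lr_zero_threshold):

```bash
# Sketch only: paths and most training arguments are placeholders/omitted.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="model.safetensors" \
  --train_data_dir="train_data" --output_dir="output" \
  --network_module=networks.lora --network_dim=8 --network_alpha=1 \
  --network_args \
    "down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1" \
    "mid_lr_weight=0.5" \
    "up_lr_weight=cosine+.25" \
    "block_lr_zero_threshold=0.1"
```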
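
Second, per-block dims (ranks). block_alphas, conv_block_dims, and conv_block_alphas take 25 values in the same way; when omitted, they fall back to network_alpha and conv_alpha as described above.

```bash
# Sketch only: paths and most training arguments are placeholders/omitted.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="model.safetensors" \
  --train_data_dir="train_data" --output_dir="output" \
  --network_module=networks.lora --network_alpha=1 \
  --network_args \
    "block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2"
```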

shirayu and others added 30 commits February 14, 2023 21:11
Refactor code to make it easier to add new optimizers, and support alternate optimizer parameters

- move redundant code to train_util for initializing optimizers
- add SGD Nesterov optimizers as an option (since they are already available)
- add new parameters which may be helpful for tuning existing and new optimizers
kohya-ss and others added 29 commits March 30, 2023 22:29
Fix device issue in load_file, reduce vram usage
Improve container environment detection, improve library linking in containers, ensure we exit after calling gui.sh to avoid any conditions where code continues running.
gui.sh more location independent, better docker environment detection/handling, setup.sh -u option added to skip launching GUI in docker environment
Merge history from kohya-ss/sd-scripts
Fix for git-based captioning on posix systems
the outer quotes are unnecessary, brackets do the job
gui.sh will now be able to handle spaces and other ugly things in the path
@bmaltais bmaltais merged commit 9533285 into master Apr 7, 2023