
Releases: bmaltais/kohya_ss

v21.5.8

05 May 11:31
52dd50f

This release contains significant requirements changes. Make sure to re-run setup.bat to re-install all the requirements, especially since the bitsandbytes module has been replaced and no longer requires Windows DLL patching. The old patches need to be removed by the setup.bat script for proper execution. Also note that this has been tested only on my system, so it may not work as well as the previous release... be ready to roll back to the previous release if you run into issues.

If you need to roll back, run:

git checkout v21.5.7
.\setup.bat
  • 2023/05/05 (v21.5.8)
    • Add "Cache latents to disk" option to the GUI.
    • When saving v2 models in Diffusers format in the training and conversion scripts, it was found that the U-Net configuration differs from that of Hugging Face's stabilityai models (this repository uses "use_linear_projection": false; stabilityai uses true). Please note that the weight shapes are different, so be careful when using the weight files directly. We apologize for the inconvenience.
      • Since the U-Net model is created based on the configuration, it should not cause any problems in training or inference.
      • Added --unet_use_linear_projection option to convert_diffusers20_original_sd.py script. If you specify this option, you can save a Diffusers format model with the same configuration as stabilityai's model from an SD format model (a single *.safetensors or *.ckpt file). Unfortunately, it is not possible to convert a Diffusers format model to the same format.
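        For example, a hedged sketch of such a conversion (file and folder names are placeholders, and the positional source/destination arguments are assumed; check the script's --help for the exact interface):

          python convert_diffusers20_original_sd.py v2-model.safetensors ./diffusers-model --v2 --unet_use_linear_projection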
    • Lion8bit optimizer is supported. PR #447 Thanks to sdbds!
      • Currently it is optional because you need to update bitsandbytes version. See "Optional: Use Lion8bit" in installation instructions to use it.
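        As an illustrative sketch, once bitsandbytes is updated the optimizer is selected like any other (dataset and model arguments omitted):

          python train_network.py --optimizer_type Lion8bit [other arguments as usual]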
    • Multi-GPU training with DDP is supported in each training script. PR #448 Thanks to Isotr0py!
    • Multi resolution noise (pyramid noise) is supported in each training script. PR #471 Thanks to pamparamm!
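      A hedged sketch of enabling pyramid noise (flag names assumed from the PR; values are illustrative):

        python train_network.py --multires_noise_iterations 6 --multires_noise_discount 0.3 [other arguments as usual]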
    • Add --no-cache-dir to reduce the Docker image size. Thanks to @chiragjn!

v21.5.7

01 May 23:12
fa41e40
  • 2023/05/01 (v21.5.7)
    • tag_images_by_wd14_tagger.py can now get arguments from outside. PR #453 Thanks to mio2333!
    • Added --save_every_n_steps option to each training script. A model is saved every time the specified number of steps elapses.
      • --save_last_n_steps option can be used to save only the specified number of models (old models will be deleted).
      • If you specify the --save_state option, the state will also be saved at the same time. You can specify the number of steps to keep the state with the --save_last_n_steps_state option (the same value as --save_last_n_steps is used if omitted).
      • You can use the epoch-based model saving and state saving options together.
      • Not tested in multi-GPU environment. Please report any bugs.
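      For example, a sketch of step-based saving that keeps only the most recent checkpoints (values are illustrative; the usual dataset and model arguments are omitted):

        python train_network.py --save_every_n_steps 500 --save_last_n_steps 1500 --save_state --save_last_n_steps_state 1500 [other arguments as usual]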
    • --cache_latents_to_disk option automatically enables --cache_latents option when specified. #438
    • Fixed a bug in gen_img_diffusers.py where latents upscaler would fail with a batch size of 2 or more.
    • Fix Triton error.
    • Fix issue with LoRA merge when the path contains spaces.
    • Added support for logging to wandb. Please refer to PR #428. Thank you p1atdev!
      • wandb installation is required. Please install it with pip install wandb. Login to wandb with wandb login command, or set --wandb_api_key option for automatic login.
      • Please let me know if you find any bugs as the test is not complete.
    • You can automatically login to wandb by setting the --wandb_api_key option. Please be careful with the handling of API Key. PR #435 Thank you Linaqruf!
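      A minimal sketch of the wandb setup described above (other training arguments omitted):

        pip install wandb
        wandb login
        python train_network.py --log_with wandb [other arguments as usual]

      Alternatively, skip wandb login and pass --wandb_api_key YOUR_API_KEY, keeping in mind the API Key handling caveat above.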
    • Improved the behavior of --debug_dataset on non-Windows environments. PR #429 Thank you tsukimiya!
    • Fixed --face_crop_aug option not working in the fine tuning method.
    • Prepared code to use any upscaler in gen_img_diffusers.py.
    • Fixed to log to TensorBoard when --logging_dir is specified and --log_with is not specified.
    • Add new Docker image solution. Thanks to @Trojaner!

v21.5.5

23 Apr 00:18
6365708
  • 2023/04/22 (v21.5.5)
    • Update LoRA merge GUI to support SD checkpoint merging and merging of up to 4 LoRAs.
    • Fixed lora_interrogator.py not working. Please refer to PR #392 for details. Thank you A2va and heyalexchoi!
    • Fixed the handling of tags containing _ in tag_images_by_wd14_tagger.py.
    • Add new Extract DyLoRA GUI to the Utilities tab.
    • Add new Merge LyCORIS models into checkpoint GUI to the Utilities tab.
    • Add new info on startup to help debug things.

v21.5.4

18 Apr 01:03
65ee723
  • 2023/04/17 (v21.5.4)
    • Fixed a bug that caused an error when loading DyLoRA with the --network_weight option in train_network.py.
    • Added the --recursive option to each script in the finetune folder to process folders recursively. Please refer to PR #400 for details. Thanks to Linaqruf!
    • Upgrade Gradio to latest release
    • Fix issue when Adafactor is used as optimizer and LR Warmup is not 0: #617
    • Added support for DyLoRA in train_network.py. Please refer to the documentation for details (currently only in Japanese).
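      A hedged sketch of enabling DyLoRA training (module and argument names assumed from the sd-scripts documentation; the unit value is illustrative):

        python train_network.py --network_module networks.dylora --network_args "unit=4" [other arguments as usual]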
    • Added support for caching latents to disk in each training script. Please specify both --cache_latents and --cache_latents_to_disk options.
      • The files are saved in the same folder as the images with the extension .npz. If you specify the --flip_aug option, the files with _flip.npz will also be saved.
      • Multi-GPU training has not been tested.
      • This feature is not tested with all combinations of datasets and training scripts, so there may be bugs.
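      For example (the script name is interchangeable, since the same flags apply to each training script):

        python fine_tune.py --cache_latents --cache_latents_to_disk [other arguments as usual]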
    • Added workaround for an error that occurs when training with fp16 or bf16 in fine_tune.py.
    • Implemented DyLoRA GUI support. There is now a new "DyLoRA Unit" slider when the LoRA type is set to "kohya DyLoRA", used to specify the desired Unit value for DyLoRA training.
    • Update gui.bat and gui.ps1 based on: #188
    • Update setup.bat to install torch 2.0.0 instead of 1.12.1. If you want to upgrade from 1.12.1 to 2.0.0, run setup.bat again, select option 1 to uninstall the previous torch modules, then select option 2 to install torch 2.0.0.

v21.5.2

12 Apr 18:20
cc52c73
  • 2023/04/09 (v21.5.2)

    • Added support for training with weighted captions. Thanks to AI-Casanova for the great contribution!
    • Please refer to the PR for details: PR #336
    • Specify the --weighted_captions option. It is available for all training scripts except Textual Inversion and XTI.
    • This option is also applicable to token strings of the DreamBooth method.
    • The syntax for weighted captions is almost the same as the Web UI, and you can use things like (abc), [abc], and (abc:1.23). Nesting is also possible.
    • If you include a comma inside parentheses, the parentheses will not be matched properly in prompt shuffle/dropout, so do not use commas inside parentheses.
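      For instance, a caption using this syntax might look like the following (illustrative only); note that there are no commas inside the parentheses:

        (masterpiece:1.2) [blurry] a girl wearing a (red dress) standing in a garden

      Pass --weighted_captions to the training script to enable the parsing.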
    • Run gui.sh from any location.
    • Add upgrade support for macOS.

Full Changelog: v21.5.1...v21.5.2

v21.5.1

09 Apr 19:57
829b923
  • 2023/04/08 (v21.5.1)
  • Integrate latest sd-scripts updates. Not integrated into the GUI. Will consider integrating them if you think it is worth it. At the moment you can add the required parameters using the Additional parameters field under the Advanced Configuration accordion in the Training Parameters tab:
      • There may be bugs because I changed a lot. If you cannot revert the script to the previous version when a problem occurs, please wait for the update for a while.

      • Added a feature to upload model and state to HuggingFace. Thanks to ddPn08 for the contribution! PR #348

      • When --huggingface_repo_id is specified, the model is uploaded to HuggingFace at the same time as saving the model.

      • Please handle the access token with caution. Please refer to the HuggingFace documentation.

      • For example, specify the other arguments as follows:

        • --huggingface_repo_id "your-hf-name/your-model" --huggingface_path_in_repo "path" --huggingface_repo_type model --huggingface_repo_visibility private --huggingface_token hf_YourAccessTokenHere
      • If public is specified for --huggingface_repo_visibility, the repository will be public. If the option is omitted or private (or anything other than public) is specified, it will be private.

      • If you specify --save_state and --save_state_to_huggingface, the state will also be uploaded.

      • If you specify --resume and --resume_from_huggingface, the state will be downloaded from HuggingFace and resumed.

        • In this case, the --resume option is --resume {repo_id}/{path_in_repo}:{revision}:{repo_type}. For example: --resume_from_huggingface --resume your-hf-name/your-model/path/test-000002-state:main:model
      • If you specify --async_upload, the upload will be done asynchronously.

      • Added documentation for applying LoRA when generating with the standard Diffusers pipeline: training LoRA (Google Translate from Japanese).

      • Support for Attention Couple and regional LoRA in gen_img_diffusers.py.

      • If you use AND to separate the prompts, each sub-prompt is applied to a LoRA in sequence. --mask_path is treated as a mask image. The number of sub-prompts and the number of LoRAs must match.
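        A hedged sketch of a two-region generation (the flags for loading the two LoRA networks are assumed from the script's usual interface; verify with --help):

          python gen_img_diffusers.py --prompt "blue sky AND green meadow" --mask_path mask.png --network_module networks.lora networks.lora --network_weights sky.safetensors meadow.safetensors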

    • Resolved bug #554

v21.5.0

07 Apr 12:21
9533285
  • 2023/04/07 (v21.5.0)
    • Update MacOS and Linux install scripts. Thanks @jstayco
    • Update windows upgrade ps1 and bat
    • Update kohya_ss sd-scripts code to the latest release... this is a big one, so it might cause some training issues. If you find that this release is causing issues for you, you can go back to the previous release with git checkout v21.4.2 and then run the upgrade script for your platform. Here is the list of changes in the new sd-scripts:
      • There may be bugs because I changed a lot. If you cannot revert the script to the previous version when a problem occurs, please wait for the update for a while.

      • The learning rate and dim (rank) of each block may not work with other modules (LyCORIS, etc.) because the module needs to be changed.

      • Fix some bugs and add some features.

        • Fix an issue where .json format dataset config files could not be read. issue #351 Thanks to rockerBOO!
        • Raise an error when an invalid --lr_warmup_steps option is specified (when warmup is not valid for the specified scheduler). PR #364 Thanks to shirayu!
        • Add min_snr_gamma to metadata in train_network.py. PR #373 Thanks to rockerBOO!
        • Fix the data type handling in fine_tune.py. This may fix an error that occurs in some environments when using xformers, npz format cache, and mixed_precision.
      • Add options to train_network.py to specify block weights for learning rates. PR #355 Thanks to u-haru for the great contribution!

        • Specify the weights of 25 blocks for the full model.
        • The first block has no corresponding LoRA, but 25 values are specified for compatibility with 'LoRA block weight' etc. Likewise, if you do not expand to conv2d3x3, some blocks have no LoRA, but please specify 25 values for the argument for consistency.
        • Specify the following arguments with --network_args.
        • down_lr_weight : Specify the learning rate weight of the down blocks of U-Net. The following can be specified.
        • The weight for each block: Specify 12 numbers such as "down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1".
        • Specify from preset: Specify such as "down_lr_weight=sine" (the weights follow a sine curve). sine, cosine, linear, reverse_linear, zeros can be specified. Also, if you append +number such as "down_lr_weight=cosine+.25", the specified number is added (giving weights in the range 0.25 to 1.25).
        • mid_lr_weight : Specify the learning rate weight of the mid block of U-Net. Specify one number such as "mid_lr_weight=0.5".
        • up_lr_weight : Specify the learning rate weight of the up blocks of U-Net. The same as down_lr_weight.
        • If you omit some of the arguments, 1.0 is used. Also, if you set a weight to 0, the LoRA modules of that block are not created.
        • block_lr_zero_threshold : If the weight is not more than this value, the LoRA module is not created. The default is 0.
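        Putting these together, a hedged example of passing block weights via --network_args in train_network.py (values are illustrative; other arguments omitted):

          python train_network.py --network_module networks.lora --network_args "down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1" "mid_lr_weight=0.5" "up_lr_weight=cosine+.25" "block_lr_zero_threshold=0.1" [other arguments as usual]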
      • Add options to train_network.py to specify block dims (ranks) for variable rank.

        • Specify 25 values for the full model of 25 blocks. Some blocks do not have LoRA, but always specify 25 values.
        • Specify the following arguments with --network_args.
        • block_dims : Specify the dim (rank) of each block. Specify 25 numbers such as "block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2".
        • block_alphas : Specify the alpha of each block. Specify 25 numbers as with block_dims. If omitted, the value of network_alpha is used.
        • conv_block_dims : Expand LoRA to Conv2d 3x3 and specify the dim (rank) of each block.
        • conv_block_alphas : Specify the alpha of each block when expanding LoRA to Conv2d 3x3. If omitted, the value of conv_alpha is used.
    • Add GUI support for the new features introduced above by kohya_ss. These will be visible only if the LoRA type is Standard or kohya LoCon. You will find the new parameters under the Advanced Configuration accordion in the Training parameters tab.
    • Various improvements to Linux and macOS setup scripts, thanks to @Oceanswave and @derVedro.
    • Integrated sd-scripts commits into commit history. Thanks to @Cauldrath

What's Changed

  • gui.sh more location independent, better docker environment detection/handling, setup.sh -u option added to skip launching GUI in docker environment by @jstayco in #535
  • Merge history from kohya-ss/sd-scripts by @Cauldrath in #551
  • Fix for git-based captioning on posix systems by @Oceanswave in #556
  • gui.sh will now be able to handle spaces and other ugly things in the path by @derVedro in #558
  • v21.5.0 by @bmaltais in #561

Full Changelog: v21.4.2...v21.5.0

v21.4.2

02 Apr 23:13
9c8c480
  • 2023/04/02 (v21.4.2)
    • Removes TensorFlow from requirements.txt for Darwin platforms, as pip does not support advanced conditionals like CPU architecture. The logic is now defined in setup.sh to avoid version-bump headaches, and the selection logic lives in the pre-existing pip function. Additionally, this release adds the tensorflow-metal package for M1+ Macs, which enables GPU acceleration per Apple's documentation. Thanks @jstayco

v21.4.1

02 Apr 02:26
720dcd7

Full Changelog: v21.4.0...v21.4.1

v21.4.0

01 Apr 20:53
1e645de
  • 2023/04/01 (v21.4.0)
    • Improved Linux and macOS installation and update scripts. See the README for more details. Many thanks to @jstayco and @Galunid for the great PR!
    • Fix issue with "missing library" error.