Releases: bmaltais/kohya_ss

v21.3.9

01 Apr 11:01
af0df34
  • 2023/04/01 (v21.3.9)
    • Update how setup is done on Windows by introducing a setup.bat script. This will make it easier to install/re-install on Windows if needed. Many thanks to @missionfloyd for his PR: #496
    • Fix an issue with the WD14 caption script by applying a custom fix to the kohya_ss code.

v21.3.8

30 Mar 11:24
c001c80
  • 2023/03/30 (v21.3.8)
    • Fix issue with LyCORIS version not being found: #481

v21.3.7

29 Mar 23:44
f580520
  • 2023/03/29 (v21.3.7)
    • Allow 0.1 increments in the Network and Conv alpha values: #471. Thanks to @srndpty
    • Updated the LyCORIS module version

v21.3.6

28 Mar 15:55
79bffae
  • 2023/03/28 (v21.3.6)
    • Fix issues when --persistent_data_loader_workers is specified.
      • The batch members of the bucket are not shuffled.
      • --caption_dropout_every_n_epochs does not work.
      • These issues occurred because the epoch transition was not recognized correctly. Thanks to u-haru for reporting the issue.
    • Fix an issue where images were loaded twice in Windows environments.
    • Add Min-SNR Weighting strategy. Details are in #308. Thank you to AI-Casanova for this great work!
      • Add the --min_snr_gamma option to the training scripts; a value of 5 is recommended by the paper.
      • The Min SNR gamma field can be found under the Advanced training tab in all trainers. A sample command is shown after this list.
    • Fixed the error that occurred when image filenames end with uppercase extensions. Thanks to @kvzn. #454
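
For illustration only (a minimal sketch: train_network.py is one of this repo's training scripts, and the bracketed part stands for whatever arguments you normally pass), enabling Min-SNR weighting from the command line could look like:

    python train_network.py --min_snr_gamma=5 [your usual training arguments]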

v21.3.5

26 Mar 10:49
14bd126

  • 2023/03/26 (v21.3.5)
    • Fix for public #230
    • Added detection for Google Colab so the GUI file/folder dialog is not opened on that platform. Instead, only the file/folder path provided in the input field is used.
    • Fix missing requirements_macos.txt file

v21.3.4

25 Mar 16:44
070c7eb
  • 2023/03/25 (v21.3.4)

    Let me know how this works. From the look of it, it appears to be well thought out. I modified a few things to make it fit better with the rest of the code in the repo.

    • Fix for issue #433 by implementing a default of 0.
    • Removed non-applicable save_model_as choices for LoRA and TI.

v21.3.3

24 Mar 17:36
13d82d3
  • 2023/03/24 (v21.3.3)
    • Add support for custom user GUI files. They will be created at installation time, or when upgrading if missing. You will see two files in the root of the folder: one named .\gui-user.bat and the other .\gui-user.ps1. Edit the file that matches your preferred terminal, add the parameters you want to pass to the GUI, and execute it to start the GUI with them (a hypothetical example is shown below). Enjoy!

To get a full list of parameters run: .\gui.bat -h or .\gui.ps1 -h
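
For illustration only (the actual contents of the generated file may differ, and the flags below are just examples; run .\gui.bat -h for the real list), a customized .\gui-user.bat might end up launching the GUI like this:

    .\gui.bat --inbrowser --server_port 7860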

  • 2023/03/23 (v21.3.2)
    • Fix the issue reported in #439

v21.3.1

23 Mar 11:14
6a578be
  • 2023/03/23 (v21.3.1)
    • Merge PR to fix a refactor naming issue for basic captions. Thanks to @zrma

v21.3.0

22 Mar 16:57
838478b
  • 2023/03/22 (v21.3.0)
    • Add a function to load training config with .toml to each training script. Thanks to Linaqruf for this great contribution!
      • Specify the .toml file with --config_file. The .toml file has key=value entries, and the keys are the same as the command line options. See #241 for details.
      • All sub-sections are combined into a single dictionary (the section names are ignored).
      • Omitted arguments take the default values of the command line arguments.
      • Command line arguments override the arguments in the .toml file.
      • With the --output_config option, you can write the current command line options to the .toml specified with --config_file. Please use it as a template (a sample invocation is shown at the end of this release's notes).
    • Add --lr_scheduler_type and --lr_scheduler_args arguments for custom LR scheduler to each training script. Thanks to Isotr0py! #271
      • These work the same way as the optimizer options.
    • Add weighting and no length limit support to sample image generation. Thanks to mio2333! #288
      • ( ), (xxxx:1.2) and [ ] can be used.
    • Fix an exception when training a model in Diffusers format with train_network.py. Thanks to orenwang! #290
    • Add warning if you are about to overwrite an existing model: #404
    • Add --vae_batch_size for faster latents caching to each training script. This batches VAE calls.
      • Please start with 2 or 4, depending on the amount of VRAM.
    • Fix the number of training steps when --gradient_accumulation_steps and --max_train_epochs are used. Thanks to tsukimiya!
    • Extract parser setup to external scripts. Thanks to robertsmieja!
    • Fix an issue that occurred when .npz files are missing and --full_path is specified in training.
    • Support uppercase image file extensions in non-Windows environments.
    • Fix resize_lora.py to work with LoRA with dynamic rank (including conv_dim != network_dim). Thanks to toshiaki!
    • Fix issue: #406
    • Add device support to LoRA extract.
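
For illustration only (a hypothetical sketch: my_config.toml is a made-up file name and the bracketed part stands for whatever options you normally pass), writing out a config template and then reusing it could look like:

    python train_network.py --output_config --config_file my_config.toml [your usual training arguments]
    python train_network.py --config_file my_config.toml

The resulting file holds plain key = value entries whose keys match the command line option names, for example learning_rate = 1e-4 for --learning_rate.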

v21.2.5

20 Mar 00:08
1ac6892
  • 2023/03/19 (v21.2.5):
    • Fix basic captioning logic
    • Add the ability to not train the text encoder (TE) in Dreambooth by setting Stop text encoder training to -1.
    • Update the Linux scripts