v21.5.0 #561
Commits in this release:

- Show the moving average loss
- Add noise offset to metadata
- Add `--lowram` argument
- Fix git path
- Add optimizer to metadata
- Refactor code to make it easier to add new optimizers and to support alternate optimizer parameters:
  - Move redundant optimizer-initialization code to `train_util`
  - Add SGD Nesterov optimizers as an option (since they are already available)
  - Add new parameters which may be helpful for tuning existing and new optimizers
- Fix gen not working
- Fix device issue in `load_file`, reduce VRAM usage
- Fix for `merge_lora.py`
- Improve container environment detection, improve library linking in containers, and ensure we exit after calling `gui.sh` to avoid any conditions where code continues running
- Make `gui.sh` more location independent, improve Docker environment detection/handling, and add a `setup.sh -u` option to skip launching the GUI in a Docker environment
- …b90c8b30636f5b1b7f6c934653df277d9'
- Merge history from kohya-ss/sd-scripts
- Fix for git-based captioning on POSIX systems
- The outer quotes are unnecessary; brackets do the job
- `gui.sh` can now handle spaces and other awkward characters in the path (see the bash sketch after this list)
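The last two commits refer to bash quoting in `gui.sh`. As a generic illustration (a sketch only, not the actual `gui.sh` source, with a hypothetical path), unquoted variables are safe inside `[[ ... ]]` because bash performs no word splitting there, while quoting is still required elsewhere:

```bash
#!/usr/bin/env bash
# Sketch only, not the actual gui.sh code.
SCRIPT_DIR="/home/user/my kohya install"   # hypothetical path containing spaces

# Inside [[ ]], bash does not word-split, so $SCRIPT_DIR needs no outer quotes.
if [[ -d $SCRIPT_DIR ]]; then
    # Outside [[ ]], the quotes are still needed to keep the path intact.
    cd "$SCRIPT_DIR" || exit 1
fi
```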
To revert to the previous release, run:

```
git checkout v21.4.2
```

and then run the upgrade script for your platform.

Here is the list of changes in the new sd-scripts:

- There may be bugs because I changed a lot. If you cannot revert the script to the previous version when a problem occurs, please wait a while for the update.
- The learning rate and dim (rank) of each block may not work with other modules (LyCORIS, etc.) because the module needs to be changed.
- Fix some bugs and add some features.
  - Fix an issue where `.json` format dataset config files could not be read. issue #351 Thanks to rockerBOO!
  - Raise an error when an invalid `--lr_warmup_steps` option is specified (when warmup is not valid for the specified scheduler). PR #364 Thanks to shirayu!
  - Add `min_snr_gamma` to metadata in `train_network.py`. PR #373 Thanks to rockerBOO!
  - Fix `fine_tune.py`. This may fix an error that occurs in some environments when using xformers, npz format cache, and mixed_precision.
- Add options to `train_network.py` to specify block weights for learning rates. PR #355 Thanks to u-haru for the great contribution! (See the first example command after this list.)
  - Specify the weights with `--network_args`.
  - `down_lr_weight`: Specify the learning rate weight of the down blocks of U-Net. The following can be specified:
    - The weight for each block: specify 12 numbers such as `"down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1"`.
    - A preset: specify such as `"down_lr_weight=sine"` (the weights follow a sine curve). sine, cosine, linear, reverse_linear, and zeros can be specified. Also, if you append `+number`, such as `"down_lr_weight=cosine+.25"`, the specified number is added (giving weights such as 0.25~1.25).
  - `mid_lr_weight`: Specify the learning rate weight of the mid block of U-Net. Specify one number such as `"mid_lr_weight=0.5"`.
  - `up_lr_weight`: Specify the learning rate weight of the up blocks of U-Net. The same as down_lr_weight.
  - `block_lr_zero_threshold`: If the weight is not more than this value, the LoRA module is not created. The default is 0.
- Add options to `train_network.py` to specify block dims (ranks) for variable rank (see the second example command after this list).
  - Specify with `--network_args`.
  - `block_dims`: Specify the dim (rank) of each block. Specify 25 numbers such as `"block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2"`.
  - `block_alphas`: Specify the alpha of each block. Specify 25 numbers as with block_dims. If omitted, the value of network_alpha is used.
  - `conv_block_dims`: Expand LoRA to Conv2d 3x3 and specify the dim (rank) of each block.
  - `conv_block_alphas`: Specify the alpha of each block when expanding LoRA to Conv2d 3x3. If omitted, the value of conv_alpha is used.
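As a hedged sketch of the block learning-rate weights (the paths are placeholders and most required training flags are omitted; the specific weight values are illustrative assumptions, not recommendations):

```bash
# Sketch only: paths are placeholders; most required training flags are omitted.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="/path/to/model.safetensors" \
  --train_data_dir="/path/to/train_data" \
  --output_dir="/path/to/output" \
  --network_module=networks.lora \
  --network_args \
    "down_lr_weight=0,0,0,0,0,0,1,1,1,1,1,1" \
    "mid_lr_weight=0.5" \
    "up_lr_weight=sine" \
    "block_lr_zero_threshold=0.1"
```

With these values, the first six down blocks have weight 0, which falls below the 0.1 threshold, so no LoRA modules are created for them at all.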
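And a hedged sketch of variable rank per block (the dims are copied from the note above; the alpha and conv values are illustrative assumptions, not recommendations):

```bash
# Sketch only: per-block dims/alphas for 25 blocks; paths and other
# required flags are omitted here, as in the previous sketch.
accelerate launch train_network.py \
  --network_module=networks.lora \
  --network_args \
    "block_dims=2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2" \
    "block_alphas=1,1,1,1,2,2,2,2,4,4,4,4,4,4,4,4,4,2,2,2,2,1,1,1,1" \
    "conv_block_dims=2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2"
```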
In the GUI, the new options are available when the LoRA type is set to `Standard` or `kohya LoCon`. You will find the new parameters under the `Advanced Configuration` accordion in the `Training parameters` tab.