- Added a config file feature with `.toml` to each training script. Thanks to Linaqruf for this great contribution!
  - Specify the `.toml` file with `--config_file`. The `.toml` file has `key=value` entries. Keys are the same as the command line options. See #241 for details.
  - … `.toml`.
  - With the `--output_config` option, you can output the current command line options to the `.toml` file specified with `--config_file`. Please use it as a template.
- Added `--lr_scheduler_type` and `--lr_scheduler_args` arguments for a custom LR scheduler to each training script. Thanks to Isotr0py! #271
- … `( )`, `(xxxx:1.2)` and `[ ]` can be used.
- … `train_network.py`. Thanks to orenwang! #290
- Added `--vae_batch_size` for faster latents caching to each training script. This batches VAE calls.
  - Please start with `2` or `4` depending on the size of VRAM.
- … `--gradient_accumulation_steps` and `--max_train_epochs`. Thanks to tsukimiya!
- … `.npz` and with `--full_path` in training.
- Fixed `resize_lora.py` to work with LoRA with dynamic rank (including `conv_dim != network_dim`). Thanks to toshiaki!
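For reference, a file passed with `--config_file` is plain `key=value` TOML whose keys mirror the command line options. A minimal sketch; the option names below are illustrative, so check each script's `--help` for the exact keys it accepts:

```toml
# Hypothetical example config; keys mirror the command line options.
pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5"
output_dir = "./output"
learning_rate = 1e-4
train_batch_size = 2
max_train_steps = 1600
```

A starting template with the current options can be written out by running a script with `--output_config` together with `--config_file`, as described above.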
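The `--lr_scheduler_args` option above takes `key=value` pairs that end up as keyword arguments to the scheduler's constructor. A minimal sketch of how such pairs can be parsed, assuming the values are Python literals; the helper name `parse_scheduler_args` is hypothetical, not part of the scripts:

```python
import ast

def parse_scheduler_args(pairs):
    """Turn ["T_0=500", "eta_min=1e-6"] into {"T_0": 500, "eta_min": 1e-06}."""
    kwargs = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        try:
            # Interpret numbers, booleans, lists, etc. as Python literals.
            kwargs[key] = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            kwargs[key] = value  # fall back to the raw string
    return kwargs

# e.g. --lr_scheduler_type CosineAnnealingWarmRestarts --lr_scheduler_args T_0=500 eta_min=1e-6
print(parse_scheduler_args(["T_0=500", "eta_min=1e-6"]))
```

The resulting dict would then be splatted into the scheduler class, e.g. `SchedulerClass(optimizer, **kwargs)`.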
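The `( )`, `(xxxx:1.2)` and `[ ]` notation above follows the familiar attention-weighting convention: `(word)` increases emphasis, `(word:1.2)` sets an explicit weight, and `[word]` decreases emphasis. A toy, non-nesting parser to illustrate the idea; the 1.1 multiplier and the function name are assumptions here, not the scripts' actual implementation:

```python
import re

# Toy parser: no nesting, no escaping; illustrates the convention only.
TOKEN = re.compile(
    r"\(([^():]+):([\d.]+)\)"   # (text:weight) -> explicit weight
    r"|\(([^()]+)\)"            # (text)        -> weight 1.1 (assumed)
    r"|\[([^\[\]]+)\]"          # [text]        -> weight 1/1.1 (assumed)
    r"|([^()\[\]]+)"            # plain text    -> weight 1.0
)

def parse_prompt_weights(prompt):
    """Split a prompt into (text, weight) pairs."""
    out = []
    for m in TOKEN.finditer(prompt):
        explicit, weight, up, down, plain = m.group(1, 2, 3, 4, 5)
        if explicit:
            out.append((explicit, float(weight)))
        elif up:
            out.append((up, 1.1))
        elif down:
            out.append((down, 1 / 1.1))
        elif plain:
            out.append((plain, 1.0))
    return out

print(parse_prompt_weights("a photo, (sharp focus:1.2), [blurry]"))
```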
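The `--vae_batch_size` speedup comes from grouping images before each VAE encode instead of encoding one image at a time. A rough sketch of that batching, with a stand-in `encode` callable; the names are illustrative, not the scripts' internals:

```python
def batched(items, batch_size):
    """Yield consecutive chunks of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def cache_latents(images, vae_batch_size, encode):
    """Call encode() once per batch rather than once per image."""
    latents = []
    for batch in batched(images, vae_batch_size):
        latents.extend(encode(batch))  # one VAE forward pass per batch
    return latents

# With 10 images and --vae_batch_size 4, encode() runs 3 times instead of 10.
calls = []
out = cache_latents(list(range(10)), 4, lambda b: (calls.append(b) or b))
```

Larger batch values trade VRAM for fewer forward passes, which is why the note above suggests choosing the value according to available VRAM.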