v2.3.0 #424

Merged: 5 commits, Mar 22, 2023
Conversation

bmaltais
Owner

  • 2023/03/22 (v21.3.0)
    • Add a function to load the training config from a .toml file to each training script. Thanks to Linaqruf for this great contribution! (A sketch of the config flow follows this list.)
      • Specify the .toml file with --config_file. The .toml file contains key=value entries; keys are the same as the command line options. See #241 for details.
      • All sub-sections are combined into a single dictionary (the section names are ignored).
      • Omitted arguments take the default values of the command line arguments.
      • Command line arguments override the values in the .toml file.
      • With the --output_config option, you can write the current command line options to the .toml file specified with --config_file. Use this as a template.
    • Add --lr_scheduler_type and --lr_scheduler_args arguments to each training script for custom LR schedulers. Thanks to Isotr0py! #271 (A sketch follows this list.)
      • These are specified the same way as the optimizer arguments.
    • Add prompt weighting and remove the prompt length limit in sample image generation. Thanks to mio2333! #288 (An example follows this list.)
      • ( ), (xxxx:1.2) and [ ] can be used.
    • Fix an exception when training a model in Diffusers format with train_network.py. Thanks to orenwang! #290
    • Add a warning when you are about to overwrite an existing model: Feature Request: Overwrite File Warning #404
    • Add --vae_batch_size to each training script for faster latent caching. This batches the VAE calls. (An example follows this list.)
      • Start with 2 or 4 depending on your VRAM size.
    • Fix the number of training steps when using --gradient_accumulation_steps with --max_train_epochs. Thanks to tsukimiya!
    • Extract parser setup to external scripts. Thanks to robertsmieja!
    • Fix an issue when training with --full_path and without .npz files.
    • Support upper-case image file extensions in non-Windows environments.
    • Fix resize_lora.py to work with LoRA with dynamic rank (including conv_dim != network_dim). Thanks to toshiaki!
    • Fix issue: Subfolders name issue in Linux pod #406
    • Add device support to LoRA extraction.
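
As a sketch of the new .toml config flow: keys mirror the training script's command line options, and because section names are ignored, the sections below are purely organizational. The option names and values here are illustrative, not an authoritative set; check each script's --help for the exact options.

```toml
# config.toml - hypothetical example; keys mirror the script's CLI options
[model]
pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5"

[training]
learning_rate = 1e-4
max_train_epochs = 10
output_dir = "./output"
```

```sh
# Load the config; any flag also passed on the command line overrides the .toml value
accelerate launch train_network.py --config_file config.toml --learning_rate 5e-5

# Write the current command line options out to config.toml to use as a template
accelerate launch train_network.py --config_file config.toml --output_config
```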
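A hedged sketch of the custom scheduler flags, by analogy with the existing optimizer arguments: --lr_scheduler_type takes a scheduler class name and --lr_scheduler_args its constructor arguments as key=value pairs. The class and values below are illustrative assumptions, not the only supported form.

```sh
# Use a PyTorch scheduler class with explicit constructor arguments (values are illustrative)
accelerate launch train_network.py \
  --lr_scheduler_type "torch.optim.lr_scheduler.CosineAnnealingWarmRestarts" \
  --lr_scheduler_args "T_0=500" "T_mult=2"
```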
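For the new prompt weighting in sample image generation, a prompt using all three forms might look like the line below (the prompt and weights are illustrative). Following the common attention-weighting convention, ( ) raises attention, (word:1.2) sets an explicit weight, and [ ] lowers attention.

```
a (detailed:1.2) portrait of a knight, (ornate armor), [blurry background]
```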
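For the latent caching change, --vae_batch_size batches the VAE encode calls made while caching; per the note above, 2 or 4 are sensible starting values. The invocation below is a sketch, with other required flags elided:

```sh
# Cache latents with VAE calls batched 4 at a time
accelerate launch train_network.py --cache_latents --vae_batch_size 4 ...
```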

@bmaltais merged commit 838478b into master on Mar 22, 2023.
bmaltais pushed a commit that referenced this pull request on Apr 18, 2023: recursive support for finetune scripts.