
v21.8.6 #1332

Merged
merged 70 commits into master on Aug 5, 2023
Conversation

@bmaltais bmaltais (Owner) commented Aug 5, 2023

  • 2023/08/05 (v21.8.6)
    • Merge latest sd-scripts updates.
    • Allow DreamBooth (DB) training on SDXL models. This is unsupported but appears to work.
    • Fix a fine-tuning latent caching issue when training SDXL models in fp16.
    • Add SDXL LoRA merge support. You can now merge LoRAs into an SDXL checkpoint.
    • Add SDPA CrossAttention option to the trainers (see the sketch after this list).
    • Merge latest kohya_ss sd-scripts code.
    • Fix DreamBooth support for SDXL training.
    • Update to the latest bitsandbytes release, with a new optional install option to select the bitsandbytes version.
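To make the SDPA item concrete, here is a minimal sketch (not the repository's actual code) of a cross-attention block routed through PyTorch's torch.nn.functional.scaled_dot_product_attention, which is the operation the SDPA option refers to. The class and parameter names (SdpaCrossAttention, dim, context_dim, heads) are illustrative assumptions.

```python
# Minimal sketch: cross-attention via torch.nn.functional.scaled_dot_product_attention (SDPA).
# SDPA can dispatch to memory-efficient / flash kernels when they are available.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SdpaCrossAttention(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, dim: int, context_dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.head_dim = dim // heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(context_dim, dim, bias=False)
        self.to_v = nn.Linear(context_dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        # Project and split into heads: (batch, heads, tokens, head_dim)
        q = self.to_q(x).view(b, n, self.heads, self.head_dim).transpose(1, 2)
        k = self.to_k(context).view(b, -1, self.heads, self.head_dim).transpose(1, 2)
        v = self.to_v(context).view(b, -1, self.heads, self.head_dim).transpose(1, 2)
        # SDPA replaces the manual softmax(QK^T / sqrt(d)) V computation.
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, self.heads * self.head_dim)
        return self.to_out(out)
```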

pamparamm and others added 30 commits July 17, 2023 17:51
…on8bit) (#631)

* ADD libbitsandbytes.dll for 0.38.1

* Delete libbitsandbytes_cuda116.dll

* Delete cextension.py

* add main.py

* Update requirements.txt for bitsandbytes 0.38.1

* Update README.md for bitsandbytes-windows

* Update README-ja.md  for bitsandbytes 0.38.1

* Update main.py for return cuda118

* Update train_util.py for lion8bit

* Update train_README-ja.md for lion8bit

* Update train_util.py for add DAdaptAdan and DAdaptSGD

* Update train_util.py for DAdaptadam

* Update train_network.py for dadapt

* Update train_README-ja.md for DAdapt

* Update train_util.py for DAdapt

* Update train_network.py for DAdaptAdaGrad

* Update train_db.py for DAdapt

* Update fine_tune.py for DAdapt

* Update train_textual_inversion.py for DAdapt

* Update train_textual_inversion_XTI.py for DAdapt

* Revert "Merge branch 'qinglong' into main"

This reverts commit b65c023083d6d1e8a30eb42eddd603d1aac97650, reversing
changes made to f6fda20caf5e773d56bcfb5c4575c650bb85362b.

* Revert "Update requirements.txt for bitsandbytes 0.38.1"

This reverts commit 83abc60dfaddb26845f54228425b98dd67997528.

* Revert "Delete cextension.py"

This reverts commit 3ba4dfe046874393f2a022a4cbef3628ada35391.

* Revert "Update README.md for bitsandbytes-windows"

This reverts commit 4642c52086b5e9791233007e2fdfd97f832cd897.

* Revert "Update README-ja.md  for bitsandbytes 0.38.1"

This reverts commit fa6d7485ac067ebc49e6f381afdb8dd2f12caa8f.

* Update train_util.py

* Update requirements.txt

* support PagedAdamW8bit/PagedLion8bit

* Update requirements.txt

* update for PagedAdamW8bit and PagedLion8bit

* Revert

* revert main
Support for bitsandbytes 0.39.1 with Paged Optimizers (see the sketch after this commit list)
support ckpt without position id in sd v1 #687
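As context for the PagedAdamW8bit/PagedLion8bit commits above, a minimal sketch of how the paged 8-bit optimizers introduced around bitsandbytes 0.39.x are instantiated. The model and hyperparameter values below are placeholders, not what the trainers actually pass.

```python
# Minimal sketch (placeholder model and hyperparameters, not the trainers' code).
# Paged 8-bit optimizers can spill optimizer state out of GPU memory under pressure.
import torch.nn as nn
import bitsandbytes as bnb

model = nn.Linear(1024, 1024).cuda()  # requires a CUDA device, as bitsandbytes does

# PagedAdamW8bit / PagedLion8bit are used as drop-in replacements for AdamW / Lion.
optimizer = bnb.optim.PagedAdamW8bit(model.parameters(), lr=1e-4, weight_decay=0.01)
# optimizer = bnb.optim.PagedLion8bit(model.parameters(), lr=1e-4)
```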
@bmaltais bmaltais merged commit ad5f2e7 into master Aug 5, 2023
0 of 2 checks passed
wkpark added a commit to wkpark/sd-scripts that referenced this pull request Nov 8, 2023
wkpark added a commit to wkpark/sd-scripts that referenced this pull request Nov 9, 2023
wkpark added a commit to wkpark/sd-scripts that referenced this pull request Nov 9, 2023
donhardman added a commit to donhardman/sd-scripts that referenced this pull request Dec 3, 2023
* Add custom separator

* Fix typo

* Fix typo again

* Fix min-snr-gamma for v-prediction and ZSNR.

This fixes min-SNR for v-prediction + zero terminal SNR by dividing directly by SNR + 1.
The old implementation did it in two steps, (min-snr/snr) * (snr/(snr+1)), which causes a division by zero when combined with --zero_terminal_snr (see the sketch after this commit list).

* use **kwargs and change svd() calling convention to make svd() reusable

 * add required attributes to model_org, model_tuned, save_to
 * set "*_alpha" using str(float(foo))

* add min_diff, clamp_quantile args

based on bmaltais/kohya_ss#1332 bmaltais/kohya_ss@a9ec90c

* add caption_separator option

* add Deep Shrink

* add gradual latent

* Update README.md

* format by black, add ja comment

* make separate U-Net for inference

* make slicing vae compatible with latest diffusers

* fix gradual latent cannot be disabled

* apply unsharp mask

* add unsharp mask

* fix strength error

---------

Co-authored-by: Kohaku-Blueleaf <[email protected]>
Co-authored-by: feffy380 <[email protected]>
Co-authored-by: Won-Kyu Park <[email protected]>
Co-authored-by: Kohya S <[email protected]>
Co-authored-by: Kohya S <[email protected]>
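To make the min-SNR fix above concrete: with min-SNR-gamma the loss weight is min(SNR, gamma)/SNR, and the v-prediction correction multiplies by SNR/(SNR+1); composed as written this divides by SNR, which is exactly zero at the final timestep under --zero_terminal_snr. Dividing directly by SNR + 1 gives the same product without the zero denominator. The sketch below is illustrative; variable and function names are placeholders, not sd-scripts' actual code.

```python
# Illustrative sketch of the min-SNR-gamma weighting discussed above.
import torch

def min_snr_weight(snr: torch.Tensor, gamma: float, v_prediction: bool) -> torch.Tensor:
    gamma_t = torch.full_like(snr, gamma)
    if v_prediction:
        # Old form: (min(snr, gamma) / snr) * (snr / (snr + 1)) breaks when snr == 0
        # (zero terminal SNR). Dividing directly by snr + 1 is the same product
        # algebraically but stays finite at snr == 0.
        return torch.minimum(snr, gamma_t) / (snr + 1)
    # epsilon-prediction: the usual min-SNR-gamma weight.
    return torch.minimum(snr, gamma_t) / snr
```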