v21.8.6 #1332
bmaltais (Owner) commented on Aug 5, 2023:
- 2023/08/05 (v21.8.6)
- Merge the latest sd-scripts updates.
- Allow DreamBooth training on SDXL models. This is unsupported, but appears to work.
- Fix a latent-caching issue when fine-tuning SDXL models in fp16.
- Add SDXL merge-LoRA support. You can now merge LoRAs into an SDXL checkpoint.
- Add an SDPA CrossAttention option to the trainers.
- Merge the latest kohya_ss sd-scripts code.
- Fix DreamBooth support for SDXL training.
- Update to the latest bitsandbytes release. New optional install option for bitsandbytes versions.
…on8bit) (#631)
* ADD libbitsandbytes.dll for 0.38.1
* Delete libbitsandbytes_cuda116.dll
* Delete cextension.py
* add main.py
* Update requirements.txt for bitsandbytes 0.38.1
* Update README.md for bitsandbytes-windows
* Update README-ja.md for bitsandbytes 0.38.1
* Update main.py for return cuda118
* Update train_util.py for lion8bit
* Update train_README-ja.md for lion8bit
* Update train_util.py for add DAdaptAdan and DAdaptSGD
* Update train_util.py for DAdaptadam
* Update train_network.py for dadapt
* Update train_README-ja.md for DAdapt
* Update train_util.py for DAdapt
* Update train_network.py for DAdaptAdaGrad
* Update train_db.py for DAdapt
* Update fine_tune.py for DAdapt
* Update train_textual_inversion.py for DAdapt
* Update train_textual_inversion_XTI.py for DAdapt
* Revert "Merge branch 'qinglong' into main". This reverts commit b65c023083d6d1e8a30eb42eddd603d1aac97650, reversing changes made to f6fda20caf5e773d56bcfb5c4575c650bb85362b.
* Revert "Update requirements.txt for bitsandbytes 0.38.1". This reverts commit 83abc60dfaddb26845f54228425b98dd67997528.
* Revert "Delete cextension.py". This reverts commit 3ba4dfe046874393f2a022a4cbef3628ada35391.
* Revert "Update README.md for bitsandbytes-windows". This reverts commit 4642c52086b5e9791233007e2fdfd97f832cd897.
* Revert "Update README-ja.md for bitsandbytes 0.38.1". This reverts commit fa6d7485ac067ebc49e6f381afdb8dd2f12caa8f.
* Update train_util.py
* Update requirements.txt
* support PagedAdamW8bit/PagedLion8bit
* Update requirements.txt
* update for PageAdamW8bit and PagedLion8bit
* Revert
* revert main
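The lion8bit and PagedAdamW8bit/PagedLion8bit work above relies on bitsandbytes storing optimizer state in 8 bits. As a rough, hypothetical illustration of the blockwise absmax quantization idea behind such optimizers (a toy sketch, not the library's actual algorithm; all names here are illustrative):

```python
# Toy sketch of blockwise absmax 8-bit quantization: optimizer state is
# kept as int8 codes plus one float scale per block, cutting state memory
# roughly 4x versus fp32. Illustrative only, not bitsandbytes internals.

def quantize_block(values):
    """Map floats to int8 codes in [-127, 127] using the block's absolute max."""
    scale = max(abs(v) for v in values) or 1.0
    codes = [round(v / scale * 127) for v in values]
    return codes, scale

def dequantize_block(codes, scale):
    """Recover approximate float values from int8 codes and the block scale."""
    return [c / 127 * scale for c in codes]

state = [0.5, -1.0, 0.25, 0.0]
codes, scale = quantize_block(state)
restored = dequantize_block(codes, scale)
# Each restored value is within one quantization step of the original.
assert all(abs(a - b) <= scale / 127 for a, b in zip(state, restored))
```

The "paged" variants additionally let the GPU driver page this state to CPU memory under pressure; that part is not modeled here.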
- Max norm adjust.
- Support for bitsandbytes 0.39.1 with Paged Optimizer.
- Support ckpt without position id in SD v1 (#687).
- Fix typo.
- Pool output fix, v_pred-like loss, etc.
- Fix sdxl_gen_img not working.
- Fix training the text encoder in SDXL not working.
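The merge-LoRA feature noted in the changelog folds low-rank adapter weights back into the base checkpoint, conceptually W' = W + (alpha / rank) * (up @ down). A minimal sketch of that math on plain lists (function names and shapes are illustrative, not taken from sd-scripts):

```python
# Hypothetical sketch of merging one LoRA weight pair into a base matrix:
# W' = W + (alpha / rank) * (up @ down). Illustrative names and shapes only.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(base, up, down, alpha):
    """Return base weights with the scaled low-rank update folded in."""
    rank = len(down)               # down has shape (rank x in_features)
    scale = alpha / rank
    delta = matmul(up, down)       # (out_features x in_features)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(base, delta)]

base = [[1.0, 0.0], [0.0, 1.0]]    # 2x2 identity "weight"
up = [[1.0], [2.0]]                # out_features x rank (rank = 1)
down = [[3.0, 4.0]]                # rank x in_features
merged = merge_lora(base, up, down, alpha=1.0)
# delta = [[3, 4], [6, 8]], so merged = [[4, 4], [6, 9]]
```

After merging every adapted layer this way, the LoRA file is no longer needed at inference time.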
wkpark added a commit to wkpark/sd-scripts that referenced this pull request on Nov 8, 2023 (merged, 3 tasks).
wkpark added a commit to wkpark/sd-scripts that referenced this pull request on Nov 9, 2023.
wkpark added a commit to wkpark/sd-scripts that referenced this pull request on Nov 9, 2023.
donhardman added a commit to donhardman/sd-scripts that referenced this pull request on Dec 3, 2023.
* Add custom separator
* Fix typo
* Fix typo again
* Fix min-snr-gamma for v-prediction and ZSNR. This fixes min-snr for vpred+zsnr by dividing directly by SNR+1. The old implementation did it in two steps: (min-snr/snr) * (snr/(snr+1)), which causes division by zero when combined with --zero_terminal_snr
* use **kwargs and change svd() calling convention to make svd() reusable
* add required attributes to model_org, model_tuned, save_to
* set "*_alpha" using str(float(foo))
* add min_diff, clamp_quantile args based on bmaltais/kohya_ss#1332 bmaltais/kohya_ss@a9ec90c
* add caption_separator option
* add Deep Shrink
* add gradual latent
* Update README.md
* format by black, add ja comment
* make separate U-Net for inference
* make slicing vae compatible with latest diffusers
* fix gradual latent cannot be disabled
* apply unsharp mask
* add unsharp mask
* fix strength error

Co-authored-by: Kohaku-Blueleaf <[email protected]>
Co-authored-by: feffy380 <[email protected]>
Co-authored-by: Won-Kyu Park <[email protected]>
Co-authored-by: Kohya S <[email protected]>
Co-authored-by: Kohya S <[email protected]>
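The min-snr-gamma fix in that commit can be shown numerically: with zero terminal SNR, some timesteps have snr == 0, so the old two-step weighting divides by zero, while the algebraically equivalent single-step form stays finite. A small sketch (function names are illustrative; the real code operates on tensors of per-timestep SNR values):

```python
# Old two-step min-snr-gamma weighting: (min(snr, gamma)/snr) * (snr/(snr+1)).
# Fails with ZeroDivisionError when snr == 0 (--zero_terminal_snr).
def min_snr_weight_old(snr, gamma):
    return (min(snr, gamma) / snr) * (snr / (snr + 1))

# Fixed single-step form: divide directly by snr + 1. Equal to the old
# form wherever snr > 0, and well-defined at snr == 0.
def min_snr_weight_new(snr, gamma):
    return min(snr, gamma) / (snr + 1)

# Identical where snr > 0:
assert abs(min_snr_weight_old(4.0, 5.0) - min_snr_weight_new(4.0, 5.0)) < 1e-12

# At snr == 0 the new form simply returns 0.0 instead of raising:
assert min_snr_weight_new(0.0, 5.0) == 0.0
```

The fix works because the snr factors in (min(snr, gamma)/snr) * (snr/(snr+1)) cancel symbolically, so skipping the intermediate division changes nothing mathematically while removing the singular point.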