Merge pull request #865 from bmaltais/dev2
v21.5.14
bmaltais committed May 28, 2023
2 parents 30b054b + 6bb8ec7 commit 47b55a4
Showing 14 changed files with 443 additions and 86 deletions.
14 changes: 12 additions & 2 deletions README.md
@@ -345,15 +345,25 @@ This will store a backup file with your current locally installed pip packages a

## Change History

-* 2023/07/15 (v21.5.12)
+* 2023/05/28 (v21.5.14)
+    - Add Group Images tool and GUI
+* 2023/05/24 (v21.5.13)
+    - Upgrade gradio release to fix an issue with UI refresh on config load.
+    - [D-Adaptation v3.0](https://github.com/facebookresearch/dadaptation) is now supported. [PR #530](https://github.com/kohya-ss/sd-scripts/pull/530) Thanks to sdbds!
+        - `--optimizer_type` now accepts `DAdaptAdamPreprint`, `DAdaptAdanIP`, and `DAdaptLion`.
+        - `DAdaptAdam` now refers to the new implementation; the old `DAdaptAdam` is available as `DAdaptAdamPreprint`.
+        - Simply specifying `DAdaptation` will use `DAdaptAdamPreprint` (same behavior as before).
+        - You need to install D-Adaptation v3.0. After activating the venv, run `pip install -U dadaptation`.
+        - See the PR and the D-Adaptation documentation for details.
+* 2023/05/22 (v21.5.12)
    - Fixed several bugs.
        - The state was saved even when the `--save_state` option was not specified in `fine_tune.py` and `train_db.py`. [PR #521](https://github.com/kohya-ss/sd-scripts/pull/521) Thanks to akshaal!
        - A LoRA without `alpha` could not be loaded. [PR #527](https://github.com/kohya-ss/sd-scripts/pull/527) Thanks to Manjiz!
        - Minor changes to console output during sample generation. [PR #515](https://github.com/kohya-ss/sd-scripts/pull/515) Thanks to yanhuifair!
    - The generation script now uses xformers for the VAE as well.
    - Fixed an issue where an error would occur if the encoding of the prompt file differed from the default. [PR #510](https://github.com/kohya-ss/sd-scripts/pull/510) Thanks to sdbds!
        - Please save the prompt file in UTF-8.
-* 2023/07/15 (v21.5.11)
+* 2023/05/15 (v21.5.11)
    - Added an option `--dim_from_weights` to `train_network.py` to automatically determine the dim (rank) from the weight file. [PR #491](https://github.com/kohya-ss/sd-scripts/pull/491) Thanks to AI-Casanova!
        - It is useful in combination with `resize_lora.py`. Please see the PR for details.
    - Fixed a bug where the noise resolution was incorrect with Multires noise. [PR #489](https://github.com/kohya-ss/sd-scripts/pull/489) Thanks to sdbds!
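To make the v21.5.13 optimizer notes above concrete, here is a minimal sketch of selecting the D-Adaptation v3.0 classes directly. The module and class names match the `dadaptation` package as imported in the `library/train_util.py` diff further down; the tiny model is a stand-in, and `lr=1.0` follows the D-Adaptation convention, since the method estimates its own step size.

```python
# Minimal sketch: the D-Adaptation v3.0 optimizer classes (assumed setup).
import torch
import dadaptation
import dadaptation.experimental as experimental

model = torch.nn.Linear(16, 16)  # stand-in for the network being trained
params = model.parameters()

# New DAdaptAdam implementation (what `--optimizer_type DAdaptAdam` maps to).
optimizer = dadaptation.DAdaptAdam(params, lr=1.0)

# The pre-v3 behavior, kept available as DAdaptAdamPreprint; `DAdaptation`
# still resolves to this class, so old configs behave the same.
# optimizer = experimental.DAdaptAdamPreprint(params, lr=1.0)

# Newly accepted by --optimizer_type as of this release.
# optimizer = dadaptation.DAdaptLion(params, lr=1.0)
```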
5 changes: 4 additions & 1 deletion docs/train_README-ja.md
@@ -615,9 +615,12 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking b
- Lion8bit : arguments as above
- SGDNesterov : [torch.optim.SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html), nesterov=True
- SGDNesterov8bit : arguments as above
-- DAdaptation(DAdaptAdam) : https://github.com/facebookresearch/dadaptation
+- DAdaptation(DAdaptAdamPreprint) : https://github.com/facebookresearch/dadaptation
+- DAdaptAdam : arguments as above
- DAdaptAdaGrad : arguments as above
- DAdaptAdan : arguments as above
+- DAdaptAdanIP : arguments as above
+- DAdaptLion : arguments as above
- DAdaptSGD : arguments as above
- AdaFactor : [Transformers AdaFactor](https://huggingface.co/docs/transformers/main_classes/optimizer_schedules)
- Any optimizer
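To illustrate the entries above, here is a minimal sketch of what the SGDNesterov entry resolves to. This is a generic PyTorch example, not the trainer's actual wiring; note that `nesterov=True` requires a nonzero momentum, which would typically be supplied via `--optimizer_args`.

```python
import torch

model = torch.nn.Linear(4, 4)  # stand-in for the trained network

# SGDNesterov is plain torch.optim.SGD with nesterov=True;
# PyTorch rejects nesterov=True unless momentum > 0.
optimizer = torch.optim.SGD(
    model.parameters(), lr=1e-3, momentum=0.9, nesterov=True
)
```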
10 changes: 8 additions & 2 deletions docs/train_README-zh.md
@@ -550,8 +550,14 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking b
- Lion : https://github.com/lucidrains/lion-pytorch
    - Same as specifying --use_lion_optimizer in past versions
- SGDNesterov : [torch.optim.SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html), nesterov=True
- SGDNesterov8bit : arguments as above
-- DAdaptation : https://github.com/facebookresearch/dadaptation
+- DAdaptation(DAdaptAdamPreprint) : https://github.com/facebookresearch/dadaptation
+- DAdaptAdam : arguments as above
+- DAdaptAdaGrad : arguments as above
+- DAdaptAdan : arguments as above
+- DAdaptAdanIP : arguments as above
+- DAdaptLion : arguments as above
+- DAdaptSGD : arguments as above
- AdaFactor : [Transformers AdaFactor](https://huggingface.co/docs/transformers/main_classes/optimizer_schedules)
- Any optimizer

6 changes: 4 additions & 2 deletions docs/train_network_README-ja.md
@@ -276,7 +276,9 @@ python networks\merge_lora.py --sd_model ..\model\model.ckpt

### Merging multiple LoRA models

-Applying multiple LoRA models to an SD model one at a time, versus merging the LoRAs together first and then merging the result into the SD model, gives subtly different results because of the order of computation.
+__As a rule, use `svd_merge_lora.py` when merging multiple LoRAs.__ Simply merging up weights with up weights and down weights with down weights produces incorrect results.
+
+Merging with `merge_lora.py` is valid only in very limited cases, such as when generating a LoRA by the difference-extraction method.

For example, the command line will look like the following.

@@ -294,7 +296,7 @@ python networks\merge_lora.py

With --ratios, specify each model's ratio (how much of its weight is reflected in the base model) as values from 0 to 1.0. When merging two models one-to-one, specify "0.5 0.5". With "1.0 1.0" the total weight becomes too large, and the result will likely be undesirable.

-LoRAs trained on v1 and LoRAs trained on v2, or LoRAs with different rank (dimension) or ``alpha``, cannot be merged. A U-Net-only LoRA and a U-Net+Text Encoder LoRA should be mergeable, but the result is unknown.
+LoRAs trained on v1 and LoRAs trained on v2, or LoRAs with different rank (dimension), cannot be merged. A U-Net-only LoRA and a U-Net+Text Encoder LoRA should be mergeable, but the result is unknown.


### Other options
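Since the example command is collapsed in this view, here is a hedged sketch of merging two LoRAs with `svd_merge_lora.py`. The flag names (`--models`, `--ratios`, `--new_rank`, `--save_to`) and file names are my assumption from the script's documentation; verify with `python networks/svd_merge_lora.py --help` before relying on them.

```python
# Hedged sketch: a one-to-one merge of two LoRAs via svd_merge_lora.py.
import subprocess

cmd = [
    "python", "networks/svd_merge_lora.py",
    "--save_to", "merged_lora.safetensors",           # hypothetical output path
    "--models", "lora_a.safetensors", "lora_b.safetensors",
    "--ratios", "0.5", "0.5",                         # "0.5 0.5" per the docs above
    "--new_rank", "32",                               # rank of the merged LoRA
]
subprocess.run(cmd, check=True)
```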
13 changes: 7 additions & 6 deletions kohya_gui.py
@@ -118,12 +118,13 @@ def UI(**kwargs):
                enable_copy_info_button=True,
                headless=headless,
            )
-            gradio_extract_dylora_tab(headless=headless)
-            gradio_extract_lora_tab(headless=headless)
-            gradio_extract_lycoris_locon_tab(headless=headless)
-            gradio_merge_lora_tab(headless=headless)
-            gradio_merge_lycoris_tab(headless=headless)
-            gradio_resize_lora_tab(headless=headless)
+            with gr.Tab('LoRA'):
+                gradio_extract_dylora_tab(headless=headless)
+                gradio_extract_lora_tab(headless=headless)
+                gradio_extract_lycoris_locon_tab(headless=headless)
+                gradio_merge_lora_tab(headless=headless)
+                gradio_merge_lycoris_tab(headless=headless)
+                gradio_resize_lora_tab(headless=headless)

    # Show the interface
    launch_kwargs = {}
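The kohya_gui.py change above moves the six LoRA tools under a single nested tab. A self-contained sketch of the same Gradio nesting pattern follows; it is a generic illustration with placeholder content, not the repository's actual layout.

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Tab('Utilities'):
        with gr.Tab('LoRA'):  # nested tab, as in the change above
            gr.Markdown('LoRA extract/merge/resize tools would be registered here.')
        with gr.Tab('Captioning'):
            gr.Markdown('Captioning tools would live here.')

# demo.launch()
```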
4 changes: 4 additions & 0 deletions library/common_gui.py
@@ -786,7 +786,11 @@ def gradio_training(
            'Adafactor',
            'DAdaptation',
            'DAdaptAdaGrad',
+            'DAdaptAdam',
            'DAdaptAdan',
+            'DAdaptAdanIP',
+            'DAdaptAdamPreprint',
+            'DAdaptLion',
            'DAdaptSGD',
            'Lion',
            'Lion8bit',
110 changes: 110 additions & 0 deletions library/group_images_gui.py
@@ -0,0 +1,110 @@
import gradio as gr
from easygui import msgbox
import subprocess
from .common_gui import get_folder_path
import os

PYTHON = 'python3' if os.name == 'posix' else './venv/Scripts/python.exe'


def group_images(
    input_folder,
    output_folder,
    group_size,
    include_subfolders,
    do_not_copy_other_files,
):
    if input_folder == '':
        msgbox('Input folder is missing...')
        return

    if output_folder == '':
        msgbox('Please provide an output folder.')
        return

    print(f'Grouping images in {input_folder}...')

    # Build the command line for tools/group_images.py
    run_cmd = f'{PYTHON} "{os.path.join("tools", "group_images.py")}"'
    run_cmd += f' "{input_folder}"'
    run_cmd += f' "{output_folder}"'
    run_cmd += f' {group_size}'
    if include_subfolders:
        run_cmd += ' --include_subfolders'
    if do_not_copy_other_files:
        run_cmd += ' --do_not_copy_other_files'

    print(run_cmd)

    if os.name == 'posix':
        os.system(run_cmd)
    else:
        subprocess.run(run_cmd)

    print('...grouping done')


def gradio_group_images_gui_tab(headless=False):
    with gr.Tab('Group Images'):
        gr.Markdown('This utility will group images in a folder based on their aspect ratio.')

        with gr.Row():
            input_folder = gr.Textbox(
                label='Input folder',
                placeholder='Directory containing the images to group',
                interactive=True,
            )
            button_input_folder = gr.Button(
                '📂', elem_id='open_folder_small', visible=(not headless)
            )
            button_input_folder.click(
                get_folder_path,
                outputs=input_folder,
                show_progress=False,
            )

            output_folder = gr.Textbox(
                label='Output folder',
                placeholder='Directory where the grouped images will be stored',
                interactive=True,
            )
            button_output_folder = gr.Button(
                '📂', elem_id='open_folder_small', visible=(not headless)
            )
            button_output_folder.click(
                get_folder_path,
                outputs=output_folder,
                show_progress=False,
            )
        with gr.Row():
            group_size = gr.Slider(
                label='Group size',
                info='Number of images to group together',
                value=4,  # numeric default; the original passed the string '4'
                minimum=1, maximum=64, step=1,
                interactive=True,
            )

            include_subfolders = gr.Checkbox(
                label='Include Subfolders',
                value=False,
                info='Include images in subfolders as well',
            )

            do_not_copy_other_files = gr.Checkbox(
                label='Do not copy other files',
                value=False,
                info='Do not copy other files in the input folder to the output folder',
            )

        group_images_button = gr.Button('Group images')

        group_images_button.click(
            group_images,
            inputs=[
                input_folder,
                output_folder,
                group_size,
                include_subfolders,
                do_not_copy_other_files,
            ],
            show_progress=False,
        )
20 changes: 15 additions & 5 deletions library/train_util.py
@@ -1940,7 +1940,7 @@ def add_optimizer_arguments(parser: argparse.ArgumentParser):
"--optimizer_type",
type=str,
default="",
help="Optimizer to use / オプティマイザの種類: AdamW (default), AdamW8bit, Lion8bit, Lion, SGDNesterov, SGDNesterov8bit, DAdaptation(DAdaptAdam), DAdaptAdaGrad, DAdaptAdan, DAdaptSGD, AdaFactor",
help="Optimizer to use / オプティマイザの種類: AdamW (default), AdamW8bit, Lion8bit, Lion, SGDNesterov, SGDNesterov8bit, DAdaptation(DAdaptAdamPreprint), DAdaptAdaGrad, DAdaptAdam, DAdaptAdan, DAdaptAdanIP, DAdaptLion, DAdaptSGD, AdaFactor",
)

# backward compatibility
@@ -2545,7 +2545,7 @@ def task():


def get_optimizer(args, trainable_params):
-    # "Optimizer to use: AdamW, AdamW8bit, Lion, SGDNesterov, SGDNesterov8bit, Lion8bit, DAdaptation, DAdaptation(DAdaptAdam), DAdaptAdaGrad, DAdaptAdan, DAdaptSGD, Adafactor"
+    # "Optimizer to use: AdamW, AdamW8bit, Lion, SGDNesterov, SGDNesterov8bit, Lion8bit, DAdaptation(DAdaptAdamPreprint), DAdaptAdaGrad, DAdaptAdam, DAdaptAdan, DAdaptAdanIP, DAdaptLion, DAdaptSGD, Adafactor"

    optimizer_type = args.optimizer_type
    if args.use_8bit_adam:
@@ -2653,6 +2653,7 @@ def get_optimizer(args, trainable_params):
    # check dadaptation is installed
    try:
        import dadaptation
+        import dadaptation.experimental as experimental
    except ImportError:
        raise ImportError("No dadaptation / dadaptation がインストールされていないようです")

@@ -2677,15 +2678,24 @@
        )

    # set optimizer
-    if optimizer_type == "DAdaptation".lower() or optimizer_type == "DAdaptAdam".lower():
-        optimizer_class = dadaptation.DAdaptAdam
-        print(f"use D-Adaptation Adam optimizer | {optimizer_kwargs}")
+    if optimizer_type == "DAdaptation".lower() or optimizer_type == "DAdaptAdamPreprint".lower():
+        optimizer_class = experimental.DAdaptAdamPreprint
+        print(f"use D-Adaptation AdamPreprint optimizer | {optimizer_kwargs}")
    elif optimizer_type == "DAdaptAdaGrad".lower():
        optimizer_class = dadaptation.DAdaptAdaGrad
        print(f"use D-Adaptation AdaGrad optimizer | {optimizer_kwargs}")
+    elif optimizer_type == "DAdaptAdam".lower():
+        optimizer_class = dadaptation.DAdaptAdam
+        print(f"use D-Adaptation Adam optimizer | {optimizer_kwargs}")
    elif optimizer_type == "DAdaptAdan".lower():
        optimizer_class = dadaptation.DAdaptAdan
        print(f"use D-Adaptation Adan optimizer | {optimizer_kwargs}")
+    elif optimizer_type == "DAdaptAdanIP".lower():
+        optimizer_class = experimental.DAdaptAdanIP
+        print(f"use D-Adaptation AdanIP optimizer | {optimizer_kwargs}")
+    elif optimizer_type == "DAdaptLion".lower():
+        optimizer_class = dadaptation.DAdaptLion
+        print(f"use D-Adaptation Lion optimizer | {optimizer_kwargs}")
    elif optimizer_type == "DAdaptSGD".lower():
        optimizer_class = dadaptation.DAdaptSGD
        print(f"use D-Adaptation SGD optimizer | {optimizer_kwargs}")
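The if/elif chain above can be read as a lookup from the lowercased `--optimizer_type` value to an optimizer class. A condensed sketch of that mapping follows; the dictionary formulation is an alternative illustration, not the repository's code.

```python
import dadaptation
import dadaptation.experimental as experimental

# Lowercased --optimizer_type value -> D-Adaptation class (illustrative).
DADAPT_CLASSES = {
    "dadaptation": experimental.DAdaptAdamPreprint,  # legacy alias, old behavior
    "dadaptadampreprint": experimental.DAdaptAdamPreprint,
    "dadaptadam": dadaptation.DAdaptAdam,
    "dadaptadagrad": dadaptation.DAdaptAdaGrad,
    "dadaptadan": dadaptation.DAdaptAdan,
    "dadaptadanip": experimental.DAdaptAdanIP,
    "dadaptlion": dadaptation.DAdaptLion,
    "dadaptsgd": dadaptation.DAdaptSGD,
}

def resolve_dadapt(optimizer_type: str):
    return DADAPT_CLASSES[optimizer_type.lower()]
```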
2 changes: 2 additions & 0 deletions library/utilities.py
@@ -11,6 +11,7 @@
from library.blip_caption_gui import gradio_blip_caption_gui_tab
from library.git_caption_gui import gradio_git_caption_gui_tab
from library.wd14_caption_gui import gradio_wd14_caption_gui_tab
+from library.group_images_gui import gradio_group_images_gui_tab


def utilities_tab(
@@ -28,6 +29,7 @@ def utilities_tab(
    gradio_git_caption_gui_tab(headless=headless)
    gradio_wd14_caption_gui_tab(headless=headless)
    gradio_convert_model_tab(headless=headless)
+    gradio_group_images_gui_tab(headless=headless)

    return (
        train_data_dir_input,
8 changes: 7 additions & 1 deletion lora_gui.py
@@ -558,6 +558,12 @@ def train_model(
        )
        reg_factor = 2

+    print(f'Total steps: {total_steps}')
+    print(f'Train batch size: {train_batch_size}')
+    print(f'Gradient accumulation steps: {gradient_accumulation_steps}')
+    print(f'Epoch: {epoch}')
+    print(f'Regularization factor: {reg_factor}')
+
    # calculate max_train_steps
    max_train_steps = int(
        math.ceil(
@@ -568,7 +574,7 @@
            * int(reg_factor)
        )
    )
-    print(f'max_train_steps = {max_train_steps}')
+    print(f'max_train_steps ({total_steps} / {train_batch_size} / {gradient_accumulation_steps} * {epoch} * {reg_factor}) = {max_train_steps}')

    # calculate stop encoder training
    if stop_text_encoder_training_pct == None:
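To make the step arithmetic printed above concrete, here is a small worked example with made-up values (all numbers are hypothetical):

```python
import math

total_steps = 1600                # image repeats counted from the dataset
train_batch_size = 2
gradient_accumulation_steps = 1
epoch = 10
reg_factor = 1                    # 2 when regularization images are used

max_train_steps = int(
    math.ceil(
        float(total_steps)
        / int(train_batch_size)
        / int(gradient_accumulation_steps)
        * int(epoch)
        * int(reg_factor)
    )
)
print(max_train_steps)  # 1600 / 2 / 1 * 10 * 1 = 8000
```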
4 changes: 2 additions & 2 deletions requirements.txt
@@ -6,12 +6,12 @@ altair==4.2.2
# This next line is not an error but rather there to properly catch if the url based bitsandbytes was properly installed by the line above...
bitsandbytes==0.35.0; sys_platform == 'win32'
bitsandbytes==0.38.1; (sys_platform == "darwin" or sys_platform == "linux")
-dadaptation==1.5
+dadaptation==3.1
diffusers[torch]==0.10.2
easygui==0.98.3
einops==0.6.0
ftfy==6.1.1
-gradio==3.28.1; sys_platform != 'darwin'
+gradio==3.32.0; sys_platform != 'darwin'
gradio==3.23.0; sys_platform == 'darwin'
lion-pytorch==0.0.6
opencv-python==4.7.0.68
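A quick way to confirm the bumped pins above are what the active venv actually has; this uses only the standard library, and the expected versions are the ones pinned in this commit's requirements.txt (gradio 3.32.0 applies to non-macOS platforms).

```python
from importlib.metadata import version

# Expect dadaptation 3.1 and gradio 3.32.0 after reinstalling requirements.
for pkg in ("dadaptation", "gradio"):
    print(pkg, version(pkg))
```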
2 changes: 1 addition & 1 deletion setup.bat
@@ -42,7 +42,7 @@ if %choice%==1 (
    pip install --use-pep517 --upgrade -r requirements.txt
    pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
) else (
-    pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
+    pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
    pip install --use-pep517 --upgrade -r requirements.txt
    pip install --upgrade xformers==0.0.19
    rem pip install -U -I --no-deps https://files.pythonhosted.org/packages/d6/f7/02662286419a2652c899e2b3d1913c47723fc164b4ac06a85f769c291013/xformers-0.0.17rc482-cp310-cp310-win_amd64.whl