
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward) #14097

kkget opened this issue Nov 25, 2023 · 19 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments


kkget commented Nov 25, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)
Note: the Python runtime raised an exception. Please check the troubleshooting page.
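This error means a convolution ran with its input on `cuda:0` while the layer's weights were still on the CPU (here, inside the ControlNet hint block). A minimal sketch of the failure mode and the generic fix, using a hypothetical `ensure_same_device` helper that is not part of webui or any extension:

```python
import torch
import torch.nn as nn

def ensure_same_device(module: nn.Module, x: torch.Tensor) -> nn.Module:
    """Illustrative helper: move `module` to the device of `x` if they differ."""
    param = next(module.parameters(), None)
    if param is not None and param.device != x.device:
        module = module.to(x.device)
    return module

conv = nn.Conv2d(3, 8, 3)              # weights start on CPU by default
x = torch.randn(1, 3, 64, 64)          # on the failing setup this tensor is on cuda:0
# If x were on cuda:0 while conv stayed on CPU, conv(x) would raise:
# RuntimeError: Expected all tensors to be on the same device ...
conv = ensure_same_device(conv, x)     # align devices before the forward pass
y = conv(x)
```

In the webui case the alignment has to happen inside the extension's hook (the ControlNet model must be moved to the UNet's device), so this is a diagnosis sketch rather than a user-side fix.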

Steps to reproduce the problem

Normal use.

What should have happened?

AnimateDiff and ControlNet should work together (generation should complete without errors).

Sysinfo

python: 3.10.11  •  torch: 2.0.0+cu118  •  xformers: 0.0.17  •  gradio: 3.41.2

What browsers do you use to access the UI?

No response

Console logs

Startup time: 46.8s (prepare environment: 25.9s, import torch: 4.7s, import gradio: 1.0s, setup paths: 0.5s, initialize shared: 0.2s, other imports: 0.5s, setup codeformer: 0.3s, load scripts: 8.4s, create ui: 4.1s, gradio launch: 0.6s, app_started_callback: 0.5s).
Loading VAE weights specified in settings: E:\sd-webui-aki\sd-webui-aki-v4\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 6.5s (load weights from disk: 0.6s, create model: 0.9s, apply weights to model: 4.3s, load VAE: 0.4s, calculate empty prompt: 0.1s).
refresh_ui
Restoring base VAE
Applying attention optimization: xformers... done.
VAE weights loaded.
2023-11-25 18:37:19,315 - ControlNet - INFO - Loading model: control_v11f1p_sd15_depth [cfd03158]
2023-11-25 18:37:19,995 - ControlNet - INFO - Loaded state_dict from [E:\sd-webui-aki\sd-webui-aki-v4\models\ControlNet\control_v11f1p_sd15_depth.pth]
2023-11-25 18:37:19,996 - ControlNet - INFO - controlnet_default_config
2023-11-25 18:37:22,842 - ControlNet - INFO - ControlNet model control_v11f1p_sd15_depth [cfd03158] loaded.
2023-11-25 18:37:23,008 - ControlNet - INFO - Loading preprocessor: depth
2023-11-25 18:37:23,010 - ControlNet - INFO - preprocessor resolution = 896
2023-11-25 18:37:27,343 - ControlNet - INFO - ControlNet Hooked - Time = 8.458001852035522

0: 640x384 1 face, 78.0ms
Speed: 4.0ms preprocess, 78.0ms inference, 29.0ms postprocess per image at shape (1, 3, 640, 384)
2023-11-25 18:37:50,189 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-11-25 18:37:50,192 - ControlNet - INFO - Loading preprocessor: depth
2023-11-25 18:37:50,192 - ControlNet - INFO - preprocessor resolution = 896
2023-11-25 18:37:50,279 - ControlNet - INFO - ControlNet Hooked - Time = 0.22900152206420898
2023-11-25 18:38:30,791 - AnimateDiff - INFO - AnimateDiff process start.
2023-11-25 18:38:30,791 - AnimateDiff - INFO - Loading motion module mm_sd_v15_v2.ckpt from E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-animatediff\model\mm_sd_v15_v2.ckpt
2023-11-25 18:38:31,574 - AnimateDiff - INFO - Guessed mm_sd_v15_v2.ckpt architecture: MotionModuleType.AnimateDiffV2
2023-11-25 18:38:33,296 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-11-25 18:38:34,243 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet middle block.
2023-11-25 18:38:34,245 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet input blocks.
2023-11-25 18:38:34,245 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet output blocks.
2023-11-25 18:38:34,246 - AnimateDiff - INFO - Setting DDIM alpha.
2023-11-25 18:38:34,254 - AnimateDiff - INFO - Injection finished.
2023-11-25 18:38:34,254 - AnimateDiff - INFO - Hacking loral to support motion lora
2023-11-25 18:38:34,254 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
2023-11-25 18:38:34,254 - AnimateDiff - INFO - Hacking ControlNet.
*** Error completing request
*** Arguments: ('task(8jna2axn6nwg2d4)', '1 sex girl, big breasts, solo, high heels, skirt, thigh strap, squatting, black footwear, long hair, closed eyes, multicolored hair, red hair, black shirt, sleeveless, black skirt, full body, shirt, lips, brown hair, black hair, sleeveless shirt, bare shoulders, crop top, midriff, grey background, simple background, ', 'bad hands, normal quality, ((monochrome)), ((grayscale)), ((strabismus)), ng_deepnegative_v1_75t, (bad-hands-5:1.3), (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad_prompt, badhandv4, EasyNegative, ', [], 20, 'Euler a', 1, 1, 7, 1600, 896, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000024A8CCB4670>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', False, 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n    "face_detector": "RetinaFace",\n    "rules": {\n        "then": {\n            "face_processor": "img2img",\n            "mask_generator": {\n                "name": "BiSeNet",\n                "params": {\n                    "fallback_ratio": 0.1\n                }\n            }\n        }\n    }\n}', 'None', 40, <animatediff_utils.py.AnimateDiffProcess object at 0x0000024A8CC58940>, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000024A3BD73F10>, UiControlNetUnit(enabled=True, module='depth_midas', model='control_v11f1p_sd15_depth [cfd03158]', weight=1, image={'image': array([[[183, 187, 189],
***         … (image and mask pixel array values elided) …
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, '🔄', False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, False, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', 0, '', 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n    "face_detector": "RetinaFace",\n    "rules": {\n        "then": {\n            "face_processor": "img2img",\n            "mask_generator": {\n                "name": "BiSeNet",\n                "params": {\n                    "fallback_ratio": 0.1\n                }\n    
        }\n        }\n    }\n}', 'None', 40, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
    Traceback (most recent call last):
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-prompt-history\lib_history\image_process_hijacker.py", line 21, in process_images
        res = original_function(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 77, in hacked_processing_process_images_hijack
        assert global_input_frames, 'No input images found for ControlNet module'
    AssertionError: No input images found for ControlNet module
Note: the Python runtime raised an exception. Please check the troubleshooting page.

---
2023-11-25 18:39:52,611 - AnimateDiff - INFO - AnimateDiff process start.
2023-11-25 18:39:52,611 - AnimateDiff - INFO - Motion module already injected. Trying to restore.
2023-11-25 18:39:52,612 - AnimateDiff - INFO - Restoring DDIM alpha.
2023-11-25 18:39:52,612 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-11-25 18:39:52,612 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-11-25 18:39:52,612 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet middle block.
2023-11-25 18:39:52,612 - AnimateDiff - INFO - Removal finished.
2023-11-25 18:39:52,650 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet middle block.
2023-11-25 18:39:52,650 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet input blocks.
2023-11-25 18:39:52,650 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet output blocks.
2023-11-25 18:39:52,650 - AnimateDiff - INFO - Setting DDIM alpha.
2023-11-25 18:39:52,656 - AnimateDiff - INFO - Injection finished.
2023-11-25 18:39:52,656 - AnimateDiff - INFO - AnimateDiff LoRA already hacked
2023-11-25 18:39:52,656 - AnimateDiff - INFO - CFGDenoiser already hacked
2023-11-25 18:39:52,657 - AnimateDiff - INFO - Hacking ControlNet.
2023-11-25 18:39:52,657 - AnimateDiff - INFO - BatchHijack already hacked.
2023-11-25 18:39:52,657 - AnimateDiff - INFO - ControlNet Main Entry already hacked.
2023-11-25 18:39:52,697 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-11-25 18:39:55,575 - ControlNet - INFO - Loading preprocessor: depth
2023-11-25 18:39:55,576 - ControlNet - INFO - preprocessor resolution = 448
2023-11-25 18:40:03,478 - ControlNet - INFO - ControlNet Hooked - Time = 10.787747859954834
*** Error completing request
*** Arguments: ('task(pud1kvh3lzj0ocs)', '1 sex girl, big breasts, solo, high heels, skirt, thigh strap, squatting, black footwear, long hair, closed eyes, multicolored hair, red hair, black shirt, sleeveless, black skirt, full body, shirt, lips, brown hair, black hair, sleeveless shirt, bare shoulders, crop top, midriff, grey background, simple background, ', 'bad hands, normal quality, ((monochrome)), ((grayscale)), ((strabismus)), ng_deepnegative_v1_75t, (bad-hands-5:1.3), (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad_prompt, badhandv4, EasyNegative, ', [], 20, 'Euler a', 1, 1, 7, 800, 448, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000024A8CCB7DC0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', False, 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n    "face_detector": "RetinaFace",\n    "rules": {\n        "then": {\n            "face_processor": "img2img",\n            "mask_generator": {\n                "name": "BiSeNet",\n                "params": {\n                    "fallback_ratio": 0.1\n                }\n            }\n        }\n    }\n}', 'None', 40, <animatediff_utils.py.AnimateDiffProcess object at 0x0000024A8CC58940>, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000024AB1539DB0>, UiControlNetUnit(enabled=True, module='depth_midas', model='control_v11f1p_sd15_depth [cfd03158]', weight=1, image={'image': array([[[183, 187, 189],
***         … (image and mask pixel array values elided) …
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, '🔄', False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, False, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', 0, '', 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n    "face_detector": "RetinaFace",\n    "rules": {\n        "then": {\n            "face_processor": "img2img",\n            "mask_generator": {\n                "name": "BiSeNet",\n                "params": {\n                    "fallback_ratio": 0.1\n                }\n    
        }\n        }\n    }\n}', 'None', 40, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
    Traceback (most recent call last):
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-prompt-history\lib_history\image_process_hijacker.py", line 21, in process_images
        res = original_function(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 118, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 420, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 269, in mm_cfg_forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_hijack_utils.py", line 26, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_hijack_unet.py", line 48, in apply_model
        return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 827, in forward_webui
        raise e
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 824, in forward_webui
        return forward(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 561, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\cldm.py", line 300, in forward
        guided_hint = self.input_hint_block(hint, emb, context)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 102, in forward
        x = layer(x)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)
Note: the Python runtime raised an exception. Please check the troubleshooting page.
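The traceback shows the mismatch surfaces in `input_hint_block` of the ControlNet model, i.e. some of its parameters never left the CPU. A hedged diagnostic sketch for narrowing down which parameters are misplaced (`find_misplaced` is an illustrative helper, not a webui API):

```python
import torch
import torch.nn as nn

def find_misplaced(module: nn.Module, expected: str = "cpu"):
    """Return (name, device) pairs for parameters not on the expected device."""
    expected_dev = torch.device(expected)
    return [
        (name, str(param.device))
        for name, param in module.named_parameters()
        if param.device != expected_dev
    ]

# Stand-in for the ControlNet hint block; on the failing setup you would pass
# the real control model and expected="cuda:0".
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 3, 3))
misplaced = find_misplaced(net, "cpu")
```

An empty list means every parameter already sits on the expected device; any entries name the exact submodules that need a `.to(device)` in the extension's loading path.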

---
2023-11-25 18:43:26,500 - AnimateDiff - INFO - AnimateDiff process start.
2023-11-25 18:43:26,501 - AnimateDiff - INFO - Motion module already injected. Trying to restore.
2023-11-25 18:43:26,501 - AnimateDiff - INFO - Restoring DDIM alpha.
2023-11-25 18:43:26,502 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-11-25 18:43:26,503 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-11-25 18:43:26,503 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet middle block.
2023-11-25 18:43:26,503 - AnimateDiff - INFO - Removal finished.
2023-11-25 18:43:26,519 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet middle block.
2023-11-25 18:43:26,519 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet input blocks.
2023-11-25 18:43:26,519 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet output blocks.
2023-11-25 18:43:26,520 - AnimateDiff - INFO - Setting DDIM alpha.
2023-11-25 18:43:26,522 - AnimateDiff - INFO - Injection finished.
2023-11-25 18:43:26,522 - AnimateDiff - INFO - AnimateDiff LoRA already hacked
2023-11-25 18:43:26,522 - AnimateDiff - INFO - CFGDenoiser already hacked
2023-11-25 18:43:26,522 - AnimateDiff - INFO - Hacking ControlNet.
2023-11-25 18:43:26,522 - AnimateDiff - INFO - BatchHijack already hacked.
2023-11-25 18:43:26,522 - AnimateDiff - INFO - ControlNet Main Entry already hacked.
2023-11-25 18:43:26,736 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-11-25 18:43:28,938 - ControlNet - INFO - Loading preprocessor: depth
2023-11-25 18:43:28,939 - ControlNet - INFO - preprocessor resolution = 448
2023-11-25 18:43:38,425 - ControlNet - INFO - ControlNet Hooked - Time = 11.88900351524353
*** Error completing request
*** Arguments: ('task(ugw097u8h16i04e)', '1 sex girl, big breasts, solo, high heels, skirt, thigh strap, squatting, black footwear, long hair, closed eyes, multicolored hair, red hair, black shirt, sleeveless, black skirt, full body, shirt, lips, brown hair, black hair, sleeveless shirt, bare shoulders, crop top, midriff, grey background, simple background, ', 'bad hands, normal quality, ((monochrome)), ((grayscale)), ((strabismus)), ng_deepnegative_v1_75t, (bad-hands-5:1.3), (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad_prompt, badhandv4, EasyNegative, ', [], 20, 'Euler a', 1, 1, 7, 800, 448, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000024A8CC5E950>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', False, 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n    "face_detector": "RetinaFace",\n    "rules": {\n        "then": {\n            "face_processor": "img2img",\n            "mask_generator": {\n                "name": "BiSeNet",\n                "params": {\n                    "fallback_ratio": 0.1\n                }\n            }\n        }\n    }\n}', 'None', 40, <animatediff_utils.py.AnimateDiffProcess object at 0x0000024A8CC58940>, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000024A8CC59F60>, UiControlNetUnit(enabled=True, module='depth_midas', model='control_v11f1p_sd15_depth [cfd03158]', weight=1, image={'image': array([[[183, 187, 189],
***         [183, 187, 189],
***         [183, 187, 189],
***         ...,
***         [185, 189, 191],
***         [185, 189, 191],
***         [185, 189, 191]],
*** 
***        [[183, 187, 189],
***         [183, 187, 189],
***         [183, 187, 189],
***         ...,
***         [185, 189, 191],
***         [185, 189, 191],
***         [185, 189, 191]],
*** 
***        [[183, 187, 189],
***         [183, 187, 189],
***         [183, 187, 189],
***         ...,
***         [185, 189, 191],
***         [185, 189, 191],
***         [185, 189, 191]],
*** 
***        ...,
*** 
***        [[223, 224, 227],
***         [223, 224, 227],
***         [223, 224, 227],
***         ...,
***         [227, 227, 227],
***         [227, 227, 227],
***         [227, 227, 227]],
*** 
***        [[223, 224, 227],
***         [223, 224, 227],
***         [223, 224, 227],
***         ...,
***         [227, 227, 227],
***         [227, 227, 227],
***         [227, 227, 227]],
*** 
***        [[223, 224, 227],
***         [223, 224, 227],
***         [223, 224, 227],
***         ...,
***         [227, 227, 227],
***         [227, 227, 227],
***         [227, 227, 227]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        ...,
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, '🔄', False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, False, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', 0, '', 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, ['Face'], False, '{\n    "face_detector": "RetinaFace",\n    "rules": {\n        "then": {\n            "face_processor": "img2img",\n            "mask_generator": {\n                "name": "BiSeNet",\n                "params": {\n                    "fallback_ratio": 0.1\n                }\n    
        }\n        }\n    }\n}', 'None', 40, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
    Traceback (most recent call last):
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-prompt-history\lib_history\image_process_hijacker.py", line 21, in process_images
        res = original_function(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 118, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 420, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 269, in mm_cfg_forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_hijack_utils.py", line 26, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\modules\sd_hijack_unet.py", line 48, in apply_model
        return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 827, in forward_webui
        raise e
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 824, in forward_webui
        return forward(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\hook.py", line 561, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions\sd-webui-controlnet\scripts\cldm.py", line 300, in forward
        guided_hint = self.input_hint_block(hint, emb, context)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 102, in forward
        x = layer(x)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\sd-webui-aki\sd-webui-aki-v4\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "E:\sd-webui-aki\sd-webui-aki-v4\py310\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)
Note: the Python runtime threw an exception. Please check the troubleshooting page.

---

Additional information

No response

@kkget kkget added the bug-report Report of a bug, yet to be confirmed label Nov 25, 2023
@read-0nly
Contributor

I got the same error using "--use-cpu all". If you modify this segment of modules/devices.py

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

to always return cpu, it properly forces CN onto the CPU as well, which points to the issue being that it looks for a task listed in use_cpu and doesn't find it. Digging into CN's code, it calls get_device_for("controlnet"). So I reverted the code and tried something different.

I'm guessing you're using the same flag, just "--use-cpu all"?

I got it working with the original code by replacing that with "--use-cpu all controlnet". This effectively makes use_cpu an array holding "all" and "controlnet", allowing CN to find itself and get the CPU.
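Why adding the extra token works comes down to how argparse builds the list. A minimal sketch (the `nargs="+"` declaration here approximates webui's actual flag definition in cmd_args.py, which may differ in detail):

```python
import argparse

# Approximate how webui parses --use-cpu: each space-separated token
# after the flag becomes one element of the resulting list.
parser = argparse.ArgumentParser()
parser.add_argument("--use-cpu", nargs="+", default=[], dest="use_cpu")

opts = parser.parse_args(["--use-cpu", "all", "controlnet"])
print(opts.use_cpu)                  # ['all', 'controlnet']
print("controlnet" in opts.use_cpu)  # True, so get_device_for("controlnet") returns cpu
```

With only "--use-cpu all", the list is `['all']`, so the literal membership test for "controlnet" fails and ControlNet stays on the GPU.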

Another fix for this might be

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu or "all" in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

which makes 'all' behave as the term implies, forcing the CPU for everything. This also applies to something called swinir (which, from what I can tell, was also missed by "all" previously). This would be a code-side fix, whereas adding controlnet to the argument list is user-side and easier to do.
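The behavioral difference between the original and patched check can be shown in isolation (a minimal sketch with stand-in device strings; `shared.cmd_opts` and `get_optimal_device()` are replaced by plain values here):

```python
use_cpu = ["all"]  # what "--use-cpu all" produces

def get_device_for_original(task):
    # Original behavior: a literal membership test only, so "all"
    # is never expanded to cover individual tasks.
    if task in use_cpu:
        return "cpu"
    return "cuda:0"  # stand-in for get_optimal_device()

def get_device_for_patched(task):
    # Proposed fix: treat "all" as covering every task.
    if task in use_cpu or "all" in use_cpu:
        return "cpu"
    return "cuda:0"

print(get_device_for_original("controlnet"))  # cuda:0 -> device mismatch with SD
print(get_device_for_patched("controlnet"))   # cpu
```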

read-0nly added a commit to read-0nly/stable-diffusion-webui that referenced this issue Nov 28, 2023
Fixes an issue where "--use-cpu all" properly makes SD run on the CPU but leaves ControlNet (and, I presume, other extensions) pointed at the GPU, causing a crash in ControlNet due to a device mismatch between SD and CN.

AUTOMATIC1111#14097
@read-0nly
Contributor

Also I applaud the courage with which you just posted that prompt loll, I'd have at least generated a sfw prompt to attach before posting logs.

@kkget
Author

kkget commented Nov 28, 2023

Also I applaud the courage with which you just posted that prompt loll, I'd have at least generated a sfw prompt to attach before posting logs.

If you try this prompt yourself, you'll find it's normal

@kkget
Author

kkget commented Nov 28, 2023

I got the same error using "--use-cpu all". If you modify this segment of modules/devices.py

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

to always return cpu, it properly forces CN onto the CPU as well, which points to the issue being that it looks for a task listed in use_cpu and doesn't find it. Digging into CN's code, it calls get_device_for("controlnet"). So I reverted the code and tried something different.

I'm guessing you're using the same flag, just "--use-cpu all"?

I got it working with the original code by replacing that with "--use-cpu all controlnet". This effectively makes use_cpu an array holding "all" and "controlnet", allowing CN to find itself and get the CPU.

Another fix for this might be

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu or "all" in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

which makes 'all' behave as the term implies, forcing the CPU for everything. This also applies to something called swinir (which, from what I can tell, was also missed by "all" previously). This would be a code-side fix, whereas adding controlnet to the argument list is user-side and easier to do.

Thank you for your answer. I'll give it a try

@read-0nly
Contributor

I got the same error using "--use-cpu all". If you modify this segment of modules/devices.py

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

to always return cpu, it properly forces CN onto the CPU as well, which points to the issue being that it looks for a task listed in use_cpu and doesn't find it. Digging into CN's code, it calls get_device_for("controlnet"). So I reverted the code and tried something different.
I'm guessing you're using the same flag, just "--use-cpu all"?
I got it working with the original code by replacing that with "--use-cpu all controlnet". This effectively makes use_cpu an array holding "all" and "controlnet", allowing CN to find itself and get the CPU.
Another fix for this might be

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu or "all" in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

which makes 'all' behave as the term implies, forcing the CPU for everything. This also applies to something called swinir (which, from what I can tell, was also missed by "all" previously). This would be a code-side fix, whereas adding controlnet to the argument list is user-side and easier to do.

Thank you for your answer. I'll give it a try

I forgot to mention that you might need to add --no-half-controlnet - it's the ControlNet version of --no-half, which seems necessary to run on the CPU.

@kkget
Author

kkget commented Nov 28, 2023

I got the same error using "--use-cpu all". If you modify this segment of modules/devices.py

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

to always return cpu, it properly forces CN onto the CPU as well, which points to the issue being that it looks for a task listed in use_cpu and doesn't find it. Digging into CN's code, it calls get_device_for("controlnet"). So I reverted the code and tried something different.
I'm guessing you're using the same flag, just "--use-cpu all"?
I got it working with the original code by replacing that with "--use-cpu all controlnet". This effectively makes use_cpu an array holding "all" and "controlnet", allowing CN to find itself and get the CPU.
Another fix for this might be

def get_device_for(task):
    if task in shared.cmd_opts.use_cpu or "all" in shared.cmd_opts.use_cpu:
        return cpu

    return get_optimal_device()

which makes 'all' behave as the term implies, forcing the CPU for everything. This also applies to something called swinir (which, from what I can tell, was also missed by "all" previously). This would be a code-side fix, whereas adding controlnet to the argument list is user-side and easier to do.

Thank you for your answer. I'll give it a try

I forgot to mention that you might need to add --no-half-controlnet - it's the ControlNet version of --no-half, which seems necessary to run on the CPU.

That is very likely the reason. I'll try it and report back. Thank you very much

@catboxanon catboxanon added bug Report of a confirmed bug and removed bug-report Report of a bug, yet to be confirmed labels Nov 29, 2023
@wfjsw
Contributor

wfjsw commented Dec 2, 2023

The issue also occurs with regular NVIDIA GPUs, so I think it may have nothing to do with the use-cpu arg.

@read-0nly
Contributor

The issue also occurs with general NVIDIA GPUs so I think it may have nothing to do with use-cpu arg.

Same error text, "but found at least two devices, cpu and cuda:0!", or are the two devices found different?

If it's the same text and you're not using use-cpu, then something must be falling back to the CPU for some reason for it to be listed in the error. What are your parameters, and what's the top of the stack trace when it throws the error? Maybe you have an extension with fallback behavior? I have an NVIDIA GPU but only get this error when using use-cpu.

@wfjsw
Contributor

wfjsw commented Dec 3, 2023

The issue also occurs with regular NVIDIA GPUs, so I think it may have nothing to do with the use-cpu arg.

Same error text, "but found at least two devices, cpu and cuda:0!", or are the two devices found different?

If it's the same text and you're not using use-cpu, then something must be falling back to the CPU for some reason for it to be listed in the error. What are your parameters, and what's the top of the stack trace when it throws the error? Maybe you have an extension with fallback behavior? I have an NVIDIA GPU but only get this error when using use-cpu.

Same error text and stack traces. It also went through sd-webui-controlnet and had AnimateDiff enabled.

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a0
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
current transparent-background 1.2.9
Launching Web UI with arguments: --medvram-sdxl --theme dark --xformers --opt-channelslast --api --autolaunch
=================================================================================
You are running xformers 0.0.19.
The program is tested to work with xformers 0.0.20.
To reinstall the desired version, run with commandline flag --reinstall-xformers.

Use --skip-version-check commandline argument to disable this check.
=================================================================================
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default

========================= a1111-sd-webui-lycoris =========================
Starting from stable-diffusion-webui version 1.5.0
a1111-sd-webui-lycoris extension is no longer needed

All its features have been integrated into the native LoRA extension
LyCORIS models can now be used as if there are regular LoRA models

This extension has been automatically deactivated
Please remove this extension
==========================================================================

Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 23.11.1, num models: 9
[AddNet] Updating model hashes...
[AddNet] Updating model hashes...
2023-12-03 03:02:20,707 - ControlNet - INFO - ControlNet v1.1.419
ControlNet preprocessor location: D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-controlnet\annotator\downloads
2023-12-03 03:02:20,842 - ControlNet - INFO - ControlNet v1.1.419
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [d819c8be6b] from D:\sd-webui-aki\sd-webui-aki-v4.1\models\Stable-diffusion\majicmixRealistic_v4.safetensors
2023-12-03 03:02:22,270 - AnimateDiff - INFO - Injecting LCM to UI.
2023-12-03 03:02:22,872 - AnimateDiff - INFO - Hacking i2i-batch.
Running on local URL:  http://127.0.0.1:7860
Creating model from config: D:\sd-webui-aki\sd-webui-aki-v4.1\configs\v1-inference.yaml
Loading VAE weights specified in settings: D:\sd-webui-aki\sd-webui-aki-v4.1\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 3.5s (load weights from disk: 1.7s, create model: 0.3s, apply weights to model: 1.1s, load VAE: 0.1s, calculate empty prompt: 0.1s).

To create a public link, set `share=True` in `launch()`.
Startup time: 25.3s (prepare environment: 9.2s, import torch: 3.8s, import gradio: 1.2s, setup paths: 0.7s, initialize shared: 0.4s, other imports: 0.6s, setup codeformer: 0.1s, load scripts: 4.9s, create ui: 1.4s, gradio launch: 2.4s, app_started_callback: 0.5s).
2023-12-03 03:03:09,599 - ControlNet - INFO - Preview Resolution = 512
2023-12-03 03:03:19,571 - ControlNet - INFO - Loading model: control_v11f1p_sd15_depth [cfd03158]
2023-12-03 03:03:20,009 - ControlNet - INFO - Loaded state_dict from [D:\sd-webui-aki\sd-webui-aki-v4.1\models\ControlNet\control_v11f1p_sd15_depth.pth]
2023-12-03 03:03:20,009 - ControlNet - INFO - controlnet_default_config
2023-12-03 03:03:22,302 - ControlNet - INFO - ControlNet model control_v11f1p_sd15_depth [cfd03158] loaded.
2023-12-03 03:03:22,373 - ControlNet - INFO - Loading preprocessor: depth
2023-12-03 03:03:22,374 - ControlNet - INFO - preprocessor resolution = 512
2023-12-03 03:03:22,923 - ControlNet - INFO - ControlNet Hooked - Time = 3.8737494945526123

0: 640x448 1 face, 67.8ms
Speed: 2.0ms preprocess, 67.8ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 448)
2023-12-03 03:03:36,832 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-12-03 03:03:36,833 - ControlNet - INFO - Loading preprocessor: depth
2023-12-03 03:03:36,833 - ControlNet - INFO - preprocessor resolution = 512
2023-12-03 03:03:36,910 - ControlNet - INFO - ControlNet Hooked - Time = 0.2124333381652832
2023-12-03 03:03:42,439 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-12-03 03:03:42,441 - ControlNet - INFO - Loading preprocessor: depth
2023-12-03 03:03:42,441 - ControlNet - INFO - preprocessor resolution = 512
2023-12-03 03:03:42,486 - ControlNet - INFO - ControlNet Hooked - Time = 0.05086398124694824

0: 640x448 1 face, 66.8ms
Speed: 2.0ms preprocess, 66.8ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 448)
2023-12-03 03:03:55,246 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-12-03 03:03:55,248 - ControlNet - INFO - Loading preprocessor: depth
2023-12-03 03:03:55,248 - ControlNet - INFO - preprocessor resolution = 512
2023-12-03 03:03:55,293 - ControlNet - INFO - ControlNet Hooked - Time = 0.04989433288574219
2023-12-03 03:06:03,301 - AnimateDiff - INFO - AnimateDiff process start.
2023-12-03 03:06:03,301 - AnimateDiff - INFO - Loading motion module mm_sd_v15_v2.ckpt from D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-animatediff\model\mm_sd_v15_v2.ckpt
2023-12-03 03:06:03,907 - AnimateDiff - INFO - Guessed mm_sd_v15_v2.ckpt architecture: MotionModuleType.AnimateDiffV2
2023-12-03 03:06:06,584 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-12-03 03:06:07,605 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet middle block.
2023-12-03 03:06:07,605 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet input blocks.
2023-12-03 03:06:07,606 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet output blocks.
2023-12-03 03:06:07,606 - AnimateDiff - INFO - Setting DDIM alpha.
2023-12-03 03:06:07,610 - AnimateDiff - INFO - Injection finished.
2023-12-03 03:06:07,610 - AnimateDiff - INFO - Hacking loral to support motion lora
2023-12-03 03:06:07,610 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
2023-12-03 03:06:07,610 - AnimateDiff - INFO - Hacking ControlNet.
2023-12-03 03:06:07,621 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-12-03 03:06:07,622 - ControlNet - INFO - Loading preprocessor: depth
2023-12-03 03:06:07,623 - ControlNet - INFO - preprocessor resolution = 512
2023-12-03 03:06:07,701 - ControlNet - INFO - ControlNet Hooked - Time = 0.08377456665039062
*** Error completing request
*** Arguments: ('task(gj8o5m70yq4yyhy)', '(sky1.6),(field:1.6),(grass:1.6),samira \\(league of legends\\),league of legends,1girl,jewelry,tattoo,eyepatch,earrings,green eyes,braid,long hair,dark skin,gloves,armor,navel,bracelet,lips,hair over shoulder,smile,looking at viewer,arm tattoo,mole,mole above mouth,<lora:沙弥啦:1>,best quality,masterpiece,illustration,an extremely delicate and beautiful,extremely detailed,CG,unity,8k wallpaper,Amazing,finely detail,masterpiece,best quality,official art,extremely detailed CG unity 8k wallpaper,absurdres,incredibly absurdres,huge filesize,ultra-detailed,highres,extremely detailed,beautiful detailed girl,extremely detailed eyes and face,beautiful detailed eyes,light on face,(pureerosface_v1:0.5),(ulzzang-6500-v1.1:0.5),', 'NSFW,sketches,(worst quality:2),(low quality:2),(normal quality:2),lowres,normal quality,((monochrome)),((grayscale)),skin spots,acnes,skin blemishes,bad anatomy,(long hair:1.4),DeepNegative,(fat:1.2),facing away,looking away,tilted head,{Multiple people},lowres,bad anatomy,bad hands,text,error,missing fingers,extra digit,fewer digits,cropped,worstquality,low quality,normal quality,jpegartifacts,signature,watermark,username,blurry,bad feet,cropped,poorly drawn hands,poorly drawn face,mutation,deformed,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,extra fingers,fewer digits,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,text,error,missing fingers,missing arms,missing legs,extra digit,extra arms,extra leg,extra foot,(bad-hands-5:0.8),(bad_prompt_version2:0.8),', [], 20, 'DPM++ 2M alt Karras', 1, 1, 7, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000250F145AC80>, 0, False, '', 0.8, 3456605177, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': 
'<lora:韩国脸:1>,', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': 
()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000250F1704FA0>, UiControlNetUnit(enabled=True, module='depth_midas', model='control_v11f1p_sd15_depth [cfd03158]', weight=1, image={'image': array([[[145, 125, 118],
***         [145, 125, 118],
***         [145, 125, 118],
***         ...,
***         [199, 177, 169],
***         [199, 177, 169],
***         [199, 177, 169]],
*** 
***        [[145, 125, 118],
***         [145, 125, 118],
***         [145, 125, 118],
***         ...,
***         [199, 177, 169],
***         [199, 177, 169],
***         [199, 177, 169]],
*** 
***        [[145, 125, 118],
***         [145, 125, 118],
***         [145, 125, 118],
***         ...,
***         [199, 177, 169],
***         [199, 177, 169],
***         [199, 177, 169]],
*** 
***        ...,
*** 
***        [[102,  80,  74],
***         [102,  80,  74],
***         [102,  79,  75],
***         ...,
***         [122,  97,  93],
***         [122,  97,  93],
***         [122,  97,  93]],
*** 
***        [[104,  80,  74],
***         [104,  80,  74],
***         [104,  80,  75],
***         ...,
***         [122,  99,  92],
***         [122,  99,  92],
***         [122,  99,  92]],
*** 
***        [[103,  79,  67],
***         [103,  79,  67],
***         [103,  79,  67],
***         ...,
***         [122,  97,  94],
***         [122,  97,  94],
***         [122,  97,  94]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        ...,
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, '🔄', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 'None', '', '', 1, 'FirstGen', False, False, 'Current', False,   1 2 3
*** 0      , False, '', False, 1, False, False, 30, '', False, False, False, '', '', '', '', '', None, None, False, None, None, False, None, None, False, 50, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False) {}
    Traceback (most recent call last):
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 119, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-controlnet\scripts\hook.py", line 420, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_samplers_kdiffusion.py", line 267, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_samplers_kdiffusion.py", line 267, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_samplers_kdiffusion.py", line 59, in sample_dpmpp_2m_alt
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 269, in mm_cfg_forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-controlnet\scripts\hook.py", line 827, in forward_webui
        raise e
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-controlnet\scripts\hook.py", line 824, in forward_webui
        return forward(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-controlnet\scripts\hook.py", line 561, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions\sd-webui-controlnet\scripts\cldm.py", line 300, in forward
        guided_hint = self.input_hint_block(hint, emb, context)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 102, in forward
        x = layer(x)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "D:\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)
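The traceback bottoms out in `F.conv2d` receiving an input on `cuda:0` while the layer's weight is still on the CPU (or vice versa). A minimal sketch of that mismatch and the usual remedy, independent of the webui code itself (`conv_on_input_device` is a hypothetical helper, not a webui function):

```python
import torch
import torch.nn as nn

# Sketch of the failure mode above: F.conv2d is called with an input on one
# device while the layer's parameters live on another, which raises
# "Expected all tensors to be on the same device". The usual fix is to move
# the module to the input's device before calling it.
def conv_on_input_device(conv: nn.Conv2d, x: torch.Tensor) -> torch.Tensor:
    if conv.weight.device != x.device:
        conv = conv.to(x.device)  # moves weight and bias to the input's device
    return conv(x)
```

In the webui case the question is which extension left the ControlNet hint block (or its input) on the wrong device, not how to patch a single conv.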

---

@read-0nly
Contributor

read-0nly commented Dec 3, 2023

Interesting... The error definitely points to something (either SD itself, ControlNet, or AnimateDiff) being shunted to the CPU. I tried your arguments and don't get the same error, nor am I seeing that shunting happen. I don't have AnimateDiff, but I don't see why it would shunt to the CPU when your arguments don't cause that for either SD or CN.

sd-webui-aki, where did that come from? This repo is stable-diffusion-webui, so I'm wondering if there are code differences between the two. I'd be willing to install sd-webui-aki to troubleshoot it and figure out what's going on.

Are you comfortable with Python? If you can track down the devices.py file in your setup, you could print() the caller when devices are fetched to see who's getting shunted. If you can get me a copy of just that devices.py file, I could send you back a debug version to probe this a bit (it would report who is requesting a device and what they get in the output).
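As a sketch of that suggestion, assuming only that devices.py exposes some function that hands out torch devices (the exact function name varies, so treat `get_optimal_device` below as an assumption), one could wrap it so every call site gets logged:

```python
import traceback

# Hypothetical debug wrapper: decorate whatever function in devices.py hands
# out devices, so each caller and the device it receives are printed.
def trace_device_requests(get_device_fn):
    def wrapped(*args, **kwargs):
        device = get_device_fn(*args, **kwargs)
        # limit=3 keeps the stack small; [0] is the oldest retained frame,
        # i.e. roughly the code that asked for a device
        caller = traceback.extract_stack(limit=3)[0]
        print(f"[devices] {caller.filename}:{caller.lineno} -> {device}")
        return device
    return wrapped
```

Applied as e.g. `devices.get_optimal_device = trace_device_requests(devices.get_optimal_device)` (name assumed), the log would show which extension ends up with `cpu`.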

@read-0nly
Contributor

I'd be curious whether you get the issue when not using AnimateDiff - say, just ControlNet and Stable Diffusion. From what I understand of the logs, ControlNet does its thing quite happily and the crash happens when AnimateDiff tries to grab hold of it - if AnimateDiff alone is having the issue, then I'll take a look at that.

@wfjsw
Contributor

wfjsw commented Dec 3, 2023

sd-webui-aki, where did that come from? This repro is stable-diffusion-webui, I'm wondering if there's code differences between the two, I'd be willing to install sd-webui-aki to troubleshoot it and figure out what's going on.

It's literally stable-diffusion-webui packaged with Python, torch, etc. The code is the same.

I am not experiencing it either, or I'd debug it on my own. It was some random users on a forum who hit this.

I'd be curious if you get the issue when not using AnimateDiff? Like, if you just use Controlnet and StableDiffusion. From what I'm understanding of the logs it looks like controlnet does it's thing quite happily and the crash is happening when AnimateDiff tries to grab hold of it - if AnimateDiff alone is having the issue then I'll take a look at that.

I went through my error-report database and it seems the error only happens when both ControlNet and AnimateDiff are involved; I can't find any instance where only ControlNet or only AnimateDiff is in use. Also, the number of such events and users is pretty low (somewhere around 30-40 in 14 days), so it is a pretty rare one.

FYI on one of the instances these extensions are installed:

extensions
A TemporalKit @ main/f46d255c
A a1111-sd-webui-tagcomplete @ main/fcacf7dd
A adetailer @ main/261f9c1e
A deforum-for-automatic1111-webui @ automatic1111-webui/d3b00b3c
A ebsynth_utility @ main/8ff9fbf2
A multidiffusion-upscaler-for-automatic1111 @ main/fbb24736
A sd-webui-IS-NET-pro @ main/1ac270e4
A sd-webui-additional-networks @ main/e9f3d622
A sd-webui-animatediff @ master/d2e77f40
A sd-webui-controlnet @ main/10bd9b25
A sd-webui-infinite-image-browsing @ main/859b28e4
A sd-webui-lora-block-weight @ main/df10cb42
A sd-webui-model-converter @ main/5007e6f3
A sd-webui-openpose-editor @ main/a3d13af7
A sd-webui-prompt-all-in-one @ main/07b3515e
A sd-webui-segment-anything @ master/d80220ec
A sd-webui-supermerger @ main/ab9d9ccb
A stable-diffusion-webui-localization-zh_Hans @ master/ffb06b0d
A stable-diffusion-webui-model-toolkit @ master/cf824587
A stable-diffusion-webui-wd14-tagger @ master/3fb06011
A ultimate-upscale-for-automatic1111 @ master/728ffcec
A LDSR @ None/
A Lora @ None/
A ScuNET @ None/
A SwinIR @ None/
A canvas-zoom-and-pan @ None/
A extra-options-section @ None/
A mobile @ None/
A prompt-bracket-checker @ None/
pip packages
name version
absl-py 1.4.0
accelerate 0.21.0
addict 2.4.0
aenum 3.1.12
aiofiles 23.1.0
aiohttp 3.8.4
aiosignal 1.3.1
aliyun-python-sdk-alimt 3.2.0
aliyun-python-sdk-core 2.13.10
altair 4.2.2
antlr4-python3-runtime 4.9.3
anyio 3.6.2
asttokens 2.4.1
async-timeout 4.0.2
attrs 23.1.0
backcall 0.2.0
basicsr 1.4.2
beautifulsoup4 4.12.2
blendmodes 2022
boltons 23.0.0
boto3 1.26.155
botocore 1.29.155
cachetools 5.3.0
certifi 2023.5.7
cffi 1.15.1
chardet 4.0.0
charset-normalizer 3.1.0
clean-fid 0.1.35
click 8.1.3
clip 1.0
color-matcher 0.5.0
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.0.7
cryptography 41.0.1
cssselect2 0.7.0
cycler 0.11.0
ddt 1.5.0
decorator 4.0.11
deprecation 2.1.0
diffusers 0.17.1
dill 0.3.6
docutils 0.20.1
easydict 1.11
einops 0.4.1
entrypoints 0.4
exceptiongroup 1.1.3
executing 2.0.1
facexlib 0.3.0
fastapi 0.94.0
ffmpeg-python 0.2.0
ffmpy 0.3.0
filelock 3.12.0
filterpy 1.4.5
flatbuffers 23.3.3
font-roboto 0.0.1
fonts 0.0.3
fonttools 4.39.3
freetype-py 2.3.0
frozenlist 1.3.3
fsspec 2023.4.0
ftfy 6.1.1
future 0.18.3
fvcore 0.1.5.post20221221
gdown 4.7.1
gfpgan 1.3.8
gitdb 4.0.10
gitpython 3.1.32
google-auth 2.17.3
google-auth-oauthlib 1.0.0
gradio 3.41.2
gradio_client 0.5.0
grpcio 1.54.0
h11 0.12.0
httpcore 0.15.0
httpx 0.24.1
huggingface-hub 0.14.1
humanfriendly 10.0
idna 2.10
imageio 2.33.0
imageio-ffmpeg 0.4.9
importlib-metadata 6.6.0
importlib-resources 6.1.1
inflection 0.5.1
iopath 0.1.9
ipython 8.16.1
jedi 0.19.1
jinja2 3.1.2
jmespath 0.10.0
joblib 1.2.0
jsonmerge 1.8.0
jsonschema 4.17.3
kiwisolver 1.4.4
kornia 0.6.7
lark 1.1.2
lazy_loader 0.2
lightning-utilities 0.8.0
linkify-it-py 2.0.2
llvmlite 0.40.0
lmdb 1.4.1
lpips 0.1.4
lxml 4.9.2
markdown 3.4.3
markdown-it-py 2.2.0
markupsafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
mdit-py-plugins 0.3.3
mdurl 0.1.2
mediapipe 0.10.8
moviepy 0.2.3.2
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.14
mypy-extensions 1.0.0
networkx 3.1
numba 0.57.0
numpy 1.23.5
oauthlib 3.2.2
omegaconf 2.2.3
onnxruntime-gpu 1.14.1
open-clip-torch 2.20.0
openai 0.27.8
opencv-contrib-python 4.7.0.72
opencv-python 4.8.1.78
orjson 3.8.11
packaging 23.1
pandas 2.0.1
parso 0.8.3
pathos 0.3.0
pickleshare 0.7.5
piexif 1.1.3
pillow 9.5.0
pip 23.1.2
platformdirs 4.0.0
portalocker 2.7.0
pox 0.3.2
ppft 1.7.6.6
prompt-toolkit 3.0.39
protobuf 3.20.0
psutil 5.9.5
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyasn1 0.5.0
pyasn1-modules 0.3.0
pyav 11.4.1
pycairo 1.23.0
pycparser 2.21
pydantic 1.10.7
pydub 0.25.1
pyexecjs 1.5.1
pyfunctional 1.4.3
pygments 2.15.1
pyparsing 3.0.9
pyre-extensions 0.0.29
pyreadline3 3.4.1
pyrsistent 0.19.3
pysocks 1.7.1
python-dateutil 2.8.2
python-dotenv 1.0.0
python-multipart 0.0.6
pytorch-lightning 1.9.4
pytz 2023.3
pywavelets 1.4.1
pywin32 306
pyyaml 6.0
realesrgan 0.3.0
regex 2023.5.5
reportlab 4.0.0
requests 2.25.1
requests-oauthlib 1.3.1
resize-right 0.0.2
rich 13.6.0
rlpycairo 0.2.0
rsa 4.9
s3transfer 0.6.1
safetensors 0.3.1
scenedetect 0.6.2
scikit-image 0.21.0
scikit-learn 1.2.2
scipy 1.10.1
seaborn 0.12.2
semantic-version 2.10.0
send2trash 1.8.2
sentencepiece 0.1.99
setuptools 65.5.0
six 1.16.0
smmap 5.0.0
sniffio 1.3.0
sounddevice 0.4.6
soupsieve 2.4.1
stack-data 0.6.3
starlette 0.26.1
svglib 1.5.1
sympy 1.11.1
tabulate 0.9.0
tb-nightly 2.14.0a20230507
tensorboard-data-server 0.7.0
termcolor 2.3.0
thop 0.1.1-2209072238
threadpoolctl 3.1.0
tifffile 2023.4.12
timm 0.9.2
tinycss2 1.2.1
tokenizers 0.13.3
tomesd 0.1.3
tomli 2.0.1
toolz 0.12.0
torch 2.0.0+cu118
torchdiffeq 0.2.3
torchmetrics 0.11.4
torchsde 0.2.5
torchvision 0.15.1+cu118
tqdm 4.66.1
traitlets 5.12.0
trampoline 0.1.2
transformers 4.30.2
translators 5.7.6
transparent-background 1.2.9
typing-inspect 0.8.0
typing_extensions 4.5.0
tzdata 2023.3
uc-micro-py 1.0.2
ultralytics 8.0.220
urllib3 1.26.15
uvicorn 0.22.0
wcwidth 0.2.6
webencodings 0.5.1
websockets 11.0.3
werkzeug 2.3.3
wheel 0.40.0
xformers 0.0.19
yacs 0.1.8
yapf 0.33.0
yarl 1.9.2
zipp 3.15.0

@kkget
Author

kkget commented Dec 3, 2023

I'd be curious if you get the issue when not using AnimateDiff? Like, if you just use ControlNet and Stable Diffusion. From what I'm understanding of the logs, ControlNet does its thing quite happily and the crash happens when AnimateDiff tries to grab hold of it - if AnimateDiff alone is having the issue, I'll take a look.

There are no errors when using ControlNet alone or AnimateDiff alone; the issue only appears when the two are combined.

@read-0nly
Contributor

Thanks for the answers - i'll install and test animatediff tomorrow to see if I can repro the issue.

@read-0nly
Contributor

So I found something relevant and perhaps worth testing - but I can't get it working myself (I'm running this raw on Windows, not in a Docker instance). When I just use --xformers it falls back to Doggettx; if I force xformers, I get an error that xformers fails to load because it's missing triton, and triton doesn't exist on Windows. I can import xformers in an interactive Python shell, but not xformers.ops (that throws the triton error), and it then hard-crashes on generation. That said, this might be worth testing on your end; from AnimateDiff's docs:

Attention

Adding --xformers / --opt-sdp-attention to your command lines can significantly reduce VRAM and improve speed. However, due to a bug in xformers, you may or may not get CUDA error. If you get CUDA error, please either completely switch to --opt-sdp-attention, or preserve --xformers -> go to Settings/AnimateDiff -> choose "Optimize attention layers with sdp (torch >= 2.0.0 required)".

@kkget, were you using xformers too? If you disable xformers, or use --opt-sdp-attention instead, does that affect the error at all? It might simply be the issue listed in AnimateDiff's repo.

That said, this might be better approached from AnimateDiff's repo on top of this one; from what you're both saying, it seems to be an issue with AnimateDiff itself. I did find a few places where CPU-shunting could be happening, so I'm going to test that; if it still fails, I'll bite the bullet, set this up on Linux, and test xformers myself with triton installed.
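The "can import xformers but not xformers.ops" situation above is easy to probe with a small generic check (this is not webui code, just a sketch):

```python
def xformers_ops_usable() -> bool:
    """Return True only if xformers.ops imports cleanly.

    On Windows, `import xformers` can succeed while `import xformers.ops`
    fails (e.g. with a missing-triton error), so checking the top-level
    package is not enough - probe the submodule itself.
    """
    try:
        import xformers.ops  # noqa: F401
    except Exception:
        return False
    return True
```

If this returns False, the webui's xformers attention path cannot work regardless of the `--xformers` flag.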

@wfjsw
Contributor

wfjsw commented Dec 3, 2023

Triton is not involved in any way here so it is fine to leave that alone.

xformers works natively on Windows; just make sure its version matches your torch build.

I found that both SDP and xformers hit this in a similar fashion, so it might be irrelevant.

@wfjsw
Contributor

wfjsw commented Dec 3, 2023

continue-revolution/sd-webui-animatediff#302 (comment)

Fair enough. This is a known issue when "Pad prompt/negative prompt to be same length" is not enabled.
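For context on that setting: when the positive and negative prompts tokenize to different lengths, their conditioning tensors can't be stacked into one batch, so the sampler runs them through separate forward passes - and with AnimateDiff hooked in, one of those paths can end up with mismatched devices. A toy sketch of the equalization idea, using plain token-id lists (the real webui pads CLIP embedding chunks, not raw ids):

```python
# Toy illustration of "pad prompt/negative prompt to be same length":
# equalize sequence lengths so cond and uncond can be batched together.
def pad_to_same_length(cond, uncond, pad_id=0):
    target = max(len(cond), len(uncond))

    def pad(seq):
        return seq + [pad_id] * (target - len(seq))

    return pad(cond), pad(uncond)
```

With both sequences the same length, a single batched forward pass suffices and the divergent code path is never taken.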

@kkget
Author

kkget commented Dec 4, 2023

Triton is not involved in any way here so it is fine to leave that alone.

xformers works natively on Windows; just make sure its version matches your torch build.

I found that both SDP and xformers hit this in a similar fashion, so it might be irrelevant.

But when I use AnimateDiff alone, I don't get the error.

So I found something relevant and perhaps worth testing - but I can't get it working myself (I'm running this raw in windows, not using a docker instance). When I just use --xformers it falls back to doggettx, if I force xformers i get an error that xformers fails to load because it's missing triton and triton doesn't exist on windows. I can import xformers in an interactive python shell, but not xformers.ops (this throws the triton error). It then hard-crashes on generation. That said, this might be worth testing on your end, from animatediff's docs :

Attention

Adding --xformers / --opt-sdp-attention to your command lines can significantly reduce VRAM and improve speed. However, due to a bug in xformers, you may or may not get CUDA error. If you get CUDA error, please either completely switch to --opt-sdp-attention, or preserve --xformers -> go to Settings/AnimateDiff -> choose "Optimize attention layers with sdp (torch >= 2.0.0 required)".

kkget were you using xformers too? If you don't use xformers or use opt-sdp-attention instead of xformers does it affect the error at all? It might simply be the issue listed in animatediff's repo.

That said, this might be better approached from AnimateDiff's repo, on top of this one. It seems to be an issue with AnimateDiff itself from what you guys are saying. I did find a few places where CPU-shunting could be happening, gonna test that then if it still fails I'll bite the bullet and try to get this set up in linux so I can test xformers myself with triton installed.
Yes, I enabled xformers in the settings, but my torch version is 2.0.0+cu118.

@kkget
Author

kkget commented Dec 4, 2023

Thank you very much. I tried this setting and it works now.

@catboxanon catboxanon added bug-report Report of a bug, yet to be confirmed and removed bug Report of a confirmed bug labels Dec 4, 2023
martianunlimited added a commit to martianunlimited/stable-diffusion-webui-ux that referenced this issue Jan 25, 2024
* added option to play notification sound or not

* Convert (emphasis) to (emphasis:1.1)

per @SirVeggie's suggestion

* Make attention conversion optional

Fix square brackets multiplier

* put notification.mp3 option at the end of the page

* more general case of adding an infotext when no images have been generated

* use shallow copy for AUTOMATIC1111#13535

* remove duplicated code

* support webui.settings.bat

* Start / Restart generation by Ctrl (Alt) + Enter

Add ability to interrupt current generation and start generation again by Ctrl (Alt) + Enter

* add an option to not print stack traces on ctrl+c.

* repair unload sd checkpoint button

* respect keyedit_precision_attention setting when converting from old (((attention))) syntax

* Update script.js

Exclude lambda

* Update script.js

LF instead CRLF

* Update script.js

* Add files via upload

LF

* wip incorrect OFT implementation

* inference working but SLOW

* faster by using cached R in forward

* faster by calculating R in updown and using cached R in forward

* refactor: fix constraint, re-use get_weight

* style: formatting

* style: fix ambiguous variable name

* rework some of changes for emphasis editing keys, force conversion of old-style emphasis

* fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1)

* fix bug when using --gfpgan-models-path

* fix Blank line contains whitespace

* refactor: use forward hook instead of custom forward

* fix: return orig weights during updown, merge weights before forward

* fix: support multiplier, no forward pass hook

* style: cleanup oft

* fix: use merge_weight to cache value

* refactor: remove used OFT functions

* fix: multiplier applied twice in finalize_updown

* style: conform style

* Update prompts_from_file script to allow concatenating entries with the general prompt.

* linting issue

* call state.jobnext() before postproces*()

* Fix AUTOMATIC1111#13796

Fix comment error that makes understanding scheduling more confusing.

* test implementation based on kohaku diag-oft implementation

* detect diag_oft type

* no idea what i'm doing, trying to support both type of OFT, kblueleaf diag_oft has MultiheadAttn which kohya's doesn't?, attempt create new module based off network_lora.py, errors about tensor dim mismatch

* added accordion settings options

* Fix parenthesis auto selection

Fixes AUTOMATIC1111#13813

* Update requirements_versions.txt

* skip multihead attn for now

* refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb

* refactor: use same updown for both kohya OFT and LyCORIS diag-oft

* refactor: remove unused function

* correct a typo

modify "defaul" to "default"

* add a visible checkbox to input accordion

* eslint

* properly apply sort order for extra network cards when selected from dropdown
allow selection of default sort order in settings
remove 'Default' sort order, replace with 'Name'

* Add SSD-1B as a supported model

* Added memory clearance after deletion

* Use devices.torch_gc() instead of empty_cache()

* added compact prompt option

* compact prompt option disabled by default

* linter

* more changes for AUTOMATIC1111#13865: fix formatting, rename the function, add comment and add a readme entry

* fix img2img_tabs error

* fix exception related to the pix2pix

* Add option to set notification sound volume

* fix pix2pix producing bad results

* moved nested with to single line to remove extra tabs

* removed changes that weren't merged properly

* multiline with statement for readibility

* Update README.md

Modify the stablediffusion dependency address

* Update README.md

Modify the stablediffusion dependency address

* - opensuse compatibility

* Enable prompt hotkeys in style editor

* Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+

* fix added accordion settings options

* ExitStack as alternative to suppress

* implementing script metadata and DAG sorting mechanism

* populate loaded_extensions from extension list instead

* reverse the extension load order so builtin extensions load earlier natively

* add hyperTile

https://github.com/tfernd/HyperTile

* remove the assumption of same name

* allow comma and whitespace as separator

* fix

* bug fix

* dir buttons start with / so only the correct dir will be shown and not dirs with a substrings as name from the dir

* Lint

* Fixes generation restart not working for some users when 'Ctrl+Enter' is pressed

* Adds 'Path' sorting for Extra network cards

* fix gradio video component and canvas fit for inpaint

* hotfix: call shared.state.end() after postprocessing done

* Implement Hypertile

Co-Authored-By: Kieran Hunt <[email protected]>

* copy LDM VAE key from XL

* fix: ignore calc_scale() for COFT which has very small alpha

* feat: LyCORIS/kohya OFT network support

* convert/add hypertile options

* fix ruff - add newline

* Adds tqdm handler to logging_config.py for progress bar integration

* Take into account tqdm not being installed before first boot for logging

* actually adds handler to logging_config.py

* Fix critical issue - unet apply

* Fix inverted option issue

I'm pretty sure I was sleepy while implementing this

* set empty value for SD XL 3rd layer

* fix double gc and decoding with unet context

* feat: fix randn found element of type float at pos 2

Signed-off-by: storyicon <[email protected]>

* use metadata.ini for meta filename

* Option to show batch img2img results in UI

shared.opts.img2img_batch_show_results_limit
limit the number of images return to the UI for batch img2img
default limit 32
0 no images are shown
-1 unlimited, all images are shown

* save sysinfo as .json

GitHub now allows uploading of .json files in issues

* rework extensions metadata: use custom sorter that doesn't mess the order as much and ignores cyclic errors, use classes with named fields instead of dictionaries, eliminate some duplicated code

* added option for default behavior of dir buttons

* Add FP32 fallback support on sd_vae_approx

This tries to execute interpolate with FP32 if it failed.

Background is that
on some environment such as Mx chip MacOS devices, we get error as follows:

```
"torch/nn/functional.py", line 3931, in interpolate
        return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, ```--no-half``` doesn't help. Therefore this commit adds an FP32 fallback to solve it.

Note that submodules may require additional modifications. The following is an example modification in another submodule.

```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py

class Upsample(nn.Module):
..snip..
    def forward(self, x):
        assert x.shape[1] == self.channels
        if self.dims == 3:
            x = F.interpolate(
                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
            )
        else:
            try:
                x = F.interpolate(x, scale_factor=2, mode="nearest")
            except RuntimeError:
                # fall back to FP32 when the half-precision kernel is missing
                x = F.interpolate(x.to(th.float32), scale_factor=2, mode="nearest").to(x.dtype)
        if self.use_conv:
            x = self.conv(x)
        return x
..snip..
```

This is the same FP32 fallback approach as in sd_vae_approx.py.

* fix  [Bug]: (Dev Branch) Placing "Dimensions" first in "ui_reorder_list" prevents start AUTOMATIC1111#14047

* Update ruff to 0.1.6

* Simplify restart_sampler (suggested by ruff)

* use extension name for determining an extension is installed in the index

* Move exception_records related methods to errors.py

* remove traceback in sysinfo

* move file

* rework hypertile into a built-in extension

* do not save HTML explanations from options page to config

* fix linter errors

* compact prompt layout: preserve scroll when switching between lora tabs

* json.dump(ensure_ascii=False)

improve json readability

* add categories to settings

* also consider extension url

* add Block component creation callback

* catch uncaught exception with ui creation scripts

prevent total webui crash

* Allow use of multiple styles csv files

* bugfix for warning message (#6)

* bugfix for warning message (#6)

* bugfix for warning message

* bugfix error message

* Allow use of multiple styles csv files
* AUTOMATIC1111#14122
Fix edge case where style text has multiple {prompt} placeholders
* AUTOMATIC1111#14005

* Support XYZ scripts / split hires path from unet

* cache divisors / fix ruff

* fix ruff in hypertile_xyz.py

* fix ruff - set comprehension

* hypertile_xyz: we don't need isnumeric check for AxisOption

* Update devices.py

fixes an issue where `--use-cpu all` properly makes SD run on the CPU but leaves ControlNet (and presumably other extensions) pointed at the GPU, causing a crash in ControlNet due to a device mismatch between SD and CN

AUTOMATIC1111#14097
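The underlying idea of the fix is that extensions should read the device from one shared source of truth instead of assuming CUDA. A minimal sketch of that pattern, with illustrative names rather than the actual webui API:

```python
# Illustrative sketch: one shared device-selection helper that core code
# and extensions both consult, so "--use-cpu all" affects everyone.
cmd_opts_use_cpu = {"all"}  # stand-in for the parsed --use-cpu option

def get_device_for(task: str) -> str:
    """Return 'cpu' when the task (or everything) was forced to CPU."""
    if "all" in cmd_opts_use_cpu or task in cmd_opts_use_cpu:
        return "cpu"
    return "cuda"
```

An extension such as ControlNet would call `get_device_for("controlnet")` rather than hard-coding CUDA, avoiding the cpu/cuda mismatch described in the crash above.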

* fix Auto focal point crop for opencv >= 4.8.x

autocrop.download_and_cache_models:
in opencv >= 4.8 the face detection model was updated;
downloads the model appropriate for the installed opencv version;
returns the model path or raises an exception

* reformat file with uniform indentation

* Revert "Add FP32 fallback support on sd_vae_approx"

This reverts commit 58c1954.
Since the modification is expected to move to mac_specific.py
(AUTOMATIC1111#14046 (comment))

* Add FP32 fallback support on torch.nn.functional.interpolate

This tries to execute interpolate with FP32 if the FP16 attempt fails.

The background is that on some environments, such as Mx-chip macOS devices, we get an error as follows:

```
"torch/nn/functional.py", line 3931, in interpolate
        return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, ```--no-half``` doesn't help. Therefore this commit adds an FP32 fallback to solve it.

Note that ```upsample_nearest2d``` is called from ```torch.nn.functional.interpolate```,
and the fallback for torch.nn.functional.interpolate is necessary in
```modules/sd_vae_approx.py```'s ```VAEApprox.forward``` and
```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py```'s ```Upsample.forward```.
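The fallback pattern in both call sites is the same: try the op, and on a RuntimeError retry with inputs upcast to FP32, casting the result back afterwards. A torch-free sketch of the pattern, with the missing half-precision kernel simulated:

```python
def upsample_kernel(x, dtype):
    # Simulated backend kernel with no half-precision implementation,
    # standing in for upsample_nearest2d on affected devices.
    if dtype == "half":
        raise RuntimeError("\"upsample_nearest2d\" not implemented for 'Half'")
    return [v * 2 for v in x]

def upsample_with_fp32_fallback(x, dtype):
    try:
        return upsample_kernel(x, dtype)
    except RuntimeError:
        # Upcast and retry; with real tensors the result would then be
        # cast back to the original dtype (result.to(x.dtype) in torch).
        return upsample_kernel(x, "float32")
```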

* Fix the Ruff error about unused import

* Initial IPEX support

* add max-height/width to global-popup-inner

prevent the pop-up from becoming so big that exiting it is impossible

* Close popups with escape key

* Fix bug where is_using_v_parameterization_for_sd2 fails because the sd_hijack is only partially undone

* Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load

* infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page

* Disable ipex autocast due to its bad perf

* split UI settings page into many

* put code that can cause an exception into its own function for AUTOMATIC1111#14120

* Fix fp64

* extras tab batch: actually use original filename
preprocessing upscale: do not do an extra upscale step if it's not needed

* Remove webui-ipex-user.bat

* remove Train/Preprocessing tab and put all its functionality into extras batch images mode

* potential fix for AUTOMATIC1111#14172

* alternate implementation for unet forward replacement that does not depend on hijack being applied

* Fix `save_samples` being checked early when saving masked composite

* Re-add setting lost as part of e294e46

* rework mask and mask_composite logic

* Add import_hook hack to work around basicsr incompatibility

Fixes AUTOMATIC1111#13985

* Update launch_utils.py to fix wrong dep. checks and reinstalls

Fixes failing dependency checks for extensions whose package name differs from their import name (for example ffmpeg-python / ffmpeg), which currently causes unneeded reinstalls of packages at runtime.

In fact, with the current code, the same string is used when installing a package and when checking for its presence, as you can see in the following example:

> launch_utils.run_pip("install ffmpeg-python", "required package")
[ Installing required package: "ffmpeg-python" ... ]
[ Installed ]

> launch_utils.is_installed("ffmpeg-python")
False

... which would actually return true with:

> launch_utils.is_installed("ffmpeg")
True
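The root cause is the difference between a pip distribution name (ffmpeg-python) and the module name it installs (ffmpeg): presence checks must use the importable module name. A standard-library sketch of such a check (illustrative, not launch_utils' actual implementation):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    # Check importability by *import* name ('ffmpeg'), which can differ
    # from the pip distribution name used at install time ('ffmpeg-python').
    return importlib.util.find_spec(module_name) is not None
```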

* Lint

* make webui not crash when running with --disable-all-extensions option

* update changelog

* repair old handler for postprocessing API

* repair old handler for postprocessing API in a way that doesn't break interface

* add hypertile infotext

* Merge pull request AUTOMATIC1111#14203 from AUTOMATIC1111/remove-clean_text()

remove clean_text()

* fix Inpaint Image Appears Behind Some UI Elements anapnoe#206

* fix side panel show/hide button hot zone does not use the entire width anapnoe#204

* Merge pull request AUTOMATIC1111#14300 from AUTOMATIC1111/oft_fixes

Fix wrong implementation in network_oft

* Merge pull request AUTOMATIC1111#14296 from akx/paste-resolution

Allow pasting in WIDTHxHEIGHT strings into the width/height fields

* Merge pull request AUTOMATIC1111#14270 from kaalibro/extra-options-elem-id

Assign id for "extra_options". Replace numeric field with slider.

* Merge pull request AUTOMATIC1111#14276 from AUTOMATIC1111/fix-styles

Fix styles

* Merge pull request AUTOMATIC1111#14266 from kaalibro/dev

Re-add setting lost as part of e294e46

* Merge pull request AUTOMATIC1111#14229 from Nuullll/ipex-embedding

[IPEX] Fix embedding and ControlNet

* Merge pull request AUTOMATIC1111#14230 from AUTOMATIC1111/add-option-Live-preview-in-full-page-image-viewer

add option: Live preview in full page image viewer

* Merge pull request AUTOMATIC1111#14216 from wfjsw/state-dict-ref-comparison

change state dict comparison to ref compare

* Merge pull request AUTOMATIC1111#14237 from ReneKroon/dev

AUTOMATIC1111#13354 : solve lora loading issue

* Merge pull request AUTOMATIC1111#14307 from AUTOMATIC1111/default-Falst-js_live_preview_in_modal_lightbox

default False js_live_preview_in_modal_lightbox

* update to 1.7 from upstream

* Update README.md

* Update screenshot.png

* Update CITATION.cff

* update to latest version

* update to latest version

---------

Signed-off-by: storyicon <[email protected]>
Co-authored-by: Gleb Alekseev <[email protected]>
Co-authored-by: missionfloyd <[email protected]>
Co-authored-by: AUTOMATIC1111 <[email protected]>
Co-authored-by: Won-Kyu Park <[email protected]>
Co-authored-by: Khachatur Avanesian <[email protected]>
Co-authored-by: v0xie <[email protected]>
Co-authored-by: avantcontra <[email protected]>
Co-authored-by: David Benson <[email protected]>
Co-authored-by: Meerkov <[email protected]>
Co-authored-by: Emily Zeng <[email protected]>
Co-authored-by: w-e-w <[email protected]>
Co-authored-by: gibiee <[email protected]>
Co-authored-by: Ritesh Gangnani <riteshgangnani10>
Co-authored-by: GerryDE <[email protected]>
Co-authored-by: fuchen.ljl <[email protected]>
Co-authored-by: Alessandro de Oliveira Faria (A.K.A. CABELO) <[email protected]>
Co-authored-by: wfjsw <[email protected]>
Co-authored-by: aria1th <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: kaalibro <[email protected]>
Co-authored-by: anapnoe <[email protected]>
Co-authored-by: AngelBottomless <[email protected]>
Co-authored-by: Kieran Hunt <[email protected]>
Co-authored-by: Lucas Daniel Velazquez M <[email protected]>
Co-authored-by: Your Name <[email protected]>
Co-authored-by: storyicon <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: hidenorly <[email protected]>
Co-authored-by: Aarni Koskela <[email protected]>
Co-authored-by: Charlie Joynt <[email protected]>
Co-authored-by: obsol <[email protected]>
Co-authored-by: Nuullll <[email protected]>
Co-authored-by: MrCheeze <[email protected]>
Co-authored-by: catboxanon <[email protected]>
Co-authored-by: illtellyoulater <[email protected]>
martianunlimited added a commit to martianunlimited/stable-diffusion-webui-ux that referenced this issue Jan 25, 2024
* added option to play notification sound or not

* Convert (emphasis) to (emphasis:1.1)

per @SirVeggie's suggestion

* Make attention conversion optional

Fix square brackets multiplier

* put notification.mp3 option at the end of the page

* more general case of adding an infotext when no images have been generated

* use shallow copy for AUTOMATIC1111#13535

* remove duplicated code

* support webui.settings.bat

* Start / Restart generation by Ctrl (Alt) + Enter

Add ability to interrupt current generation and start generation again by Ctrl (Alt) + Enter

* add an option to not print stack traces on ctrl+c.

* repair unload sd checkpoint button

* respect keyedit_precision_attention setting when converting from old (((attention))) syntax

* Update script.js

Exclude lambda

* Update script.js

LF instead CRLF

* Update script.js

* Add files via upload

LF

* wip incorrect OFT implementation

* inference working but SLOW

* faster by using cached R in forward

* faster by calculating R in updown and using cached R in forward

* refactor: fix constraint, re-use get_weight

* style: formatting

* style: fix ambiguous variable name

* rework some of changes for emphasis editing keys, force conversion of old-style emphasis

* fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1)

* fix bug when using --gfpgan-models-path

* fix Blank line contains whitespace

* refactor: use forward hook instead of custom forward

* fix: return orig weights during updown, merge weights before forward

* fix: support multiplier, no forward pass hook

* style: cleanup oft

* fix: use merge_weight to cache value

* refactor: remove used OFT functions

* fix: multiplier applied twice in finalize_updown

* style: conform style

* Update prompts_from_file script to allow concatenating entries with the general prompt.

* linting issue

* call state.jobnext() before postproces*()

* Fix AUTOMATIC1111#13796

Fix comment error that makes understanding scheduling more confusing.

* test implementation based on kohaku diag-oft implementation

* detect diag_oft type

* no idea what i'm doing, trying to support both types of OFT; kblueleaf's diag_oft has MultiheadAttn, which kohya's doesn't? attempt to create a new module based off network_lora.py; errors about tensor dim mismatch

* added accordion settings options

* Fix parenthesis auto selection

Fixes AUTOMATIC1111#13813

* Update requirements_versions.txt

* skip multihead attn for now

* refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb

* refactor: use same updown for both kohya OFT and LyCORIS diag-oft

* refactor: remove unused function

* correct a typo

modify "defaul" to "default"

* add a visible checkbox to input accordion

* eslint

* properly apply sort order for extra network cards when selected from dropdown
allow selection of default sort order in settings
remove 'Default' sort order, replace with 'Name'

* Add SSD-1B as a supported model

* Added memory clearance after deletion

* Use devices.torch_gc() instead of empty_cache()

* added compact prompt option

* compact prompt option disabled by default

* linter

* more changes for AUTOMATIC1111#13865: fix formatting, rename the function, add comment and add a readme entry

* fix img2img_tabs error

* fix exception related to the pix2pix

* Add option to set notification sound volume

* fix pix2pix producing bad results

* moved nested with to single line to remove extra tabs

* removed changes that weren't merged properly

* multiline with statement for readability

* Update README.md

Modify the stablediffusion dependency address

* Update README.md

Modify the stablediffusion dependency address

* - opensuse compatibility

* Enable prompt hotkeys in style editor

* Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+

* fix added accordion settings options

* ExitStack as alternative to suppress

* implementing script metadata and DAG sorting mechanism

* populate loaded_extensions from extension list instead

* reverse the extension load order so builtin extensions load earlier natively

* add hyperTile

https://github.com/tfernd/HyperTile

martianunlimited added a commit to martianunlimited/stable-diffusion-webui-ux that referenced this issue Jan 25, 2024
* pull (#11)

* added option to play notification sound or not

* Convert (emphasis) to (emphasis:1.1)

per @SirVeggie's suggestion

* Make attention conversion optional

Fix square brackets multiplier

* put notification.mp3 option at the end of the page

* more general case of adding an infotext when no images have been generated

* use shallow copy for AUTOMATIC1111#13535

* remove duplicated code

* support webui.settings.bat

* Start / Restart generation by Ctrl (Alt) + Enter

Add ability to interrupt current generation and start generation again by Ctrl (Alt) + Enter

* add an option to not print stack traces on ctrl+c.

* repair unload sd checkpoint button

* respect keyedit_precision_attention setting when converting from old (((attention))) syntax

* Update script.js

Exclude lambda

* Update script.js

LF instead CRLF

* Update script.js

* Add files via upload

LF

* wip incorrect OFT implementation

* inference working but SLOW

* faster by using cached R in forward

* faster by calculating R in updown and using cached R in forward

* refactor: fix constraint, re-use get_weight

* style: formatting

* style: fix ambiguous variable name

* rework some of changes for emphasis editing keys, force conversion of old-style emphasis

* fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1)

* fix bug when using --gfpgan-models-path

* fix Blank line contains whitespace

* refactor: use forward hook instead of custom forward

* fix: return orig weights during updown, merge weights before forward

* fix: support multiplier, no forward pass hook

* style: cleanup oft

* fix: use merge_weight to cache value

* refactor: remove used OFT functions

* fix: multiplier applied twice in finalize_updown

* style: conform style

* Update prompts_from_file script to allow concatenating entries with the general prompt.

* linting issue

* call state.jobnext() before postproces*()

* Fix AUTOMATIC1111#13796

Fix comment error that makes understanding scheduling more confusing.

* test implementation based on kohaku diag-oft implementation

* detect diag_oft type

* no idea what i'm doing, trying to support both type of OFT, kblueleaf diag_oft has MultiheadAttn which kohya's doesn't?, attempt create new module based off network_lora.py, errors about tensor dim mismatch

* added accordion settings options

* Fix parenthesis auto selection

Fixes AUTOMATIC1111#13813

* Update requirements_versions.txt

* skip multihead attn for now

* refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb

* refactor: use same updown for both kohya OFT and LyCORIS diag-oft

* refactor: remove unused function

* correct a typo

modify "defaul" to "default"

* add a visible checkbox to input accordion

* eslint

* properly apply sort order for extra network cards when selected from dropdown
allow selection of default sort order in settings
remove 'Default' sort order, replace with 'Name'

* Add SSD-1B as a supported model

* Added memory clearance after deletion

* Use devices.torch_gc() instead of empty_cache()

* added compact prompt option

* compact prompt option disabled by default

* linter

* more changes for AUTOMATIC1111#13865: fix formatting, rename the function, add comment and add a readme entry

* fix img2img_tabs error

* fix exception related to the pix2pix

* Add option to set notification sound volume

* fix pix2pix producing bad results

* moved nested with to single line to remove extra tabs

* removed changes that weren't merged properly

* multiline with statement for readibility

* Update README.md

Modify the stablediffusion dependency address

* Update README.md

Modify the stablediffusion dependency address

* - opensuse compatibility

* Enable prompt hotkeys in style editor

* Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+

* fix added accordion settings options

* ExitStack as alternative to suppress

* implementing script metadata and DAG sorting mechanism

* populate loaded_extensions from extension list instead

* reverse the extension load order so builtin extensions load earlier natively

* add hyperTile

https://github.com/tfernd/HyperTile

* remove the assumption of same name

* allow comma and whitespace as separator

* fix

* bug fix

* dir buttons start with / so only the correct dir will be shown and not dirs with a substrings as name from the dir

* Lint

* Fixes generation restart not working for some users when 'Ctrl+Enter' is pressed

* Adds 'Path' sorting for Extra network cards

* fix gradio video component and canvas fit for inpaint

* hotfix: call shared.state.end() after postprocessing done

* Implement Hypertile

Co-Authored-By: Kieran Hunt <[email protected]>

* copy LDM VAE key from XL

* fix: ignore calc_scale() for COFT which has very small alpha

* feat: LyCORIS/kohya OFT network support

* convert/add hypertile options

* fix ruff - add newline

* Adds tqdm handler to logging_config.py for progress bar integration

* Take into account tqdm not being installed before first boot for logging

* actually adds handler to logging_config.py

* Fix critical issue - unet apply

* Fix inverted option issue

I'm pretty sure I was sleepy while implementing this

* set empty value for SD XL 3rd layer

* fix double gc and decoding with unet context

* feat: fix randn found element of type float at pos 2

Signed-off-by: storyicon <[email protected]>

* use metadata.ini for meta filename

* Option to show batch img2img results in UI

shared.opts.img2img_batch_show_results_limit
limit the number of images returned to the UI for batch img2img
default limit 32
0 no images are shown
-1 unlimited, all images are shown
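
The limit semantics listed above can be sketched in a few lines (the function name is a hypothetical stand-in for the option's consumer):

```python
def limit_batch_results(images, limit=32):
    """Apply img2img_batch_show_results_limit semantics:
    positive N -> show at most N images, 0 -> show none, negative -> show all."""
    if limit == 0:
        return []
    if limit < 0:
        return list(images)
    return list(images[:limit])
```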

* save sysinfo as .json

GitHub now allows uploading of .json files in issues

* rework extensions metadata: use custom sorter that doesn't mess the order as much and ignores cyclic errors, use classes with named fields instead of dictionaries, eliminate some duplicated code

* added option for default behavior of dir buttons

* Add FP32 fallback support on sd_vae_approx

This tries to execute interpolate with FP32 if it failed.

The background is that on some environments, such as M-series chip macOS devices, we get an error like the following:

```
"torch/nn/functional.py", line 3931, in interpolate
        return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, ```--no-half``` doesn't help. Therefore this commit adds an FP32 fallback to solve it.

Note that other submodules may require additional modifications. The following is an example modification to another submodule.

```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py

class Upsample(nn.Module):
..snip..
    def forward(self, x):
        assert x.shape[1] == self.channels
        if self.dims == 3:
            x = F.interpolate(
                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
            )
        else:
            try:
                x = F.interpolate(x, scale_factor=2, mode="nearest")
            except RuntimeError:
                x = F.interpolate(x.to(th.float32), scale_factor=2, mode="nearest").to(x.dtype)
        if self.use_conv:
            x = self.conv(x)
        return x
..snip..
```

The FP32 fallback execution is the same as in sd_vae_approx.py.

* fix  [Bug]: (Dev Branch) Placing "Dimensions" first in "ui_reorder_list" prevents start AUTOMATIC1111#14047

* Update ruff to 0.1.6

* Simplify restart_sampler (suggested by ruff)

* use extension name for determining an extension is installed in the index

* Move exception_records related methods to errors.py

* remove traceback in sysinfo

* move file

* rework hypertile into a built-in extension

* do not save HTML explanations from options page to config

* fix linter errors

* compact prompt layout: preserve scroll when switching between lora tabs

* json.dump(ensure_ascii=False)

improve json readability
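
What `ensure_ascii=False` buys in readability, in two lines (sample data is illustrative only):

```python
import json

settings = {"sd_vae": "vae-ft-mse-840000", "备注": "默认"}
escaped = json.dumps(settings)                       # non-ASCII becomes \uXXXX
readable = json.dumps(settings, ensure_ascii=False)  # keeps CJK text as-is
```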

* add categories to settings

* also consider extension url

* add Block component creation callback

* catch uncaught exception with ui creation scripts

prevent total webui crash

* Allow use of multiple styles csv files

* bugfix for warning message (#6)

* bugfix for warning message (#6)

* bugfix for warning message

* bugfix error message

* Allow use of multiple styles csv files
* AUTOMATIC1111#14122
Fix edge case where style text has multiple {prompt} placeholders
* AUTOMATIC1111#14005
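
The multiple-`{prompt}` edge case can be sketched like this; the helper name is hypothetical, and the webui's real style code handles more cases.

```python
def apply_style(style_text, prompt):
    """Replace every {prompt} placeholder, not just the first;
    styles without a placeholder get the prompt appended."""
    if "{prompt}" in style_text:
        return style_text.replace("{prompt}", prompt)
    return f"{style_text}, {prompt}" if style_text else prompt
```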

* Support XYZ scripts / split hires path from unet

* cache divisors / fix ruff
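
Divisor caching for tile-size selection could look like the following functools-based sketch; the helper names are hypothetical, not HyperTile's actual API.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def divisors(n):
    """All divisors of n, computed once per dimension and then cached."""
    return tuple(d for d in range(1, n + 1) if n % d == 0)

@lru_cache(maxsize=None)
def largest_tile(n, max_tile):
    """Largest divisor of n that does not exceed max_tile."""
    return max(d for d in divisors(n) if d <= max_tile)
```

Since latent dimensions repeat across steps, caching makes the divisor search effectively free after the first call.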

* fix ruff in hypertile_xyz.py

* fix ruff - set comprehension

* hypertile_xyz: we don't need isnumeric check for AxisOption

* Update devices.py

fixes an issue where `--use-cpu all` properly makes SD run on the CPU but leaves ControlNet (and presumably other extensions) pointed at the GPU, causing a crash in ControlNet due to a device mismatch between SD and CN

AUTOMATIC1111#14097
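
The shape of the fix can be sketched as a single device lookup that the core model and extensions both consult. This is a hedged simplification: the real `devices.get_device_for` differs in its details.

```python
def get_device_for(task, use_cpu, default_device="cuda:0"):
    """Route a task to CPU when it, or 'all', appears in --use-cpu.

    use_cpu: set of task names parsed from the command line.
    Extensions that call this get the same answer as the core model,
    which avoids mixing cpu and cuda tensors in one forward pass.
    """
    if "all" in use_cpu or task in use_cpu:
        return "cpu"
    return default_device
```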

* fix Auto focal point crop for opencv >= 4.8.x

autocrop.download_and_cache_models
in opencv >= 4.8 the face detection model was updated
download the model matching the opencv version
returns the model path or raises an exception

* reformat file with uniform indentation

* Revert "Add FP32 fallback support on sd_vae_approx"

This reverts commit 58c1954.
Since the modification is expected to move to mac_specific.py
(AUTOMATIC1111#14046 (comment))

* Add FP32 fallback support on torch.nn.functional.interpolate

This tries to execute interpolate with FP32 if it failed.

The background is that on some environments, such as M-series chip macOS devices, we get an error like the following:

```
"torch/nn/functional.py", line 3931, in interpolate
        return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, ```--no-half``` doesn't help. Therefore this commit adds an FP32 fallback to solve it.

Note that ```upsample_nearest2d``` is called from ```torch.nn.functional.interpolate```, so the fallback for torch.nn.functional.interpolate is needed in both
```modules/sd_vae_approx.py```'s ```VAEApprox.forward``` and
```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py```'s ```Upsample.forward```.
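
The fallback pattern itself, written torch-free so it stays self-contained: `FakeTensor` and the failing `upsample` below are stand-ins for a half-precision tensor and the unimplemented kernel, not real torch objects.

```python
class FakeTensor:
    """Stand-in for a tensor that carries a dtype tag."""
    def __init__(self, value, dtype="float16"):
        self.value, self.dtype = value, dtype
    def astype(self, dtype):
        return FakeTensor(self.value, dtype)

def upsample(t):
    # mimic upsample_nearest2d, which is unimplemented for 'Half'
    if t.dtype != "float32":
        raise RuntimeError(f'"upsample" not implemented for {t.dtype!r}')
    return FakeTensor(t.value * 2, "float32")

def fp32_fallback(op, t):
    """Try op at the input's dtype; on failure retry in FP32 and cast back."""
    try:
        return op(t)
    except RuntimeError:
        return op(t.astype("float32")).astype(t.dtype)

out = fp32_fallback(upsample, FakeTensor(3))
```

The happy path pays nothing extra; only the failing dtype takes the upcast-and-cast-back detour.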

* Fix the Ruff error about unused import

* Initial IPEX support

* add max-heigh/width to global-popup-inner

prevent the pop-up from growing so large that closing it becomes impossible

* Close popups with escape key

* Fix bug where is_using_v_parameterization_for_sd2 fails because the sd_hijack is only partially undone

* Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load

* infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page

* Disable ipex autocast due to its bad perf

* split UI settings page into many

* put code that can cause an exception into its own function for AUTOMATIC1111#14120

* Fix fp64

* extras tab batch: actually use original filename
preprocessing upscale: do not do an extra upscale step if it's not needed

* Remove webui-ipex-user.bat

* remove Train/Preprocessing tab and put all its functionality into extras batch images mode

* potential fix for AUTOMATIC1111#14172

* alternate implementation for unet forward replacement that does not depend on hijack being applied

* Fix `save_samples` being checked early when saving masked composite

* Re-add setting lost as part of e294e46

* rework mask and mask_composite logic

* Add import_hook hack to work around basicsr incompatibility

Fixes AUTOMATIC1111#13985

* Update launch_utils.py to fix wrong dep. checks and reinstalls

Fixes failing dependency checks for extensions whose package name and import name differ (for example ffmpeg-python / ffmpeg), which currently causes unneeded reinstalls of packages at runtime.

In fact, with the current code the same string is used both when installing a package and when checking for its presence, as the following example shows:

> launch_utils.run_pip("install ffmpeg-python", "required package")
[ Installing required package: "ffmpeg-python" ... ]
[ Installed ]

> launch_utils.is_installed("ffmpeg-python")
False

... which would actually return true with:

> launch_utils.is_installed("ffmpeg")
True
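
A check that handles both names can be sketched as: try the distribution name pip knows, then fall back to the import name. This is a hypothetical simplification of `launch_utils.is_installed`, not its actual implementation.

```python
from importlib import metadata, util

def is_installed(package):
    """True if `package` is available under its distribution *or* import name."""
    try:
        metadata.distribution(package)  # e.g. "ffmpeg-python", as pip installs it
        return True
    except metadata.PackageNotFoundError:
        # e.g. "ffmpeg", the name the extension actually imports
        return util.find_spec(package.replace("-", "_")) is not None
```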

* Lint

* make webui not crash when running with --disable-all-extensions option

* update changelog

* repair old handler for postprocessing API

* repair old handler for postprocessing API in a way that doesn't break interface

* add hypertile infotext

* Merge pull request AUTOMATIC1111#14203 from AUTOMATIC1111/remove-clean_text()

remove clean_text()

* fix Inpaint Image Appears Behind Some UI Elements anapnoe#206

* fix side panel show/hide button hot zone does not use the entire width anapnoe#204

* Merge pull request AUTOMATIC1111#14300 from AUTOMATIC1111/oft_fixes

Fix wrong implementation in network_oft

* Merge pull request AUTOMATIC1111#14296 from akx/paste-resolution

Allow pasting in WIDTHxHEIGHT strings into the width/height fields

* Merge pull request AUTOMATIC1111#14270 from kaalibro/extra-options-elem-id

Assign id for "extra_options". Replace numeric field with slider.

* Merge pull request AUTOMATIC1111#14276 from AUTOMATIC1111/fix-styles

Fix styles

* Merge pull request AUTOMATIC1111#14266 from kaalibro/dev

Re-add setting lost as part of e294e46

* Merge pull request AUTOMATIC1111#14229 from Nuullll/ipex-embedding

[IPEX] Fix embedding and ControlNet

* Merge pull request AUTOMATIC1111#14230 from AUTOMATIC1111/add-option-Live-preview-in-full-page-image-viewer

add option: Live preview in full page image viewer

* Merge pull request AUTOMATIC1111#14216 from wfjsw/state-dict-ref-comparison

change state dict comparison to ref compare

* Merge pull request AUTOMATIC1111#14237 from ReneKroon/dev

AUTOMATIC1111#13354 : solve lora loading issue

* Merge pull request AUTOMATIC1111#14307 from AUTOMATIC1111/default-Falst-js_live_preview_in_modal_lightbox

default False js_live_preview_in_modal_lightbox

* update to 1.7 from upstream

* Update README.md

* Update screenshot.png

* Update CITATION.cff

* update to latest version

* update to latest version

---------

Signed-off-by: storyicon <[email protected]>
Co-authored-by: Gleb Alekseev <[email protected]>
Co-authored-by: missionfloyd <[email protected]>
Co-authored-by: AUTOMATIC1111 <[email protected]>
Co-authored-by: Won-Kyu Park <[email protected]>
Co-authored-by: Khachatur Avanesian <[email protected]>
Co-authored-by: v0xie <[email protected]>
Co-authored-by: avantcontra <[email protected]>
Co-authored-by: David Benson <[email protected]>
Co-authored-by: Meerkov <[email protected]>
Co-authored-by: Emily Zeng <[email protected]>
Co-authored-by: w-e-w <[email protected]>
Co-authored-by: gibiee <[email protected]>
Co-authored-by: Ritesh Gangnani <riteshgangnani10>
Co-authored-by: GerryDE <[email protected]>
Co-authored-by: fuchen.ljl <[email protected]>
Co-authored-by: Alessandro de Oliveira Faria (A.K.A. CABELO) <[email protected]>
Co-authored-by: wfjsw <[email protected]>
Co-authored-by: aria1th <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: kaalibro <[email protected]>
Co-authored-by: anapnoe <[email protected]>
Co-authored-by: AngelBottomless <[email protected]>
Co-authored-by: Kieran Hunt <[email protected]>
Co-authored-by: Lucas Daniel Velazquez M <[email protected]>
Co-authored-by: Your Name <[email protected]>
Co-authored-by: storyicon <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: hidenorly <[email protected]>
Co-authored-by: Aarni Koskela <[email protected]>
Co-authored-by: Charlie Joynt <[email protected]>
Co-authored-by: obsol <[email protected]>
Co-authored-by: Nuullll <[email protected]>
Co-authored-by: MrCheeze <[email protected]>
Co-authored-by: catboxanon <[email protected]>
Co-authored-by: illtellyoulater <[email protected]>

* Z (#12)

* added option to play notification sound or not

* Convert (emphasis) to (emphasis:1.1)

per @SirVeggie's suggestion
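
The old-to-new emphasis conversion can be sketched with a regex. This is a deliberate simplification: the webui's real parser also handles nesting, escapes, and square brackets.

```python
import re

def convert_emphasis(prompt):
    """Turn bare (emphasis) into (emphasis:1.1); leave weighted groups alone."""
    def weight(match):
        inner = match.group(1)
        return match.group(0) if ":" in inner else f"({inner}:1.1)"
    return re.sub(r"\(([^()]+)\)", weight, prompt)
```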

* Make attention conversion optional

Fix square brackets multiplier

* put notification.mp3 option at the end of the page

* more general case of adding an infotext when no images have been generated

* use shallow copy for AUTOMATIC1111#13535

* remove duplicated code

* support webui.settings.bat

* Start / Restart generation by Ctrl (Alt) + Enter

Add ability to interrupt current generation and start generation again by Ctrl (Alt) + Enter

* add an option to not print stack traces on ctrl+c.

* repair unload sd checkpoint button

* respect keyedit_precision_attention setting when converting from old (((attention))) syntax

* Update script.js

Exclude lambda

* Update script.js

LF instead CRLF

* Update script.js

* Add files via upload

LF

* wip incorrect OFT implementation

* inference working but SLOW

* faster by using cached R in forward

* faster by calculating R in updown and using cached R in forward

* refactor: fix constraint, re-use get_weight

* style: formatting

* style: fix ambiguous variable name

* rework some of changes for emphasis editing keys, force conversion of old-style emphasis

* fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1)

* fix bug when using --gfpgan-models-path

* fix Blank line contains whitespace

* refactor: use forward hook instead of custom forward

* fix: return orig weights during updown, merge weights before forward

* fix: support multiplier, no forward pass hook

* style: cleanup oft

* fix: use merge_weight to cache value

* refactor: remove unused OFT functions

* fix: multiplier applied twice in finalize_updown

* style: conform style

* Update prompts_from_file script to allow concatenating entries with the general prompt.

* linting issue

* call state.jobnext() before postproces*()

* Fix AUTOMATIC1111#13796

Fix comment error that makes understanding scheduling more confusing.

* test implementation based on kohaku diag-oft implementation

* detect diag_oft type

* no idea what I'm doing; trying to support both types of OFT. kblueleaf's diag_oft has a MultiheadAttn module which kohya's doesn't, so attempt to create a new module based off network_lora.py; currently errors about tensor dim mismatch

* added accordion settings options

* Fix parenthesis auto selection

Fixes AUTOMATIC1111#13813

* Update requirements_versions.txt

* skip multihead attn for now

* refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb

* refactor: use same updown for both kohya OFT and LyCORIS diag-oft

* refactor: remove unused function

* correct a typo

modify "defaul" to "default"

* add a visible checkbox to input accordion

* eslint

* properly apply sort order for extra network cards when selected from dropdown
allow selection of default sort order in settings
remove 'Default' sort order, replace with 'Name'

* Add SSD-1B as a supported model

* Added memory clearance after deletion

* Use devices.torch_gc() instead of empty_cache()

* added compact prompt option

* compact prompt option disabled by default

* linter

* more changes for AUTOMATIC1111#13865: fix formatting, rename the function, add comment and add a readme entry

* fix img2img_tabs error

* fix exception related to the pix2pix

* Add option to set notification sound volume


---------

Signed-off-by: storyicon <[email protected]>
Co-authored-by: Gleb Alekseev <[email protected]>
Co-authored-by: missionfloyd <[email protected]>
Co-authored-by: AUTOMATIC1111 <[email protected]>
Co-authored-by: Won-Kyu Park <[email protected]>
Co-authored-by: Khachatur Avanesian <[email protected]>
Co-authored-by: v0xie <[email protected]>
Co-authored-by: avantcontra <[email protected]>
Co-authored-by: David Benson <[email protected]>
Co-authored-by: Meerkov <[email protected]>
Co-authored-by: Emily Zeng <[email protected]>
Co-authored-by: w-e-w <[email protected]>
Co-authored-by: gibiee <[email protected]>
Co-authored-by: GerryDE <[email protected]>
Co-authored-by: fuchen.ljl <[email protected]>
Co-authored-by: Alessandro de Oliveira Faria (A.K.A. CABELO) <[email protected]>
Co-authored-by: wfjsw <[email protected]>
Co-authored-by: aria1th <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: kaalibro <[email protected]>
Co-authored-by: anapnoe <[email protected]>
Co-authored-by: AngelBottomless <[email protected]>
Co-authored-by: Kieran Hunt <[email protected]>
Co-authored-by: Lucas Daniel Velazquez M <[email protected]>
Co-authored-by: Your Name <[email protected]>
Co-authored-by: storyicon <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: hidenorly <[email protected]>
Co-authored-by: Aarni Koskela <[email protected]>
Co-authored-by: Charlie Joynt <[email protected]>
Co-authored-by: obsol <[email protected]>
Co-authored-by: Nuullll <[email protected]>
Co-authored-by: MrCheeze <[email protected]>
Co-authored-by: catboxanon <[email protected]>
Co-authored-by: illtellyoulater <[email protected]>
martianunlimited added a commit to martianunlimited/stable-diffusion-webui-ux that referenced this issue Jan 25, 2024
* fix IndexError: list index out of range when interrupted during postprocess

* added option to play notification sound or not

* Convert (emphasis) to (emphasis:1.1)

per @SirVeggie's suggestion

* Make attention conversion optional

Fix square brackets multiplier

* put notification.mp3 option at the end of the page

* more general case of adding an infotext when no images have been generated

* use shallow copy for AUTOMATIC1111#13535

* remove duplicated code

* support webui.settings.bat

* Start / Restart generation by Ctrl (Alt) + Enter

Add ability to interrupt current generation and start generation again by Ctrl (Alt) + Enter

* add an option to not print stack traces on ctrl+c.

* repair unload sd checkpoint button

* respect keyedit_precision_attention setting when converting from old (((attention))) syntax

* Update script.js

Exclude lambda

* Update script.js

LF instead CRLF

* Update script.js

* Add files via upload

LF

* wip incorrect OFT implementation

* inference working but SLOW

* faster by using cached R in forward

* faster by calculating R in updown and using cached R in forward

* refactor: fix constraint, re-use get_weight

* style: formatting

* style: fix ambiguous variable name

* rework some of changes for emphasis editing keys, force conversion of old-style emphasis

* fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1)

* fix bug when using --gfpgan-models-path

* fix Blank line contains whitespace

* refactor: use forward hook instead of custom forward

* fix: return orig weights during updown, merge weights before forward

* fix: support multiplier, no forward pass hook

* style: cleanup oft

* fix: use merge_weight to cache value

* refactor: remove used OFT functions

* fix: multiplier applied twice in finalize_updown

* style: conform style

* Update prompts_from_file script to allow concatenating entries with the general prompt.

* linting issue

* call state.jobnext() before postproces*()

* Fix AUTOMATIC1111#13796

Fix comment error that makes understanding scheduling more confusing.

* test implementation based on kohaku diag-oft implementation

* detect diag_oft type

* no idea what i'm doing, trying to support both type of OFT, kblueleaf diag_oft has MultiheadAttn which kohya's doesn't?, attempt create new module based off network_lora.py, errors about tensor dim mismatch

* added accordion settings options

* Fix parenthesis auto selection

Fixes AUTOMATIC1111#13813

* Update requirements_versions.txt

* skip multihead attn for now

* refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb

* refactor: use same updown for both kohya OFT and LyCORIS diag-oft

* refactor: remove unused function

* correct a typo

modify "defaul" to "default"

* add a visible checkbox to input accordion

* eslint

* properly apply sort order for extra network cards when selected from dropdown
allow selection of default sort order in settings
remove 'Default' sort order, replace with 'Name'

* Add SSD-1B as a supported model

* Added memory clearance after deletion

* Use devices.torch_gc() instead of empty_cache()

* added compact prompt option

* compact prompt option disabled by default

* linter

* more changes for AUTOMATIC1111#13865: fix formatting, rename the function, add comment and add a readme entry

* fix img2img_tabs error

* fix exception related to the pix2pix

* Add option to set notification sound volume

* fix pix2pix producing bad results

* moved nested with to single line to remove extra tabs

* removed changes that weren't merged properly

* multiline with statement for readability

* Update README.md

Modify the stablediffusion dependency address

* Update README.md

Modify the stablediffusion dependency address

* - opensuse compatibility

* Enable prompt hotkeys in style editor

* Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+

* fix added accordion settings options

* ExitStack as alternative to suppress

* implementing script metadata and DAG sorting mechanism

* populate loaded_extensions from extension list instead

* reverse the extension load order so builtin extensions load earlier natively

* add hyperTile

https://github.com/tfernd/HyperTile

* remove the assumption of same name

* allow comma and whitespace as separator

* fix

* bug fix

* dir buttons start with / so only the correct dir will be shown, and not dirs whose names contain the dir as a substring

* Lint

* Fixes generation restart not working for some users when 'Ctrl+Enter' is pressed

* Adds 'Path' sorting for Extra network cards

* hotfix: call shared.state.end() after postprocessing done

* Implement Hypertile

Co-Authored-By: Kieran Hunt <[email protected]>

* copy LDM VAE key from XL

* fix: ignore calc_scale() for COFT which has very small alpha

* feat: LyCORIS/kohya OFT network support

* convert/add hypertile options

* fix ruff - add newline

* Adds tqdm handler to logging_config.py for progress bar integration

* Take into account tqdm not being installed before first boot for logging

* actually adds handler to logging_config.py

* Fix critical issue - unet apply

* Fix inverted option issue

I'm pretty sure I was sleepy while implementing this

* set empty value for SD XL 3rd layer

* fix double gc and decoding with unet context

* feat: fix randn found element of type float at pos 2

Signed-off-by: storyicon <[email protected]>

* use metadata.ini for meta filename

* Option to show batch img2img results in UI

shared.opts.img2img_batch_show_results_limit
limits the number of images returned to the UI for batch img2img
default limit: 32
0: no images are shown
-1: unlimited, all images are shown
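The limit semantics described above can be sketched as follows (a hypothetical helper for illustration, not the actual webui code):

```python
def limit_batch_results(images, limit):
    """Return the subset of batch img2img results to show in the UI.

    Assumed semantics from the option description:
      -1 -> unlimited, all images are shown
       0 -> no images are shown
       n -> at most the first n images are shown
    """
    if limit < 0:
        return images
    return images[:limit]
```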

* save sysinfo as .json

GitHub now allows uploading of .json files in issues

* rework extensions metadata: use custom sorter that doesn't mess the order as much and ignores cyclic errors, use classes with named fields instead of dictionaries, eliminate some duplicated code

* added option for default behavior of dir buttons

* Add FP32 fallback support on sd_vae_approx

This tries to execute interpolate with FP32 if it fails.

The background is that on some environments, such as Apple silicon (M-series) macOS devices, we get an error as follows:

```
"torch/nn/functional.py", line 3931, in interpolate
        return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, ```--no-half``` doesn't help. Therefore this commit adds an FP32 fallback execution to solve it.

Note that submodules may require additional modifications. The following is an example modification in another submodule.

```python
# repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py

class Upsample(nn.Module):
    # ..snip..
    def forward(self, x):
        assert x.shape[1] == self.channels
        if self.dims == 3:
            x = F.interpolate(
                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
            )
        else:
            try:
                x = F.interpolate(x, scale_factor=2, mode="nearest")
            except RuntimeError:
                # FP32 fallback when half-precision interpolate is unsupported
                x = F.interpolate(x.to(th.float32), scale_factor=2, mode="nearest").to(x.dtype)
        if self.use_conv:
            x = self.conv(x)
        return x
    # ..snip..
```

This is the same FP32 fallback execution as in sd_vae_approx.py.

* fix  [Bug]: (Dev Branch) Placing "Dimensions" first in "ui_reorder_list" prevents start AUTOMATIC1111#14047

* Update ruff to 0.1.6

* Simplify restart_sampler (suggested by ruff)

* use extension name for determining an extension is installed in the index

* Move exception_records related methods to errors.py

* remove traceback in sysinfo

* move file

* rework hypertile into a built-in extension

* do not save HTML explanations from options page to config

* fix linter errors

* compact prompt layout: preserve scroll when switching between lora tabs

* json.dump(ensure_ascii=False)

improve json readability
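A minimal illustration of the difference (standard library only):

```python
import json

data = {"prompt": "日本語"}

escaped = json.dumps(data)                       # non-ASCII escaped to \uXXXX sequences
readable = json.dumps(data, ensure_ascii=False)  # characters kept as-is

print(escaped)
print(readable)
```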

* add categories to settings

* also consider extension url

* add Block component creation callback

* catch uncaught exception with ui creation scripts

prevent total webui crash

* Allow use of multiple styles csv files

* bugfix for warning message (#6)

* bugfix for warning message (#6)

* bugfix for warning message

* bugfix error message

* Allow use of multiple styles csv files
* AUTOMATIC1111#14122
Fix edge case where style text has multiple {prompt} placeholders
* AUTOMATIC1111#14005
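The multi-placeholder edge case above can be sketched like this (a hypothetical helper for illustration, not the actual webui function):

```python
def apply_style_text(style_text, prompt):
    # If the style contains {prompt} placeholders, substitute every one
    # of them; otherwise append the prompt to the style text.
    if "{prompt}" in style_text:
        return style_text.replace("{prompt}", prompt)
    return f"{style_text}, {prompt}" if style_text else prompt
```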

* Support XYZ scripts / split hires path from unet

* cache divisors / fix ruff

* fix ruff in hypertile_xyz.py

* fix ruff - set comprehension

* hypertile_xyz: we don't need isnumeric check for AxisOption

* Update devices.py

fixes an issue where ```--use-cpu all``` properly makes SD run on the CPU but leaves ControlNet (and other extensions, I presume) pointed at the GPU, causing a crash in ControlNet due to a device mismatch between SD and CN

AUTOMATIC1111#14097
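The root cause is extensions assuming the GPU instead of asking the shared devices module. A simplified sketch of the centralized lookup (names are illustrative, not the actual devices.py API):

```python
def get_device_for(task, cpu_tasks):
    # cpu_tasks mirrors the --use-cpu command-line list, e.g. ["all"]
    # or ["interrogate", "codeformer"]; extensions should consult this
    # shared lookup instead of hardcoding "cuda".
    if "all" in cpu_tasks or task in cpu_tasks:
        return "cpu"
    return "cuda"
```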

* fix Auto focal point crop for opencv >= 4.8.x

autocrop.download_and_cache_models
in opencv >= 4.8 the face detection model was updated
downloads the model based on the opencv version
returns the model path or raises an exception

* reformat file with uniform indentation

* Revert "Add FP32 fallback support on sd_vae_approx"

This reverts commit 58c1954.
Since the modification is expected to move to mac_specific.py
(AUTOMATIC1111#14046 (comment))

* Add FP32 fallback support on torch.nn.functional.interpolate

This tries to execute interpolate with FP32 if it fails.

The background is that on some environments, such as Apple silicon (M-series) macOS devices, we get an error as follows:

```
"torch/nn/functional.py", line 3931, in interpolate
        return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, ```--no-half``` doesn't help. Therefore this commit adds an FP32 fallback execution to solve it.

Note that ```upsample_nearest2d``` is called from ```torch.nn.functional.interpolate```, and the fallback for torch.nn.functional.interpolate is necessary in
```modules/sd_vae_approx.py```'s ```VAEApprox.forward``` and in
```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py```'s ```Upsample.forward```

* Fix the Ruff error about unused import

* Initial IPEX support

* add max-height/width to global-popup-inner

prevent the pop-up from becoming so big that exiting it is impossible

* Close popups with escape key

* Fix bug where is_using_v_parameterization_for_sd2 fails because the sd_hijack is only partially undone

* Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load

* infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page

* Disable ipex autocast due to its bad perf

* split UI settings page into many

* put code that can cause an exception into its own function for AUTOMATIC1111#14120

* Fix fp64

* extras tab batch: actually use original filename
preprocessing upscale: do not do an extra upscale step if it's not needed

* Remove webui-ipex-user.bat

* remove Train/Preprocessing tab and put all its functionality into extras batch images mode

* potential fix for AUTOMATIC1111#14172

* alternate implementation for unet forward replacement that does not depend on hijack being applied

* Fix `save_samples` being checked early when saving masked composite

* Re-add setting lost as part of e294e46

* rework mask and mask_composite logic

* Add import_hook hack to work around basicsr incompatibility

Fixes AUTOMATIC1111#13985

* Update launch_utils.py to fix wrong dep. checks and reinstalls

Fixes failing dependency checks for extensions whose package name differs from their import name (for example ffmpeg-python / ffmpeg), which currently causes unneeded reinstalls of packages at runtime.

In fact, with the current code, the same string is used when installing a package and when checking for its presence, as you can see in the following example:

```
> launch_utils.run_pip("install ffmpeg-python", "required package")
[ Installing required package: "ffmpeg-python" ... ]
[ Installed ]

> launch_utils.is_installed("ffmpeg-python")
False
```

... which would actually return True with:

```
> launch_utils.is_installed("ffmpeg")
True
```
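The mismatch happens because a naive check treats the pip package name as the import name. A minimal sketch of that failure mode using only the standard library (`is_installed` here is an illustrative stand-in, not the real launch_utils code):

```python
import importlib.util

def is_installed(name):
    # Naive check: looks up `name` as an *import* name. For packages
    # like ffmpeg-python, the pip name and the import name (ffmpeg)
    # differ, so checking the pip name always reports "not installed".
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False

print(is_installed("json"))         # stdlib module, found by import name
print(is_installed("json-python"))  # pip-style name, not an import name
```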

* Lint

* make webui not crash when running with --disable-all-extensions option

* update changelog

* repair old handler for postprocessing API

* repair old handler for postprocessing API in a way that doesn't break interface

* add hypertile infotext

* Merge pull request AUTOMATIC1111#14203 from AUTOMATIC1111/remove-clean_text()

remove clean_text()

* fix Inpaint Image Appears Behind Some UI Elements anapnoe#206

* fix side panel show/hide button hot zone does not use the entire width anapnoe#204

* Merge pull request AUTOMATIC1111#14300 from AUTOMATIC1111/oft_fixes

Fix wrong implementation in network_oft

* Merge pull request AUTOMATIC1111#14296 from akx/paste-resolution

Allow pasting in WIDTHxHEIGHT strings into the width/height fields

* Merge pull request AUTOMATIC1111#14270 from kaalibro/extra-options-elem-id

Assign id for "extra_options". Replace numeric field with slider.

* Merge pull request AUTOMATIC1111#14276 from AUTOMATIC1111/fix-styles

Fix styles

* Merge pull request AUTOMATIC1111#14266 from kaalibro/dev

Re-add setting lost as part of e294e46

* Merge pull request AUTOMATIC1111#14229 from Nuullll/ipex-embedding

[IPEX] Fix embedding and ControlNet

* Merge pull request AUTOMATIC1111#14230 from AUTOMATIC1111/add-option-Live-preview-in-full-page-image-viewer

add option: Live preview in full page image viewer

* Merge pull request AUTOMATIC1111#14216 from wfjsw/state-dict-ref-comparison

change state dict comparison to ref compare

* Merge pull request AUTOMATIC1111#14237 from ReneKroon/dev

AUTOMATIC1111#13354 : solve lora loading issue

* Merge pull request AUTOMATIC1111#14307 from AUTOMATIC1111/default-Falst-js_live_preview_in_modal_lightbox

default False js_live_preview_in_modal_lightbox

* update to 1.7 from upstream

* Update README.md

* Update screenshot.png

* Update CITATION.cff

* update to latest version

* update to latest version

---------

Signed-off-by: storyicon <[email protected]>
Co-authored-by: Won-Kyu Park <[email protected]>
Co-authored-by: Gleb Alekseev <[email protected]>
Co-authored-by: missionfloyd <[email protected]>
Co-authored-by: AUTOMATIC1111 <[email protected]>
Co-authored-by: Khachatur Avanesian <[email protected]>
Co-authored-by: v0xie <[email protected]>
Co-authored-by: avantcontra <[email protected]>
Co-authored-by: David Benson <[email protected]>
Co-authored-by: Meerkov <[email protected]>
Co-authored-by: Emily Zeng <[email protected]>
Co-authored-by: w-e-w <[email protected]>
Co-authored-by: gibiee <[email protected]>
Co-authored-by: Ritesh Gangnani <riteshgangnani10>
Co-authored-by: GerryDE <[email protected]>
Co-authored-by: fuchen.ljl <[email protected]>
Co-authored-by: Alessandro de Oliveira Faria (A.K.A. CABELO) <[email protected]>
Co-authored-by: wfjsw <[email protected]>
Co-authored-by: aria1th <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: kaalibro <[email protected]>
Co-authored-by: AngelBottomless <[email protected]>
Co-authored-by: Kieran Hunt <[email protected]>
Co-authored-by: Lucas Daniel Velazquez M <[email protected]>
Co-authored-by: Your Name <[email protected]>
Co-authored-by: storyicon <[email protected]>
Co-authored-by: Tom Haelbich <[email protected]>
Co-authored-by: hidenorly <[email protected]>
Co-authored-by: Aarni Koskela <[email protected]>
Co-authored-by: Charlie Joynt <[email protected]>
Co-authored-by: obsol <[email protected]>
Co-authored-by: Nuullll <[email protected]>
Co-authored-by: MrCheeze <[email protected]>
Co-authored-by: catboxanon <[email protected]>
Co-authored-by: illtellyoulater <[email protected]>
Co-authored-by: anapnoe <[email protected]>
ruchej pushed a commit to ruchej/stable-diffusion-webui that referenced this issue Sep 30, 2024