
when Using SDXL-control-lora with 6GB VRAM, "Ran out of memory" #1781

Open
youyegit opened this issue Oct 18, 2023 · 8 comments


@youyegit

I have tested SDXL in ComfyUI with an RTX 2060 6 GB.
When I use "sai_xl_canny_128lora.safetensors" or "sai_xl_depth_128lora.safetensors", it shows "Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding."
But when I use "diffusers_xl_canny_small.safetensors" or "diffusers_xl_depth_small.safetensors", it works well.

When I use clip_vision, it also works well.

So the problem may come from ComfyUI's nodes. This may be a bug.

Thanks for the great work, and I hope ComfyUI keeps getting better.
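The warning quoted above comes from a fallback pattern: ComfyUI first tries to decode the whole latent in one pass, and only when that allocation fails does it retry tile by tile (the bug in this issue is that even the tiled retry ran out of memory). A minimal sketch of that fallback, with simulated memory instead of real VRAM; the names `decode_full`, `decode_tiled`, and `VRAM_BUDGET` are illustrative, not ComfyUI's actual API:

```python
VRAM_BUDGET = 64  # pretend we can only decode 64 latent elements at once

def decode_full(latents):
    """Decode everything in one pass; 'runs out of memory' if too large."""
    if len(latents) > VRAM_BUDGET:
        raise MemoryError("Ran out of memory when regular VAE decoding")
    return [x * 8 for x in latents]  # stand-in for the VAE's 8x upscale

def decode_tiled(latents, tile=32):
    """Decode in small tiles so each pass fits in the budget."""
    out = []
    for i in range(0, len(latents), tile):
        out.extend(decode_full(latents[i:i + tile]))
    return out

def decode(latents):
    # Try the fast full decode first; fall back to tiling on OOM, which
    # is exactly when the warning from the issue title gets printed.
    try:
        return decode_full(latents)
    except MemoryError:
        print("Warning: Ran out of memory when regular VAE decoding, "
              "retrying with tiled VAE decoding.")
        return decode_tiled(latents)

pixels = decode(list(range(100)))  # too big for one pass -> tiled path
```

In the real code path (`comfy/sd.py` in the traceback below) the exception caught is `torch.cuda.OutOfMemoryError` rather than `MemoryError`.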

@youyegit
Author

And in the ComfyUI user interface, it shows this error:

Error occurred when executing VAEDecode:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 4.58 GiB
Requested : 256.00 MiB
Device limit : 6.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "E:\Z_comfyui_2023-10-17\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\Z_comfyui_2023-10-17\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\Z_comfyui_2023-10-17\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\Z_comfyui_2023-10-17\ComfyUI\nodes.py", line 267, in decode
return (vae.decode(samples["samples"]), )
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\sd.py", line 206, in decode
pixel_samples = self.decode_tiled_(samples_in)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\sd.py", line 173, in decode_tiled_
comfy.utils.tiled_scale(samples, decode_fn, tile_x * 2, tile_y // 2, overlap, upscale_amount = 8, pbar = pbar) +
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\utils.py", line 395, in tiled_scale
ps = function(s_in).cpu()
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\sd.py", line 170, in <lambda>
decode_fn = lambda a: (self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)) + 1.0).float()
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\models\autoencoder.py", line 94, in decode
dec = self.decoder(z)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 726, in forward
h = self.up[i_level].upsample(h)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 72, in forward
x = self.conv(x)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,

@youyegit
Author

This is the log from ComfyUI when using "sai_xl_canny_128lora.safetensors":

Python 3.10.6
pip 23.1.2 from E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\pip (python 3.10)
** ComfyUI start up time: 2023-10-18 13:03:59.815647

Prestartup times for custom nodes:
0.0 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 6144 MB, total RAM 32489 MB
xformers version: 0.0.20
Forcing FP16.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : cudaMallocAsync
VAE dtype: torch.float32
disabling upcasting of attention
Using xformers cross attention
Setting temp directory to: E:\Z_comfyui_2023-10-17\temp\temp
Adding extra search path checkpoints E:/AI/SD/sd-webui/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs E:/AI/SD/sd-webui/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae E:/AI/SD/sd-webui/stable-diffusion-webui/models/VAE
Adding extra search path loras E:/AI/SD/sd-webui/stable-diffusion-webui/models/Lora
Adding extra search path upscale_models E:/AI/SD/sd-webui/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models E:/AI/SD/sd-webui/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings E:/AI/SD/sd-webui/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks E:/AI/SD/sd-webui/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet E:/AI/SD/sd-webui/stable-diffusion-webui/models/ControlNet
Adding extra search path checkpoints ../extra_model/models/checkpoints
Adding extra search path configs ../extra_model/models/configs
Adding extra search path vae ../extra_model/models/vae
Adding extra search path loras ../extra_model/models/loras
Adding extra search path clip_vision ../extra_model/models/clip_vision
Adding extra search path controlnet ../extra_model/models/controlnet
Adding extra search path custom_nodes ../extra_model/custom_nodes

Loading: ComfyUI-Manager (V0.30.3)
ComfyUI Revision: 1479 [f8032cd]
Registered sys.path: ['E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\init.py', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_pycocotools', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_oneformer', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_midas_repo', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'E:\Z_comfyui_2023-10-17\ComfyUI\comfy', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\git\ext\gitdb', 'E:\Z_comfyui_2023-10-17\ComfyUI', 'E:\Z_comfyui_2023-10-17\python_embeded\python310.zip', 'E:\Z_comfyui_2023-10-17\python_embeded\DLLs', 'E:\Z_comfyui_2023-10-17\python_embeded\lib', 'E:\Z_comfyui_2023-10-17\python_embeded', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\win32', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\win32\lib', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\Pythonwin', '../..']
Registered sys.path: ['E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\init.py', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_pycocotools', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_oneformer', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_midas_repo', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\init.py', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_pycocotools', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_oneformer', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_midas_repo', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'E:\Z_comfyui_2023-10-17\ComfyUI\comfy', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\git\ext\gitdb', 'E:\Z_comfyui_2023-10-17\ComfyUI', 'E:\Z_comfyui_2023-10-17\python_embeded\python310.zip', 'E:\Z_comfyui_2023-10-17\python_embeded\DLLs', 'E:\Z_comfyui_2023-10-17\python_embeded\lib', 'E:\Z_comfyui_2023-10-17\python_embeded', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\win32', 'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\win32\lib', 
'E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\Pythonwin', '../..', 'E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\tdxh_node_comfyui']
Total VRAM 6144 MB, total RAM 32489 MB
xformers version: 0.0.20
Forcing FP16.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : cudaMallocAsync
VAE dtype: torch.float32
WARNING: Ignoring invalid distribution -pencv-python (e:\z_comfyui_2023-10-17\python_embeded\lib\site-packages)
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite: ffmpeg_bin_path is set to: \ffmpeg-6.0-essentials_build\bin
WARNING: Ignoring invalid distribution -pencv-python (e:\z_comfyui_2023-10-17\python_embeded\lib\site-packages)
WAS Node Suite: Finished. Loaded 193 nodes successfully.

"Believe you deserve it and the universe will serve it." - Unknown

Import times for custom nodes:
0.0 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\reference_only.py
0.0 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\ControlNet-LLLite-ComfyUI
0.0 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\sdxl_prompt_styler
0.0 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
0.0 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
0.0 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\tdxh_node_comfyui
0.1 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\comfyui_controlnet_aux
0.4 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\ComfyUI-Manager
1.8 seconds: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\was_node_suite_comfyui

Setting output directory to: E:\Z_comfyui_2023-10-17\output
Setting input directory to: E:\Z_comfyui_2023-10-17\input
Starting server

To see the GUI go to: http://0.0.0.0:8188/
FETCH DATA from: E:\Z_comfyui_2023-10-17\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt
style: Default (Slightly Cinematic)
text_positive: school gate
text_negative:
text_positive_styled: cinematic still school gate . emotional, harmonious, vignette, highly detailed, high budget, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy
text_negative_styled: anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured
model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loading new
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Reshaping decoder.mid.attn_1.k.weight for SD format
Reshaping decoder.mid.attn_1.proj_out.weight for SD format
Reshaping decoder.mid.attn_1.q.weight for SD format
Reshaping decoder.mid.attn_1.v.weight for SD format
Reshaping encoder.mid.attn_1.k.weight for SD format
Reshaping encoder.mid.attn_1.proj_out.weight for SD format
Reshaping encoder.mid.attn_1.q.weight for SD format
Reshaping encoder.mid.attn_1.v.weight for SD format
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loading new
loading new
loading in lowvram mode 3017.959864616394
100%|██████████████████████████| 25/25 [01:50<00:00, 4.42s/it]
loading new
loading in lowvram mode 880.3931360244751
100%|███████████████████████████| 5/5 [00:23<00:00, 4.66s/it]
Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.
!!! Exception during processing !!!
Traceback (most recent call last):
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\sd.py", line 203, in decode
pixel_samples[x:x+batch_number] = torch.clamp((self.first_stage_model.decode(samples) + 1.0) / 2.0, min=0.0, max=1.0).cpu().float()
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\models\autoencoder.py", line 94, in decode
dec = self.decoder(z)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 726, in forward
h = self.up[i_level].upsample(h)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 72, in forward
x = self.conv(x)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 4.27 GiB
Requested : 1012.00 MiB
Device limit : 6.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\Z_comfyui_2023-10-17\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\Z_comfyui_2023-10-17\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\Z_comfyui_2023-10-17\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\Z_comfyui_2023-10-17\ComfyUI\nodes.py", line 267, in decode
return (vae.decode(samples["samples"]), )
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\sd.py", line 206, in decode
pixel_samples = self.decode_tiled_(samples_in)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\sd.py", line 173, in decode_tiled_
comfy.utils.tiled_scale(samples, decode_fn, tile_x * 2, tile_y // 2, overlap, upscale_amount = 8, pbar = pbar) +
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\utils.py", line 395, in tiled_scale
ps = function(s_in).cpu()
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\sd.py", line 170, in <lambda>
decode_fn = lambda a: (self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)) + 1.0).float()
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\models\autoencoder.py", line 94, in decode
dec = self.decoder(z)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 726, in forward
h = self.up[i_level].upsample(h)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 72, in forward
x = self.conv(x)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "E:\Z_comfyui_2023-10-17\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 4.58 GiB
Requested : 256.00 MiB
Device limit : 6.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

Prompt executed in 203.19 seconds
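The tiled path visible in the traceback (`comfy.utils.tiled_scale`) splits the latent into overlapping tiles, runs the decode function on each, and blends the overlaps so the seams between tiles are less visible. A minimal 1-D sketch of the idea; the function names and the simple weight-averaging blend are my own illustration, not ComfyUI's exact implementation:

```python
def tiled_scale_1d(samples, fn, tile=8, overlap=2, upscale=8):
    """Apply fn to overlapping tiles of `samples`, averaging the overlaps.

    fn must map a list of n values to a list of n * upscale values,
    mimicking the VAE decoder's 8x spatial upscale.
    """
    n = len(samples)
    out = [0.0] * (n * upscale)
    weight = [0.0] * (n * upscale)
    step = tile - overlap  # tiles advance by less than their width
    for start in range(0, n, step):
        decoded = fn(samples[start:start + tile])
        for j, v in enumerate(decoded):
            out[start * upscale + j] += v      # accumulate contributions
            weight[start * upscale + j] += 1.0  # count overlapping tiles
        if start + tile >= n:  # last tile already covered the end
            break
    # Average wherever tiles overlapped.
    return [o / w for o, w in zip(out, weight)]

# Toy decode: each latent value becomes `upscale` identical "pixels".
decode_fn = lambda chunk: [float(x) for x in chunk for _ in range(8)]
pixels = tiled_scale_1d(list(range(10)), decode_fn)
```

Each tile still has to fit in VRAM on its own, which is why the traceback shows the tiled retry failing too: the individual `F.conv2d` activations inside one tile were already larger than the remaining free memory.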

@comfyanonymous
Owner

Try temporarily removing all your custom nodes and see if you still have the issue.

@youyegit
Author

I found this problem when I used Fooocus, which is based on ComfyUI, and Fooocus has just fixed it; maybe that fix can help: lllyasviel/Fooocus#700

@youyegit
Author

I also tested "sai_xl_sketch_128lora.safetensors" and "sai_xl_recolor_128lora.safetensors"; these two control LoRAs work well.

@comfyanonymous
Owner

Should be fixed now.

@youyegit
Author

It is fixed now!
Awesome!

@ZSSsama

ZSSsama commented Feb 24, 2024

I get this error when I'm using the XL model: EOFError: Ran out of input. What's going on?

[screenshot: photo_2024-02-25_04-05-34]
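`EOFError: Ran out of input` is a different problem from the out-of-memory errors above: it is what Python's pickle layer raises when asked to load an empty or truncated file, which in a Stable Diffusion setup often means a checkpoint or cache file that did not finish downloading or saving. The message can be reproduced with plain `pickle` (no torch required); the file name here is made up for the demonstration:

```python
import os
import pickle
import tempfile

# A zero-byte file stands in for a truncated or failed checkpoint download.
path = os.path.join(tempfile.mkdtemp(), "empty.ckpt")
open(path, "wb").close()

try:
    with open(path, "rb") as f:
        pickle.load(f)  # nothing to read -> "Ran out of input"
except EOFError as e:
    message = str(e)

print(message)  # -> Ran out of input
```

If you see this, re-downloading the model file (and checking its size against the source) is usually the first thing to try.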
