This repository has been archived by the owner on Dec 25, 2023. It is now read-only.

supporting SDXL? #1

Open
etwoods opened this issue Aug 18, 2023 · 13 comments

Comments

@etwoods

etwoods commented Aug 18, 2023

Really awesome work, thanks guys for that!

Do you have a plan for supporting SDXL?

@etwoods
Author

etwoods commented Aug 18, 2023

https://github.com/tencent-ailab/IP-Adapter
IP-Adapter has released a version for SDXL.

@laksjdjf
Owner

I'm currently working on it.

@etwoods
Author

etwoods commented Aug 18, 2023

I'm currently working on it.

Thanks a lot!

@killporter

Waiting for it! Hopefully it won't need a lot of changes.

@laksjdjf
Owner

I think I've done it. ^^

@killporter

I think I've done it. ^^

Great! I'll test it out immediately.

@killporter

killporter commented Aug 18, 2023

I think I've done it. ^^

Any idea about this issue?
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
loaded straight to GPU
loading new
loading new
loading new
loading new
0% 0/30 [00:00<?, ?it/s]/usr/local/lib/python3.10/dist-packages/torchsde/_brownian/brownian_interval.py:594: UserWarning: Should have tb<=t1 but got tb=14.614643096923828 and t1=14.614643.
warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
67% 20/30 [00:17<00:08, 1.18it/s]loading new
67% 20/30 [00:21<00:10, 1.07s/it]
!!! Exception during processing !!!
Traceback (most recent call last):
File "/content/drive/MyDrive/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/nodes.py", line 49, in sample
return (core.ksampler_with_refiner(model, positive, negative, refiner_model, refiner_positive, refiner_negative, latent_image, noise_seed, steps, refiner_switch_step, cfg, sampler_name, scheduler, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise), )
File "/usr/local/lib/python3.10/dist-packages/torch/utils/contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/core.py", line 243, in ksampler_with_refiner
samples = sampler.sample(noise, positive_copy, negative_copy, refiner_positive=refiner_positive_copy,
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/samplers_advanced.py", line 236, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/sampling.py", line 615, in sample_dpmpp_2m_sde
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/samplers_advanced.py", line 223, in
k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/samplers_advanced.py", line 166, in callback
refiner_switch()
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/samplers_advanced.py", line 155, in refiner_switch
comfy.model_management.load_model_gpu(self.refiner_model_patcher)
File "/content/drive/MyDrive/ComfyUI/comfy/model_management.py", line 374, in load_model_gpu
return load_models_gpu([model])
File "/content/drive/MyDrive/ComfyUI/comfy/model_management.py", line 368, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "/content/drive/MyDrive/ComfyUI/comfy/model_management.py", line 259, in model_load
raise e
File "/content/drive/MyDrive/ComfyUI/comfy/model_management.py", line 255, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
File "/content/drive/MyDrive/ComfyUI/comfy/sd.py", line 404, in patch_model
self.model.to(device_to)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1145, in to
return self._apply(convert)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 3 more times]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 14.53 GiB
Requested : 81.00 MiB
Device limit : 14.75 GiB
Free (according to CUDA): 4.81 MiB
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

Prompt executed in 157.83 seconds

Usually I have no memory issues with 24 GB of RAM.
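
The traceback fails at the point where the refiner model is moved onto the GPU, so logging the VRAM headroom just before that switch shows how close the T4 is to its limit. A minimal sketch using only standard torch.cuda calls; the helper name and where you call it are illustrative, not part of ComfyUI or the Fooocus nodes:

```python
# Minimal sketch, assuming a single CUDA device (device 0, the Colab T4 here).
# report_vram is an illustrative helper, not a ComfyUI or Fooocus API.
import torch

def report_vram(device: int = 0) -> None:
    total = torch.cuda.get_device_properties(device).total_memory
    allocated = torch.cuda.memory_allocated(device)
    reserved = torch.cuda.memory_reserved(device)
    print(f"total     : {total / 2**30:6.2f} GiB")
    print(f"allocated : {allocated / 2**30:6.2f} GiB")
    print(f"reserved  : {reserved / 2**30:6.2f} GiB")
    print(f"headroom  : {(total - reserved) / 2**30:6.2f} GiB")

# Called right before the refiner switch, this shows whether the base model
# still occupies the card; if so, offloading it to CPU and calling
# torch.cuda.empty_cache() before loading the refiner is a workaround to try.
```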

@killporter

killporter commented Aug 18, 2023

With Searge again, which usually works on a T4 with extra RAM:
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
loading new
100% 24/24 [00:20<00:00, 1.15it/s]
loading new
!!! Exception during processing !!!
Traceback (most recent call last):
File "/content/drive/MyDrive/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/SeargeSDXL/modules/sampling.py", line 199, in sample
return nodes.common_ksampler(refiner_model, noise_seed + noise_offset, steps, cfg, sampler_name, scheduler, refiner_positive, refiner_negative, base_result[0], denoise=denoise * refiner_strength, disable_noise=False, start_step=base_steps, last_step=steps, force_full_denoise=True)
File "/content/drive/MyDrive/ComfyUI/nodes.py", line 1176, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/content/drive/MyDrive/ComfyUI/comfy/sample.py", line 81, in sample
comfy.model_management.load_models_gpu([model] + models, comfy.model_management.batch_area_memory(noise.shape[0] * noise.shape[2] * noise.shape[3]))
File "/content/drive/MyDrive/ComfyUI/comfy/model_management.py", line 368, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "/content/drive/MyDrive/ComfyUI/comfy/model_management.py", line 259, in model_load
raise e
File "/content/drive/MyDrive/ComfyUI/comfy/model_management.py", line 255, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
File "/content/drive/MyDrive/ComfyUI/comfy/sd.py", line 404, in patch_model
self.model.to(device_to)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1145, in to
return self._apply(convert)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 3 more times]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 14.52 GiB
Requested : 81.00 MiB
Device limit : 14.75 GiB
Free (according to CUDA): 4.81 MiB
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

workflow (10).txt

@laksjdjf
Owner

Since clip_vision_g is quite large, could that be the cause?
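
One way to check is to compare the checkpoint's in-memory footprint against the headroom left on the card. A rough sketch, assuming the weights are stored as a flat safetensors file; the file name is a placeholder:

```python
# Rough sketch: estimate a checkpoint's memory footprint by summing the sizes
# of its tensors. The file name below is a placeholder for the actual path.
from safetensors.torch import load_file

def checkpoint_size_gib(path: str) -> float:
    state_dict = load_file(path, device="cpu")  # load on CPU only
    total_bytes = sum(t.numel() * t.element_size() for t in state_dict.values())
    return total_bytes / 2**30

print(f"{checkpoint_size_gib('clip_vision_g.safetensors'):.2f} GiB")
```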

@killporter

Since clip_vision_g is quite large, could that be the cause?

Could be, but I feel a little stuck on this.

@Linaghan34

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
Has anyone else run into this problem? The sizes don't match; I tried both the 1.5 and the SDXL models and neither works.

@laksjdjf
Owner

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]). Has anyone else run into this problem? The sizes don't match; I tried both the 1.5 and the SDXL models and neither works.

Did you use the models for SDXL?
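
For reference, the two widths in the error above are the telling detail: 1280 matches ViT-H image embeddings (used by the SD1.5 adapters), while 1664 matches clip_vision_g (ViT-bigG), which the SDXL adapters expect. A small sketch for checking which encoder an adapter checkpoint was built for; the file name and the flat safetensors layout are assumptions, not a documented format:

```python
# Sketch: inspect proj_in.weight in an IP-Adapter checkpoint to see which
# image-embedding width it expects. The file name is a placeholder; the key
# search assumes a flat safetensors state dict.
from safetensors.torch import load_file

state_dict = load_file("ip-adapter_sdxl.safetensors", device="cpu")
for key in (k for k in state_dict if k.endswith("proj_in.weight")):
    width = state_dict[key].shape[1]
    # 1280 -> ViT-H embeddings (SD1.5 adapters); 1664 -> clip_vision_g / ViT-bigG (SDXL adapters)
    print(f"{key}: expects {width}-dim image embeddings")
```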

@Linaghan34

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]). Has anyone else run into this problem? The sizes don't match; I tried both the 1.5 and the SDXL models and neither works.

Did you use the models for SDXL?

It turned out I had updated an unstable ControlNet extension, which caused this; after fixing that, it worked normally again. Thanks for the reply!
