
I'm getting this error: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight'] #3161

Open
iPhail87 opened this issue Mar 28, 2024 · 30 comments


@iPhail87

I don't know if this affects anything, but when I generate I get this:
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Loading 1 new model
C:\Users\heruv\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)

@kakachiex2

I'm getting the same errors

@hakaserver

I have the same error when trying to merge models in Comfy, using ModelMergeSimple and CheckpointSave.

[...]
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.transformer.text_projection.weight']
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
[...]

The model seems to merge and save successfully; it can even generate images correctly in the same workflow. But when inspecting the resulting model with the stable-diffusion-webui-model-toolkit extension, it reports the unet and vae as broken and the clip as junk (it doesn't recognize it).
I noticed the model merge was broken because I could no longer use the resulting model to train a LECO with p1atdev's scripts:

Traceback (most recent call last):
  File "D:\SDTraining\LECO\train_lora.py", line 343, in <module>
    main(args)
  File "D:\SDTraining\LECO\train_lora.py", line 330, in main
    train(config, prompts)
  File "D:\SDTraining\LECO\train_lora.py", line 57, in train
    tokenizer, text_encoder, unet, noise_scheduler = model_util.load_models(
  File "D:\SDTraining\LECO\model_util.py", line 114, in load_models
    tokenizer, text_encoder, unet = load_checkpoint_model(
  File "D:\SDTraining\LECO\model_util.py", line 83, in load_checkpoint_model
    pipe = StableDiffusionPipeline.from_single_file(
  File "D:\SDTraining\LECO\venv\lib\site-packages\diffusers\loaders.py", line 1922, in from_single_file
    pipe = download_from_original_stable_diffusion_ckpt(
  File "D:\SDTraining\LECO\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 1534, in download_from_original_stable_diffusion_ckpt
    text_model = convert_ldm_clip_checkpoint(
  File "D:\SDTraining\LECO\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 802, in convert_ldm_clip_checkpoint
    set_module_tensor_to_device(text_model, param_name, "cpu", value=param)
  File "D:\SDTraining\LECO\venv\lib\site-packages\accelerate\utils\modeling.py", line 265, in set_module_tensor_to_device
    new_module = getattr(module, split)
  File "D:\SDTraining\LECO\venv\lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'CLIPTextModel' object has no attribute 'text_projection'
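That traceback is worth unpacking: diffusers' single-file loader resolves each checkpoint key by walking its dotted name with getattr on the target module, so a text_projection.* key fails hard when the target CLIPTextModel has no such submodule. A minimal, dependency-free sketch of that walk (all class and function names here are stand-ins, not the real diffusers/accelerate code; only the failing key name comes from the traceback):

```python
# Minimal stand-in for the getattr walk diffusers/accelerate perform when
# assigning checkpoint tensors to submodules. Class names are hypothetical.

class Embeddings:
    """Stand-in leaf module."""
    pass

class TextModelStub:
    """Mimics CLIPTextModel: has an embeddings submodule but NO text_projection."""
    def __init__(self):
        self.embeddings = Embeddings()

def set_tensor(module, param_name, value):
    """Walk a dotted key like 'a.b.c' with getattr, then assign the leaf."""
    *path, leaf = param_name.split(".")
    for part in path:
        module = getattr(module, part)  # raises AttributeError if the submodule is absent
    setattr(module, leaf, value)

model = TextModelStub()
set_tensor(model, "embeddings.weight", [1.0])  # works: submodule exists

try:
    set_tensor(model, "text_projection.weight", [1.0])
except AttributeError as err:
    print("conversion fails:", err)
```

This is why the merged checkpoint only breaks downstream: ComfyUI tolerates (and merely logs) the mismatch, while converters that strictly walk every key do not.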

@iPhail87
Author

iPhail87 commented Apr 1, 2024

It only really happened after updating Comfy; I did a fresh install and it was fine before updating. However, I did not try it before reinstalling the nodes, so I'm not sure whether a custom node is causing this.

@kakachiex2

Yes, all of these problems happened after updating ComfyUI. I still can't use model merge, and multi-LoRA gives bad generations and sometimes noised images.

@vallestutz

vallestutz commented Apr 4, 2024

I have the same problem: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']

@kuzman123

Same here. Everything worked before the update.

@harmonics12

I have the same issue...
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
I get a black (blank) image at the end of the render

@GHwion

GHwion commented Apr 7, 2024

> i dont know if this affects anything when i generate i get this clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight'] Loading 1 new model C:\Users\heruv\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.) out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)

I had the same errors using portable ComfyUI with the ipadapter-plus workflow. The issue is related to the two clipvision models for IPAdapter-Plus: both have the same name, "model.safetensor". I had put them in separate folders under another UI's /model/clip-vision, and that did not work. I had to put the two folders in the comfyui/model/clip-vision folder, and then the errors were gone. One of my folders is named something like "sdxl", the other "sd1.5".
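The folder arrangement described above can be sketched concretely. This is a hypothetical layout (the subfolder names and the `models/clip_vision` path are assumptions based on the comment, and the `touch` stands in for copying the real weight files):

```python
# Sketch of the described fix: two identically named clipvision models kept
# apart in separate subfolders under ComfyUI's clip_vision directory.
# Folder and file names are assumptions; touch() stands in for copying weights.
from pathlib import Path

root = Path("ComfyUI/models/clip_vision")
for sub in ("sd1.5", "sdxl"):
    (root / sub).mkdir(parents=True, exist_ok=True)
    (root / sub / "model.safetensors").touch()  # stand-in for the real file

print(sorted(p.as_posix() for p in root.rglob("*.safetensors")))
```

With this layout, each workflow can point at the correct model even though both files share a name.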

@firespace924

> I don't know if this affects anything, but when I generate I get this: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight'] Loading 1 new model C:\Users\heruv\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.) out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)

> I had the same errors using portable ComfyUI with the ipadapter-plus workflow. The issue is related to the two clipvision models for IPAdapter-Plus: both have the same name, "model.safetensor". I put them in separate folders under another UI's /model/clip-vision; that did not work. I had to put the two folders in the comfyui/model/clip-vision folder, and then the errors were gone. One of my folders is named something like "sdxl", the other "sd1.5".

I was not using IPAdapter. I had just started ComfyUI and generated an image with the default workflow, and got the following error: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']

@CHNtentes

I got the same message but the output seems fine.

@UnSand

UnSand commented Apr 9, 2024

I have seen this issue in a discussion group, and the result of their discussion was that some parameter names were mistakenly changed during the last update.

@Yangseok

I have the same issue.

got prompt
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
Requested to load SD1ClipModel
Loading 1 new model
0%| | 0/20 [00:00<?, ?it/s]
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f245b4ced87 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f245b47f75f in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f245b59f8a8 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: + 0x571d0 (0x7f245b5a41d0 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: + 0x58f14 (0x7f245b5a5f14 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: + 0x540210 (0x7f2459ecb210 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #6: + 0x649bf (0x7f245b4b39bf in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0x21b (0x7f245b4acc8b in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f245b4ace39 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #9: + 0xe84377 (0x7f2410dbe377 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #10: at::native::_flash_attention_forward(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, long, long, double, bool, bool, std::optional) + 0x149 (0x7f2412cbb209 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #11: + 0x308a47b (0x7f2412fc447b in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #12: + 0x309520b (0x7f2412fcf20b in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #13: + 0x24ccc60 (0x7f2444a1fc60 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #14: at::_ops::_flash_attention_forward::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, std::optional<at::Tensor> const&, c10::SymInt, c10::SymInt, double, bool, bool, std::optional) + 0x30d (0x7f2444a045fd in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #15: at::native::_scaled_dot_product_flash_attention_cuda(at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, bool, std::optional) + 0x255 (0x7f2412cd5355 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #16: + 0x3072da5 (0x7f2412facda5 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #17: + 0x3072e7b (0x7f2412face7b in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #18: at::_ops::_scaled_dot_product_flash_attention::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, bool, std::optional) + 0x1ed (0x7f2444bb2e5d in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #19: at::native::scaled_dot_product_attention(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional) + 0x1500 (0x7f24448ba1b0 in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #20: + 0x2ed3fca (0x7f2445426fca in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #21: at::_ops::scaled_dot_product_attention::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double, bool, std::optional) + 0x20d (0x7f2444d5b83d in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #22: + 0x6bb1be (0x7f245a0461be in /home/ubuntu/.local/lib/python3.10/site-packages/torch/lib/libtorch_python.so)

@dlandry

dlandry commented Apr 10, 2024

> i dont know if this affects anything when i generate i get this clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight'] Loading 1 new model C:\Users\heruv\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.) out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)

> I had asame error s using portable Comfyui with ipadapter-plus workflow. The issue is related to the two clipvision models for ipadapter-Plus. The two models have the same name "model.safetensor". I put them in seperate folders under another UI /model/clip-vision. Still does not work. I have to put the two folders in comfyui/model/clip-vision folder then the errors are gone. One of my folder name sdxl something. The other one sd1.5.

I am doing a clean install. After installing the PyTorch modules and then requirements.txt, putting my old models in the model directory, and doing a quick generation on the default workflow to make sure it was working, I installed Comfy Manager and started installing a bunch of my old custom_nodes one by one. I was looking for errors on install and restart, but I didn't pay attention to warnings or errors in the image-generation part until I had about 10 or 15 nodes installed, when I noticed the same issue in my output at the fresh load of any model (after the initial load everything is fine, no warning).

I looked at my old clip_vision directory, and it has the models in separate directories as well, so I copied them over (SDXL and SD1.5), refreshed, and restarted, but the same warning/error message is still there.

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|██████████| 20/20 [00:00<00:00, 22.98it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 2.76 seconds

Not sure if that means anything, but I just thought I'd mention that your solution didn't solve my problem.

I will be doing another clean install later this week or next week. (I'm testing out the Fedora Silverblue "immutable" system to see if it is viable, which it seems to be, except maybe for DaVinci Resolve Studio: it seems to work until you try to do anything, and then it can't access the memory it recognizes in its own configuration. My guess is the Studio version is messing up on the licensing: it can activate using their license server but can't fully initialize GPU access in a container environment.) I will try to pay better attention to when this error starts to show up.

@dlandry

dlandry commented Apr 11, 2024

> it only really happend after updating comfy, i did a fresh install and it was fine before updating i however did not try it before reinstalling the nodes so im not sure if it would be a custom node causing this

I can confirm that after a fresh install, with the only addition being models in the models/checkpoints directory so it can generate anything, the same message is still there.

Note: the 'clip missing' message only appears when I first load a model (first run, or changing to a new model); once the model is loaded, it stays silent until I use a new model.

@dlandry

dlandry commented Apr 11, 2024

DOH!!!! Okay, since we have the code and lots of documentation I took a quick look:

Short answer: it's a logging message, so it can be ignored unless you don't believe me (and why should you?) or if you do believe me and are still curious!

Longer answer: the Comfy docs say it can auto-configure the model when you load it, and this message seems to come from part of that process (load_checkpoint_guess_config()), so it is doing some kind of comparison between the model and its 'database' of models, probably with the (partial?) purpose of doing the auto-configuration.

I didn't look at the code in detail, but my guess is that this is either a notice of parameters that the loaded model doesn't implement, or has no specific definition set for in ComfyUI, OR the model has those parameters but ComfyUI doesn't handle them?

So it just logs the information and we can make of it what we will, although it would be nice to know what we should or can do with that info.

Does it mean those parameters are:

  • missing from the model and we can't do anything?
  • missing from ComfyUI support and we still can't do anything?
  • not set to any 'model default', and Comfy sets its own default?
  • not set by ComfyUI and uses the 'model default' if any.
  • something else?

If you have some coding skills, you can look in that class/function I mentioned (it's in sd.py) and follow along to see what you can learn about this that might be useful in your day-to-day understanding of how SD models work.
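For the curious, the gist of that check can be sketched without ComfyUI at all: the loader compares the keys the model implementation expects against the keys actually present in the checkpoint's state dict, and anything expected-but-absent gets logged. A rough, dependency-free sketch (the two key names are from this thread; the helper function and the other keys are hypothetical):

```python
# Hypothetical sketch of the "clip missing" report: compare the keys a
# text-encoder implementation expects against the keys a checkpoint provides.

def report_missing(expected_keys, state_dict):
    """Return (and log) expected keys that the checkpoint does not provide."""
    missing = sorted(k for k in expected_keys if k not in state_dict)
    if missing:
        print("clip missing:", missing)
    return missing

expected = {
    "clip_l.logit_scale",
    "clip_l.transformer.text_projection.weight",
    "clip_l.transformer.text_model.embeddings.token_embedding.weight",
}

# A checkpoint saved by a build that dropped the two keys on save:
checkpoint = {
    "clip_l.transformer.text_model.embeddings.token_embedding.weight": "...",
}

report_missing(expected, checkpoint)
# prints: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
```

Nothing in this comparison stops generation; it only records which weights the loader will have to default or ignore.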

I'm just doing a fresh install of my workstation and VSCode is not set up yet (and I don't remember all the git commands), so after I finish that I may just start looking at the code... seems to be a potential source of some serious 'understanding' that might come in handy later ;-)

@Torcelllo

I have this message too, but it does not stop successful output!

@Piscabo

Piscabo commented May 6, 2024

I also have this problem, but I still get an image. What can I do to fix it, please?
got prompt
[rgthree] Using rgthree's optimized recursive execution.
Prompt executor has been patched by Job Iterator!
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Requested to load SDXLClipModel

@wouterdebie

I'm getting this message too, but if I use the correct VAE things work as normal.

@audioscavenger

So, does no one know why or where it's coming from?

@audioscavenger

Here is the solution: edit comfy\supported_models.py and make the pop_keys list empty:

    def process_clip_state_dict_for_saving(self, state_dict):
        # pop_keys = ["clip_l.transformer.text_projection.weight", "clip_l.logit_scale"]
        pop_keys = []
        for p in pop_keys:
            if p in state_dict:
                state_dict.pop(p)

you're welcome.
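To see concretely what that function does to a merged checkpoint on save, here is the same pop logic run against a toy state dict (the values are made up; only the two key names come from this thread):

```python
# The save path drops these two CLIP keys from the state dict, which is why
# tools that later expect them (model-toolkit, diffusers converters) complain.
# Toy values; only the key names are real.

def process_clip_state_dict_for_saving(state_dict):
    pop_keys = ["clip_l.transformer.text_projection.weight", "clip_l.logit_scale"]
    for p in pop_keys:
        if p in state_dict:
            state_dict.pop(p)
    return state_dict

sd = {
    "clip_l.logit_scale": 4.6052,
    "clip_l.transformer.text_projection.weight": [[1.0]],
    "clip_l.transformer.text_model.embeddings.token_embedding.weight": [[0.0]],
}

process_clip_state_dict_for_saving(sd)
print(sorted(sd))  # the two popped keys are gone from what gets saved
```

Emptying pop_keys as suggested does keep the keys in the saved file, but since the behavior was later changed upstream, updating ComfyUI is cleaner than patching supported_models.py by hand.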

@wouterdebie

Seems this was fixed in 93e876a

@Piscabo

Piscabo commented May 10, 2024

> Seems this was fixed in 93e876a

Nope, it does not seem like it (as of today):

Loading: ComfyUI-Manager (V2.30)

ComfyUI Revision: 2167 [cd07340] | Released on '2024-05-08'

clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']

@ltdrdata
Collaborator

> Nope it does not seem like that: (today)

93e876a is the commit right after yours (cd07340).
You need to update again.

@aswordok

Same issue here, even though I was just running the default workflow. When I updated ComfyUI to the latest version, the problem was gone.

@yangquanbiubiu

> Yes, all of these problems happened after updating ComfyUI; I still can't use model merge, and multi-LoRA gives bad generations and sometimes noised images.

Me too. Did you fix it? I get noisy images when using Stable Cascade and FLUX-dev.

@LiJT

LiJT commented Aug 17, 2024

I have the same error:

Loading 1 new model
clip missing: ['text_projection.weight']
E:\ComfyUI-aki-v1.3\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load Flux
Loading 1 new model
Using xformers attention in VAE
Using xformers attention in VAE
Requested to load AutoencodingEngine
Loading 1 new model
Prompt executed in 25.29 seconds
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /api/20/envelope/

@txhno

txhno commented Oct 16, 2024

Getting the same error today: clip missing: ['text_projection.weight']

@GPU-server

> getting the same error today clip missing: ['text_projection.weight']

Did you find solution to this?

@robertobalestri

Same problem here, guys. I updated just now and something broke; I can't even generate.

@GPU-server

Maybe: #5260
