I'm getting this error: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight'] #3161
Comments
I'm getting the same errors.
I have the same error when trying to merge models on Comfy, using ModelMergeSimple and CheckpointSave.
The model seems to merge and save successfully; it can even generate images correctly in the same workflow. But when I inspect the resulting model with the stable-diffusion-webui-model-toolkit extension, it reports the unet and vae as broken and the clip as junk (it doesn't recognize it).
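For anyone wondering what a simple weighted merge actually does, here is a minimal sketch. This is my own illustration, not ComfyUI's ModelMergeSimple code; the function name and file paths are placeholders.

```python
# Conceptual sketch of a simple weighted merge of two checkpoints.
# Not ComfyUI's implementation; paths, function name and ratio are placeholders.
import torch
from safetensors.torch import load_file, save_file

def merge_simple(path_a: str, path_b: str, out_path: str, ratio: float = 0.5) -> None:
    a = load_file(path_a)
    b = load_file(path_b)
    merged = {}
    for key, tensor_a in a.items():
        tensor_b = b.get(key)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            # Blend matching tensors: `ratio` of model A, the rest from model B.
            merged[key] = ratio * tensor_a + (1.0 - ratio) * tensor_b
        else:
            # Keys present in only one model are copied through unchanged; keys
            # that neither file carries are one way "clip missing: [...]" can
            # show up when the saved checkpoint is inspected or reloaded.
            merged[key] = tensor_a
    save_file(merged, out_path)
```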
It only really happened after updating Comfy. I did a fresh install and it was fine before updating; however, I did not try it before reinstalling the nodes, so I'm not sure whether a custom node is causing this.
Yes, all of this started after updating ComfyUI. I still can't use model merge, and with multiple LoRAs I get bad generations and sometimes noisy images.
I have the same problem: clip missing:
Same here. Everything worked before the update.
I have the same issue...
I had the same errors using portable ComfyUI with the ipadapter-plus workflow. The issue is related to the two clip vision models for IPAdapter-Plus. The two models have the same name, "model.safetensor". I had put them in separate folders under another UI's /model/clip-vision; that still did not work. I had to put the two folders in the comfyui/model/clip-vision folder, and then the errors were gone. One of my folders is named sdxl something, the other sd1.5.
I wasn't using IPAdapter. I had just started ComfyUI and generated an image with the default workflow, and got the following error: clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
I got the same message but the output seems fine.
I have seen this issue in a discussion group, and the result of their discussion was that some parameter names were mistakenly changed during the last update.
I have the same issue.
got prompt
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
I am doing a clean install. After installing the PyTorch modules and then requirements.txt, I put my old models in the model directory and did a quick generation to make sure the default workflow worked. I then installed Comfy Manager and started installing a bunch of my old custom_nodes one by one. I was looking for errors on install and restart, but I didn't pay attention to warnings or errors during image generation until I had about 10 or 15 nodes installed, when I noticed the same issue in my output on the fresh load of any model (after the initial load everything is fine, no warning). I looked at my old clip_vision directory and it has the models in separate directories as well, so I copied them over (SDXL and SD1.5), refreshed, and restarted, but the same warning/error message is still there.
Not sure if that means anything, but I just thought I'd mention that your solution didn't solve my problem. I will be doing another clean install later this week or next week (I'm testing out the Fedora Silverblue "immutable" system to see if it is viable, which it seems to be, except maybe for DaVinci Resolve Studio: it works until you try to do anything, then it can't seem to access the memory it recognizes in its own configuration; my guess is that the Studio version is messing up on the licensing, able to activate using their license server but unable to fully initialize GPU access while in a container environment ...) and will try to pay better attention to when this error starts to show up.
I can confirm that after a fresh install, with the only addition being models in the models/checkpoints directory so it can generate anything, the same message is still there. Note: the 'clip missing' message only appears when I first load a model (first run, or changing to a new model); once the model is loaded it is silent until I use a new model.
DOH!!!! Okay, since we have the code and lots of documentation, I took a quick look. Short answer: it's a logging message, so it can be ignored, unless you don't believe me (and why should you?) or you do believe me and are still curious! Longer answer: the Comfy docs say it can auto-configure the model when you load it, and this message seems to come from part of that process (load_checkpoint_guess_config()), so it is doing some kind of comparison between the model and its 'database' of models, probably with the (partial?) purpose of doing the auto config. I didn't look at the code in detail, but my guess is that this is either a notice of the parameters that the loaded model doesn't implement or has no specific definition for in ComfyUI, OR the model has those parameters but ComfyUI doesn't handle them. So it just logs the information and we can make of it what we will, although it would be nice to know what we should/can do with that info. Does it mean those parameters are:
If you have some coding skills you can look in that class/function I mentioned (it's in sd.py) and follow along to see what you can learn about this that might be useful in your day-to-day understanding of how SD models work. I'm just doing a fresh install of my workstation and VSCode is not set up yet (and I don't remember all the git commands), so after I finish that I may just start looking at the code ... it seems to be a potential source of some serious 'understanding' that might come in handy later ;-)
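To make that concrete, here is a rough sketch of what that kind of key comparison looks like. This is my own simplification for illustration, not the actual code in sd.py, and the function name is made up.

```python
# Simplified illustration of how a loader can report CLIP keys that a
# checkpoint does not provide. Not the actual ComfyUI code.
def report_missing_clip_keys(checkpoint_keys, expected_keys):
    missing = sorted(k for k in expected_keys if k not in checkpoint_keys)
    if missing:
        print("clip missing:", missing)
    return missing

# Example: a checkpoint lacking these two entries produces the familiar log line.
expected = {
    "clip_l.logit_scale",
    "clip_l.transformer.text_projection.weight",
    "clip_l.transformer.token_embedding.weight",
}
present = {"clip_l.transformer.token_embedding.weight"}
report_missing_clip_keys(present, expected)
```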
I have this message, but it does not stop successful output!
I also have this problem, but I still get an image. What can I do to fix it, please?
I'm getting this message too, but if I use the correct VAE things work as normal.
So, no one knows why or where it's coming from?
here is the solution: edit
you're welcome. |
Seems this was fixed in 93e876a
Same question here. Even though I was just running the default workflow, once I updated ComfyUI to the latest the problem was gone.
Me too, did you fix it? I get images with noise while using Stable Cascade and FLUX-dev.
I have the same error.
Getting the same error today.
Did you find a solution to this?
Same problem here, guys. I updated just now and something broke; I can't even generate.
Maybe: #5260
I don't know if this affects anything, but when I generate I get this:
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Loading 1 new model
C:\Users\heruv\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
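That second warning appears to be unrelated to the clip message: it just means this PyTorch build has no flash-attention kernel for scaled_dot_product_attention, and PyTorch falls back to another backend. As a quick sanity check (assuming PyTorch 2.x), you can query which SDPA backends are enabled:

```python
# Quick check of which scaled_dot_product_attention backends this PyTorch
# build enables (PyTorch 2.x assumed). The warning above only means the
# flash backend is unavailable, not that generation itself is broken.
import torch

q = k = v = torch.randn(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
out = torch.nn.functional.scaled_dot_product_attention(
    q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False
)

print("flash SDP enabled:        ", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient SDP enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math SDP enabled:         ", torch.backends.cuda.math_sdp_enabled())
```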