
Add support for the unclip (Variations) models, unclip-h and unclip-l #8958

Merged (2 commits) Mar 28, 2023

Conversation

MrCheeze (Contributor)

Describe what this pull request is trying to achieve.

This adds support for stable-diffusion-2-1-unclip checkpoints that are used for generating image variations. (See also.)

It works in the same way as the current support for the SD2.0 depth model, in that you run it from the img2img tab, it extracts information from the input image (in this case, CLIP or OpenCLIP embeddings), and feeds those into the model in addition to the text prompt. Normally you would do this with denoising strength set to 1.0, since you don't actually want the normal img2img behaviour to have any influence on the generated image.
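The difference in how the extra conditioning is wired up can be sketched roughly as follows (names and the dict layout are illustrative assumptions based on the description above, not the webui's actual code; in the real implementation the conditionings are tensors):

```python
# Rough sketch of how the extra image conditioning differs between the
# unclip models and the SD2.0 depth model. Illustrative only.

def build_conditioning(text_cond, image_cond, model_kind):
    """Assemble the conditioning dict the diffusion model expects."""
    cond = {"c_crossattn": [text_cond]}  # text prompt conditioning (always present)
    if model_kind == "unclip":
        # unclip: the CLIP/OpenCLIP image embedding is passed as c_adm
        cond["c_adm"] = image_cond
    elif model_kind == "depth":
        # depth model: the depth map is concatenated onto the latent input
        cond["c_concat"] = [image_cond]
    return cond

print(build_conditioning("txt_emb", "img_emb", "unclip"))
```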

One thing I did not implement is any way to use this functionality while starting from random noise, as txt2img does - which would probably generate more varied variations. This would be good future work.

Additional notes and description of your changes

Key changes:

  • Autoloading the config. This is straightforward, except for one complication: v2-1-stable-unclip-l-inference.yaml requires and hardcodes the path of a supplementary file at checkpoints/karlo_models/ViT-L-14_stats.th. I opted to hotpatch the config to point at models/karlo/ViT-L-14_stats.th instead, and to check the file into the repo itself, since it's very small - thoughts on this approach?
  • unclip_image_conditioning() - the key feature of this change. Adapted from Stability's script.
  • Changes all over sd_samplers_compvis and sd_samplers_kdiffusion to account for the fact that these models expect the conditioning to be provided differently (e.g. it needs to be in c_adm instead of c_concat). Not the cleanest code, but it works.
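The config hotpatch from the first bullet could look roughly like this. This is a sketch over a plain nested dict; the real config is YAML loaded via OmegaConf, and the exact key path under noise_aug_config is an assumption about where clip_stats_path sits:

```python
import os

def hotpatch_karlo_path(config, models_dir="models/karlo"):
    """Rewrite the hardcoded ViT-L-14_stats.th path to live under models/karlo.

    `config` is the parsed YAML as a nested dict. The key path below is an
    illustrative assumption, not the exact layout of the real config.
    """
    aug = config["model"]["params"]["noise_aug_config"]["params"]
    old_path = aug.get("clip_stats_path", "")
    if "karlo_models" in old_path:
        # keep the filename, redirect the directory
        aug["clip_stats_path"] = os.path.join(models_dir, os.path.basename(old_path))
    return config

cfg = {"model": {"params": {"noise_aug_config": {"params": {
    "clip_stats_path": "checkpoints/karlo_models/ViT-L-14_stats.th"}}}}}
patched = hotpatch_karlo_path(cfg)
print(patched["model"]["params"]["noise_aug_config"]["params"]["clip_stats_path"])
```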

Environment this was tested in

Windows, NVIDIA GTX 1660 6GB

Screenshots or videos of your changes

[screenshot]

@MrCheeze MrCheeze changed the title Add support for the Variations models (unclip-h and unclip-l) Add support for the unclip (Variations) models, unclip-h and unclip-l Mar 26, 2023
@hithereai (Collaborator) commented Mar 26, 2023

Getting this error when xformers is not installed (using SDP attention):

File "D:\D-SD\AUTO\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 258, in forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
NameError: name 'xformers' is not defined

@hithereai (Collaborator)

Working now without xformers, thank you MrCheeze.

@idlebg commented Mar 28, 2023

Confirming it's also working on my model trained on the SD unclip diffusers.

[screenshots]

And here is the original result from SD
https://clipdrop.co/stable-diffusion-reimagine

[screenshot]

@AUTOMATIC1111 AUTOMATIC1111 merged commit f1db987 into AUTOMATIC1111:master Mar 28, 2023
brkirch pushed a commit to brkirch/stable-diffusion-webui that referenced this pull request Apr 5, 2023
Add support for the unclip (Variations) models, unclip-h and unclip-l
Rewinged added a commit to Rewinged/stable-diffusion-webui that referenced this pull request Apr 11, 2023
…s-model"

This reverts commit f1db987, reversing changes made to e49c479.
Rewinged added a commit to Rewinged/stable-diffusion-webui that referenced this pull request Apr 11, 2023
Rewinged added a commit to Rewinged/stable-diffusion-webui that referenced this pull request May 10, 2023
…Cheeze/variations-model"""

This reverts commit 242d8f1.
Rewinged added a commit to Rewinged/stable-diffusion-webui that referenced this pull request May 10, 2023
@tiwowtimothy

I kept getting 'NoneType is not callable' when trying to use this.

[screenshot]

@amalnathm7 commented Dec 18, 2023

> I kept getting 'NoneType is not callable' trying to use this

I was getting the same issue until I removed the --medvram flag; now it works. Looking at the source code, there is a place where the model is supposed to call an embedder method, but no embedder method is assigned to the sd_model object when the model is restricted to low VRAM usage.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Optimizations
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-And-Settings
The above links might help give you an idea of why this is happening, or point to a better solution.
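A defensive check for this failure mode might look like the following. This is a sketch: the attribute name `embedder` matches the description above, but the surrounding function and its signature are hypothetical, not the webui's actual code:

```python
def get_unclip_image_conditioning(sd_model, image):
    """Call the model's image embedder, failing loudly if it was never attached.

    Under --medvram the embedder may never be assigned to sd_model, which
    otherwise surfaces as a bare "'NoneType' object is not callable".
    """
    embedder = getattr(sd_model, "embedder", None)
    if embedder is None:
        raise RuntimeError(
            "unclip image embedder is not loaded; try launching without --medvram"
        )
    return embedder(image)
```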
