I am trying to use compel with batches of long prompts / negative prompts. Here is the code:
```python
import torch
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("/model/path")
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder, truncate_long_prompts=False)

prompts = [
    "best quality, masterpiece, highres, 1girl,china dress,Beautiful face,upon_body, tyndall effect,photorealistic, dark studio, rim lighting, two tone lighting,(high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, volumetric lighting, candid, Photograph, high resolution, 4k, 8k, Bokeh",
    "best quality, masterpiece, highres, 1girl,china dress,Beautiful face,upon_body, tyndall effect,photorealistic, dark studio, rim lighting, two tone lighting,(high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, volumetric lighting, candid, Photograph, high resolution, 4k, 8k, Bokeh",
    "best quality, masterpiece, highres, 1girl,china dress,Beautiful face,upon_body, tyndall effect,photorealistic, dark studio, rim lighting, two tone lighting,(high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, volumetric lighting, candid, Photograph, high resolution, 4k, 8k, Bokeh",
    "best quality, masterpiece, highres, 1girl,china dress,Beautiful face,upon_body, tyndall effect,photorealistic, dark studio, rim lighting, two tone lighting,(high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, volumetric lighting, candid, Photograph, high resolution, 4k, 8k, Bokeh",
]
negative_prompts = ["blur", "blur", "blur", "blur"]

prompt_embeds = compel(prompts)
negative_prompt_embeds = compel(negative_prompts)
[prompt_embeds, negative_prompt_embeds] = compel.pad_conditioning_tensors_to_same_length([prompt_embeds, negative_prompt_embeds])
```
The error returned by this last line is:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/site-packages/compel/compel.py", line 111, in pad_conditioning_tensors_to_same_length
    c = torch.cat([c, empty_z], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 4 but got size 1 for tensor number 1 in the list.
```
Ahh, it seems `pad_conditioning_tensors_to_same_length` doesn't support batched input like that. I'll take a look at fixing that for the next release. In the meantime, you could manually unpack the 4x77x768 tensors returned by `compel()` into lists of four 1x77x768 tensors ..?
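As a stopgap, the padding step can be done batch-aware by hand. The sketch below is a simplified illustration, not compel's implementation: compel pads with the embedding of an empty prompt, while this example pads with zeros, and the helper name `pad_batched_pair` is made up for this snippet. It pads two `(batch, seq_len, dim)` tensors to the same sequence length along `dim=1`, which is where the batched call currently fails:

```python
import torch

def pad_batched_pair(a: torch.Tensor, b: torch.Tensor):
    """Pad two batched conditioning tensors (batch, seq_len, dim) to the same
    sequence length along dim=1. Zero-padding is a simplification; compel pads
    with the empty-prompt embedding instead."""
    max_len = max(a.shape[1], b.shape[1])

    def pad(t: torch.Tensor) -> torch.Tensor:
        if t.shape[1] == max_len:
            return t
        # Append a zero block so the sequence dimension reaches max_len.
        filler = torch.zeros(
            t.shape[0], max_len - t.shape[1], t.shape[2],
            dtype=t.dtype, device=t.device,
        )
        return torch.cat([t, filler], dim=1)

    return pad(a), pad(b)

# e.g. a batch of 4 long prompts (two 77-token chunks -> 154 tokens)
# against 4 short negative prompts (one 77-token chunk):
prompt_embeds = torch.randn(4, 154, 768)
negative_prompt_embeds = torch.randn(4, 77, 768)
prompt_embeds, negative_prompt_embeds = pad_batched_pair(prompt_embeds, negative_prompt_embeds)
print(prompt_embeds.shape, negative_prompt_embeds.shape)  # both now (4, 154, 768)
```

Alternatively, following the suggestion above, you could split each batched tensor with `torch.chunk(t, t.shape[0], dim=0)` and run compel's own `pad_conditioning_tensors_to_same_length` on each per-sample 1x77x768 pair, then `torch.cat` the results back along `dim=0` so the padding values match what compel would produce.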