
[Bug]: HRfix tends to fail when used with inpainting models #6281

Open
zero01101 opened this issue Jan 3, 2023 · 2 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

zero01101 commented Jan 3, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

When using an inpainting model with txt2img, the new HRfix "Latent" and "Latent (nearest)" upscalers crash with a RuntimeError:

Traceback (most recent call last):
  File "D:\storage\stable-diffusion-webui\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "D:\storage\stable-diffusion-webui\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "E:\storage\stable-diffusion-webui\modules\txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "E:\storage\stable-diffusion-webui\modules\processing.py", line 471, in process_images
    res = process_images_inner(p)
  File "E:\storage\stable-diffusion-webui\modules\processing.py", line 576, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "E:\storage\stable-diffusion-webui\modules\processing.py", line 759, in sample
    samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.steps, image_conditioning=image_conditioning)
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 503, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args={
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 439, in launch_sampling
    return func()
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 503, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args={
  File "D:\storage\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\storage\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 337, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "D:\storage\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\storage\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1331, in forward
    xc = torch.cat([x] + c_concat, dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 128 but got size 64 for tensor number 1 in the list.
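
For context, a minimal sketch of what appears to go wrong (not webui code; the shapes are illustrative assumptions): the inpainting UNet concatenates its image conditioning (c_concat, the masked-image latent plus mask) onto the latent along the channel dimension, so once HRfix upscales the latent to the second-pass size while the conditioning stays at the first-pass size, the torch.cat in ddpm.py fails exactly as above.

import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 64, 64)                               # first-pass latent (512 px / 8 = 64)
x_hires = F.interpolate(x, scale_factor=2, mode="nearest")  # "Latent (nearest)" upscale -> 1x4x128x128
image_cond = torch.randn(1, 5, 64, 64)                      # inpainting c_concat (masked image + mask), still 64x64

try:
    torch.cat([x_hires, image_cond], dim=1)                 # ddpm.py: xc = torch.cat([x] + c_concat, dim=1)
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 1. Expected size 128 but got size 64 ...

# One plausible workaround (an assumption, not a tested webui patch): resize the
# conditioning to the upscaled latent's spatial size before concatenating.
image_cond_hires = F.interpolate(image_cond, size=x_hires.shape[-2:], mode="nearest")
xc = torch.cat([x_hires, image_cond_hires], dim=1)          # 1x9x128x128, the channel count the inpainting UNet expects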

Steps to reproduce the problem

  1. Choose an inpainting model (RunwayML 1.5, StabilityAI 2.0, etc.)
  2. Go to the txt2img tab
  3. Enable HRfix and select the Latent or Latent (nearest) upscaler
  4. Receive the traceback above

What should have happened?

It probably shouldn't have crashed and should instead have produced an upscaled image; inpainting models can generate new images from nothing too, y'know ;)

Commit where the problem happens

ef27a18

What platforms do you use to access the UI?

Windows, MacOS

What browsers do you use to access the UI?

Mozilla Firefox, Google Chrome

Command Line Arguments

--ckpt-dir C:\stable diffusion models --gfpgan-model GFPGANv1.4.pth --listen --autolaunch  --xformers --api --enable-insecure-extension-access --cors-allow-origins=http://127.0.0.1:3456,http://localhost:3456,https://zero01101.github.io,http://localhost:7860,http://192.168.120.20:3456

Additional information, context and logs

The final text of the error message was previously mentioned in #4446 (with different tensor sizes), but that issue was specifically about non-square aspect ratios rather than inpainting models.

@zero01101 zero01101 added the bug-report Report of a bug, yet to be confirmed label Jan 3, 2023
@zero01101 zero01101 changed the title [Bug]: New Latent upscalers fail against inpainting models [Bug]: New Latent HRfix upscalers fail when using inpainting models Jan 4, 2023

zero01101 commented Jan 6, 2023

so hey, turns out non-square output aspect ratios fail against inpainting models too

seems HRfix vs an inpainting model can only really cope with 100% square, integer-scaled outputs currently?


Traceback (most recent call last):
  File "D:\storage\stable-diffusion-webui\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "D:\storage\stable-diffusion-webui\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "E:\storage\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
    processed = process_images(p)
  File "E:\storage\stable-diffusion-webui\modules\processing.py", line 479, in process_images
    res = process_images_inner(p)
  File "E:\storage\stable-diffusion-webui\modules\processing.py", line 608, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "E:\storage\stable-diffusion-webui\modules\processing.py", line 845, in sample
    samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 271, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.sampler.decode(x1, conditioning, t_enc, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning))
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 176, in launch_sampling
    return func()
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 271, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.sampler.decode(x1, conditioning, t_enc, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning))
  File "D:\storage\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 332, in decode
    x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
  File "E:\storage\stable-diffusion-webui\modules\sd_samplers.py", line 220, in p_sample_ddim_hook
    res = self.orig_p_sample_ddim(x_dec, cond, ts, unconditional_conditioning=unconditional_conditioning, *args, **kwargs)
  File "D:\storage\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 211, in p_sample_ddim
    model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "E:\storage\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\storage\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\storage\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1331, in forward
    xc = torch.cat([x] + c_concat, dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 127 but got size 64 for tensor number 1 in the list.
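
For what it's worth, the 127 here presumably just reflects the latent arithmetic (an inference from the error text; the report doesn't give the exact resolutions): latent sizes are the pixel dimensions divided by 8, so a 512 px first pass leaves a 64-row conditioning while a hires target of, say, 1016 px gives a 127-row latent, and the concatenation fails the same way as in the integer-scaled case above.

# Hedged arithmetic sketch; 1016 px is an assumed example, not the reporter's actual setting.
first_pass_px = 512
hires_px = 1016
print(first_pass_px // 8, hires_px // 8)   # 64 127 -> "Expected size 127 but got size 64"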

@zero01101 zero01101 changed the title [Bug]: New Latent HRfix upscalers fail when using inpainting models [Bug]: HRfix tends to fail when used with inpainting models Jan 6, 2023

levyfan commented Feb 21, 2023

I have the same problem. Any fix?
