Can't run image prompts with PyraCanny or CPDS #1266

Closed
acvcleitao opened this issue Dec 7, 2023 · 3 comments
Labels: bug, question

Comments

@acvcleitao

Describe the problem
When running any sort of image prompt with either PyraCanny or CPDS, the image is not generated after reaching the final step. I'm supposedly above the minimum requirements, since I have an RTX 2060 with 6 GB of VRAM and 16 GB of RAM, so I don't see the problem here. Everything else works well: image prompts of other kinds, text prompts, the two combined, different models, LoRAs, etc. But the PyraCanny and CPDS features aren't working. I saw in another issue thread (#700) that this had been fixed, but it's not working in the current version (2.1.823).
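
A quick way to double-check how much VRAM PyTorch actually sees on this card (a diagnostic sketch run outside Fooocus, assuming device index 0 is the RTX 2060):

import torch

# torch.cuda.mem_get_info returns (free, total) VRAM in bytes for a device.
print(torch.cuda.get_device_name(0))
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 2**20:.0f} MiB / total: {total / 2**20:.0f} MiB")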

Full Console Log
Loading 2 new models
Total time: 239.33 seconds
[Fooocus Model Management] Moving model(s) has taken 4.42 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 2639534983667704351
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.17 seconds
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.85 seconds
Requested to load Resampler
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.81 seconds
Requested to load To_KV
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.83 seconds
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 6.03 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 2860.869140625
[Fooocus Model Management] Moving model(s) has taken 17.56 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [01:26<00:00, 2.90s/it]
Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.
Traceback (most recent call last):
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sd.py", line 225, in decode
pixel_samples[x:x+batch_number] = torch.clamp((self.first_stage_model.decode(samples).cpu().float() + 1.0) / 2.0, min=0.0, max=1.0)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\ldm\models\autoencoder.py", line 201, in decode
dec = self.decoder(dec, **decoder_kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\ldm\modules\diffusionmodules\model.py", line 638, in forward
h = self.up[i_level].upsample(h)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\ldm\modules\diffusionmodules\model.py", line 71, in forward
x = self.conv(x)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1008.00 MiB. GPU 0 has a total capacty of 6.00 GiB of which 218.00 MiB is free. Of the allocated memory 4.31 GiB is allocated by PyTorch, and 447.73 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Fooocus\Fooocus\modules\async_worker.py", line 803, in worker
handler(task)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\modules\async_worker.py", line 735, in handler
imgs = pipeline.process_diffusion(
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\modules\default_pipeline.py", line 378, in process_diffusion
decoded_latent = core.decode_vae(vae=target_vae, latent_image=sampled_latent, tiled=tiled)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\modules\core.py", line 169, in decode_vae
return opVAEDecode.decode(samples=latent_image, vae=vae)[0]
File "D:\Fooocus\Fooocus\backend\headless\nodes.py", line 267, in decode
return (vae.decode(samples["samples"]), )
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sd.py", line 228, in decode
pixel_samples = self.decode_tiled_(samples_in)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sd.py", line 195, in decode_tiled_
fcbh.utils.tiled_scale(samples, decode_fn, tile_x * 2, tile_y // 2, overlap, upscale_amount = 8, pbar = pbar) +
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\utils.py", line 398, in tiled_scale
ps = function(s_in).cpu()
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sd.py", line 192, in
decode_fn = lambda a: (self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)) + 1.0).float()
File "D:\Fooocus\Fooocus\backend\headless\fcbh\ldm\models\autoencoder.py", line 201, in decode
dec = self.decoder(dec, **decoder_kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\ldm\modules\diffusionmodules\model.py", line 638, in forward
h = self.up[i_level].upsample(h)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\ldm\modules\diffusionmodules\model.py", line 71, in forward
x = self.conv(x)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB. GPU 0 has a total capacty of 6.00 GiB of which 0 bytes is free. Of the allocated memory 4.63 GiB is allocated by PyTorch, and 511.61 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Total time: 119.81 seconds
[Fooocus Model Management] Moving model(s) has taken 3.13 seconds
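
The OOM message itself suggests setting max_split_size_mb to reduce fragmentation. A minimal sketch of one way to apply it, assuming the variable is set before torch initializes CUDA (the 128 MiB value here is an untested starting point, not a confirmed fix):

import os

# PYTORCH_CUDA_ALLOC_CONF must be in the environment before CUDA is initialized,
# so set it before importing torch (e.g. at the top of the launch script).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported only after the allocator option is in place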

@acvcleitao (Author)

This comment -> #700 (comment) confirms this was working in 2.1.703 with the same architecture, so something went wrong in between.

Repository owner deleted a comment from stubkan Dec 12, 2023
@mashb1t added the bug and question labels Dec 29, 2023
@mashb1t (Collaborator) commented Dec 29, 2023

@acvcleitao are you still able to reproduce the issue with the latest version of Fooocus?

@mashb1t (Collaborator) commented Jan 2, 2024

Closing as stale; feel free to reopen when providing new information.

@mashb1t closed this as not planned Jan 2, 2024