Upscale or Variation section doesn't work #2231
Comments
Which Nvidia driver version are you using?
Right now I have version 551.23, which is the latest and "game ready".
Double-checking: which CUDA version do you use, 12.1? Seems like your Titan X would support DSA + CUDA 12, but your version doesn't support DSA. Can you please also check the output of
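The driver/CUDA check being discussed can be sketched in code. This is a minimal illustration (not part of the thread): it parses the "CUDA Version" field out of an `nvidia-smi` banner line; the sample banner string below is hypothetical, shaped like what a 551.23 driver prints.

```python
import re

def parse_cuda_version(smi_banner: str) -> str:
    """Pull the 'CUDA Version' field out of an nvidia-smi banner line."""
    match = re.search(r"CUDA Version:\s*([\d.]+)", smi_banner)
    return match.group(1) if match else "unknown"

# Hypothetical banner line in the shape nvidia-smi prints for driver 551.23:
banner = "| NVIDIA-SMI 551.23    Driver Version: 551.23    CUDA Version: 12.4 |"
print(parse_cuda_version(banner))  # → 12.4
```

In practice one would just read the value off `nvidia-smi` directly; the point is that the driver's reported CUDA version caps what the installed PyTorch build can use.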
I just did a For some reason
@Meddi1980 It's hard to debug when not being able to reproduce it on a 30-series card. Maybe somebody with a 900-series GPU can relate and reproduce this issue of yours.
@mashb1t After a long trial-and-error session, I found out that to run
EDIT: In Fooocus I'm still getting the exact same error when trying anything in the Upscale or Variation section. And
@Meddi1980 did you manage to fix it?
@mashb1t No, unfortunately. It seems to be a more widespread problem, and it's not just my GPU. It's also not just in the Upscale or Variation section, but also after an ongoing render is canceled or skipped.
@Meddi1980 Damn, wish I could help you somehow.
Read Troubleshoot
[x] I admit that I have read the Troubleshoot before making this issue.
None of the options in the Upscale or Variation section work. It gives me this error:
All the other sections (Image Prompt, Inpaint or Outpaint, and Describe) do work.
Things that I've tried so far:
Any help would be appreciated.
Full Console Log
C:\Users\[USER NAME]\Fooocus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset realistic
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--preset', 'realistic']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Loaded preset: C:\Users\[USER NAME]\Fooocus\Fooocus\presets\realistic.json
Running on local URL: http://127.0.0.1:7865
To create a public link, set `share=True` in `launch()`.
Total VRAM 12288 MB, total RAM 65277 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX TITAN X : native
VAE dtype: torch.float32
Using xformers cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: C:\Users\[USER NAME]\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Users\[USER NAME]\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors].
Loaded LoRA [C:\Users\[USER NAME]\Fooocus\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [C:\Users\[USER NAME]\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [C:\Users\[USER NAME]\Fooocus\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [C:\Users\[USER NAME]\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.71 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 1575247925491012725
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
[Vary] Image is resized because it is too big.
[Fooocus] VAE encoding ...
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.25 seconds
Traceback (most recent call last):
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\modules\async_worker.py", line 823, in worker
    handler(task)
  File "C:\Users\[USER NAME]\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\[USER NAME]\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\modules\async_worker.py", line 472, in handler
    initial_latent = core.encode_vae(vae=candidate_vae, pixels=initial_pixels)
  File "C:\Users\[USER NAME]\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\[USER NAME]\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\modules\core.py", line 176, in encode_vae
    return opVAEEncode.encode(pixels=pixels, vae=vae)[0]
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\ldm_patched\contrib\external.py", line 306, in encode
    t = vae.encode(pixels[:,:,:,:3])
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\ldm_patched\modules\sd.py", line 265, in encode
    samples[x:x+batch_number] = self.first_stage_model.encode(pixels_in).to(self.output_device).float()
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\ldm_patched\ldm\models\autoencoder.py", line 181, in encode
    z = self.encoder(x)
  File "C:\Users\[USER NAME]\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\ldm_patched\ldm\modules\diffusionmodules\model.py", line 536, in forward
    h = nonlinearity(h)
  File "C:\Users\[USER NAME]\Fooocus\Fooocus\ldm_patched\ldm\modules\diffusionmodules\model.py", line 40, in nonlinearity
    return x*torch.sigmoid(x)
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Total time: 14.89 seconds
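The error message's own debugging hint (`CUDA_LAUNCH_BLOCKING=1`) makes kernel launches synchronous, so the traceback points at the real failing call rather than a later API call. A minimal sketch of how one might apply it (the variable must be set before any CUDA context exists, i.e. before torch is imported; on Windows cmd the equivalent is `set CUDA_LAUNCH_BLOCKING=1` before the launch command shown in the log):

```python
import os

# Set the flag before torch is imported anywhere in the process.
# This slows inference down and is for debugging only.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# From here one would import torch / launch Fooocus as usual; with the
# variable set, the next run should report the timed-out kernel at its
# actual call site.
print(os.environ["CUDA_LAUNCH_BLOCKING"])  # → 1
```

This does not fix the timeout itself, but it narrows down which kernel is being terminated.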