Fooocus keeps shutting down mid-startup... #1657

Closed
Skratchyy opened this issue Dec 30, 2023 · 5 comments
Labels: bug (Something isn't working), question (Further information is requested)

Comments

Skratchyy commented Dec 30, 2023

Read Troubleshoot

[x] I admit that I have read the Troubleshoot before making this issue.

Describe the problem
I don't know what to do to fix this. I have followed all the troubleshooting steps, including setting up the system swap, but it still crashes.

Full Console Log

C:\Users\gabic\Downloads\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Fast-forward merge
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.856
Running on local URL: http://127.0.0.1:7865

C:\Users\gabic\Downloads\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Fast-forward merge
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.856
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 8017 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 Ti : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE

C:\Users\gabic\Downloads\Fooocus_win64_2-1-831>pause
Press any key to continue . . .

mashb1t (Collaborator) commented Dec 30, 2023

Thank you for the terminal log. Please double-check your swap configuration.
While Fooocus is running, can you check in Task Manager whether your RAM fills up and whether your drive then takes over (active read/write activity)?
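
As a side note, the page-file numbers can also be pulled from a Command Prompt rather than Task Manager. A minimal sketch using wmic (shipped with Windows 10; this queries Win32_PageFileUsage, and sizes are reported in MB):

:: Show current page file location, allocated size, and usage (all in MB).
wmic pagefile get Name,AllocatedBaseSize,CurrentUsage,PeakUsage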
Once you have confirmed that everything is configured correctly, and if it still fails after rebooting your computer, please add the argument --always-low-vram to your run.bat, save, and run it again. This switches from normal VRAM mode to low VRAM mode, which may let you run Fooocus (more) stably.
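
For illustration, the launch line in run.bat would then look something like this (a sketch based on the launch command visible in the log above; the rest of the file may differ between releases):

.\python_embeded\python.exe -s Fooocus\entry_with_update.py --always-low-vram
pause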
Looking forward to your feedback!

mashb1t added the bug and question labels Dec 30, 2023
Skratchyy (Author) commented
Is there any way I can make it work with 8 GB of RAM? I have a GTX 1660 Ti and 8 GB of RAM. I know the requirement is 16 GB, but I was wondering whether it's possible.

mashb1t (Collaborator) commented Dec 30, 2023

According to https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#minimal-requirement it should work with 8GB of RAM. Where did you read the information about 16GB?

stdNullPtr commented Dec 30, 2023

I'm having a similar issue; here is my console log:

H:\Programs\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--directml']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.857
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Using directml with device:
Total VRAM 1024 MB, total RAM 32699 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 3955980722773957099
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] woman, flowing, sharp focus, intricate, cinematic light, clear focused, very coherent, symmetry, perfect, detailed, ambient, sleek, highly integrated, deep aesthetic, elegant, professional, combined, dramatic, vibrant colors, magic, background, novel, color, fine detail, full, dynamic, energetic, beautiful, unique, lovely, majestic, cool, creative, awesome
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] woman, flowing, radiant light, detailed intricate, dramatic, sharp focus, surreal, aesthetic, highly detail, beautiful, elegant, dynamic, rich deep colors, inspired, brave, glowing, colorful, shiny, winning, perfect, vibrant, complex, color, background, illuminated, professional, full, relaxed, extremely creative, appealing, cute, pretty, innocent, amazing
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 16.58 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 64.0
[Fooocus Model Management] Moving model(s) has taken 6.68 seconds
0%| | 0/30 [00:00<?, ?it/s]H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at D:\a\_work\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
3%|██▊ | 1/30 [00:05<02:29, 5.15s/it][W D:\a\_work\1\s\pytorch-directml-plugin\torch_directml\csrc\engine\dml_heap_allocator.cc:120] DML allocator out of memory!
[W D:\a\_work\1\s\pytorch-directml-plugin\torch_directml\csrc\engine\dml_heap_allocator.cc:120] DML allocator out of memory!
3%|██▊ | 1/30 [00:09<04:21, 9.01s/it]
Traceback (most recent call last):
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 806, in worker
handler(task)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 737, in handler
imgs = pipeline.process_diffusion(
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 361, in process_diffusion
sampled_latent = core.ksampler(
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\core.py", line 313, in ksampler
samples = ldm_patched.modules.sample.sample(model,
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sample.py", line 101, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 716, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\sample_hijack.py", line 157, in sample_hacked
samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 561, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\k_diffusion\sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\k_diffusion\sampling.py", line 613, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\patch.py", line 314, in patched_KSamplerX0Inpaint_forward
out = self.inner_model(x, sigma,
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 275, in forward
return self.apply_model(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 272, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\patch.py", line 229, in patched_sampling_function
positive_x0, negative_x0 = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 226, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_base.py", line 85, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\patch.py", line 431, in patched_unet_forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 46, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\attention.py", line 604, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\attention.py", line 431, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\diffusionmodules\util.py", line 189, in checkpoint
return func(*inputs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\attention.py", line 531, in _forward
n = self.attn2(n, context=context_attn2, value=value_attn2)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\attention.py", line 375, in forward
k = self.to_k(context)
File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\ops.py", line 26, in forward
return self.forward_ldm_patched_cast_weights(*args, **kwargs)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\ops.py", line 21, in forward_ldm_patched_cast_weights
weight, bias = cast_bias_weight(self, input)
File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\ops.py", line 10, in cast_bias_weight
weight = s.weight.to(device=input.device, dtype=input.dtype, non_blocking=non_blocking)
RuntimeError: The GPU will not respond to more commands, most likely because some other application submitted invalid commands.
The calling application should re-create the device and continue.
Total time: 32.44 seconds
Keyboard interruption in main thread... closing server.
Terminate batch job (Y/N)?

As soon as I hit Generate, my 32 GB of RAM starts to fill up; when it reaches ~31 GB, VRAM starts to fill, and when that reaches ~11/16 GB it crashes with the above error.

GPU: RX 7800 XT
CPU: Ryzen 7 3700X

All drivers are updated.
The system is properly cooled; all temps are good at all times.
Nothing except the browser is running in the background.
Windows 10.
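
One thing worth trying, suggested by the log itself (the "Using sub quadratic optimization for cross attention" line above recommends it for memory issues): relaunching with --attention-split appended to the same command. A sketch only; whether it avoids this particular DML out-of-memory crash is not confirmed here.

.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml --attention-split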

stdNullPtr commented Dec 30, 2023

I figured out my issue:

Last week I had 16GB of memory with default swap settings, which created a ~15GB page file.

Now I have 32GB and this started happening; however, the page file was still ~15GB.
I manually raised the page file size to 20GB, the issue disappeared, and I am able to generate once again. I'm not sure what the exact cause was.
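
For anyone wanting to make the same change from the command line instead of the System Properties > Advanced > Virtual memory dialog, a sketch for an elevated Command Prompt (this assumes the page file lives at C:\pagefile.sys; sizes are in MB, and a reboot is required either way):

:: Assumption: page file at C:\pagefile.sys; 20480 MB = 20 GB.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=20480,MaximumSize=20480
:: Reboot for the new size to take effect.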

mashb1t closed this as completed Jan 2, 2024