
Task doesn't start #2084

Closed
NULL-Term1nat0r opened this issue Jan 28, 2024 · 12 comments
Labels
question Further information is requested

Comments

@NULL-Term1nat0r

I installed Fooocus on Windows, but when I try to create a picture, the task loads forever. This is the program's output in the terminal:

D:\Fooocus\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 16160 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 Ti with Max-Q Design : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816

Can someone tell me the best way to solve this, and whether you've encountered a similar problem? :)
Besides that, I really appreciate the effort that went into creating this AI image creator!

@eddyizm
Contributor

eddyizm commented Jan 28, 2024

Is this the full log?
I'd start by going through the troubleshooting doc and verifying that your system swap is set up properly, as I ran into that issue myself.

https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md

@mashb1t
Collaborator

mashb1t commented Jan 29, 2024

@NULL-Term1nat0r please check your disk activity in the Performance tab of Task Manager. If you see activity, the swap is being used correctly.
Swap speed depends on the type of drive you use and will be slow on an HDD, but fast on an SSD.
Please check and provide feedback; this doesn't seem to be an issue with Fooocus, though.

@mashb1t mashb1t added the question Further information is requested label Jan 29, 2024
@NULL-Term1nat0r
Author

@mashb1t I checked my activity carefully. My memory was at 99% and my HDD at 100%, even though the program is on the SSD. Maybe it's because I initially installed on the HDD and then copied it to my SSD to make it faster. The GPU only goes up to 41% at most. I have 6 GB of VRAM and 16 GB of memory; my graphics card is a GeForce 1660 Ti. Another thing: my computer gets stuck at some point because everything is overflowing.

@mashb1t
Collaborator

mashb1t commented Jan 29, 2024

@NULL-Term1nat0r Thank you for checking. This confirms it's a swap issue and can be improved.
Please ensure you have at least 30-40 GB of free disk space on your SSD.
You can manually change the location of your swap in your system settings by following the tutorial in our troubleshooting guide or this one: https://www.windowscentral.com/how-move-virtual-memory-different-drive-windows-10
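The free-space requirement above is easy to verify programmatically before touching the pagefile. A minimal, cross-platform sketch using only the Python standard library (the path to check and the 40 GB threshold are taken from the advice above, not from Fooocus itself):

```python
import shutil

def free_space_gb(path: str) -> float:
    """Return free disk space at `path` in gigabytes."""
    usage = shutil.disk_usage(path)
    return usage.free / 1024**3

if __name__ == "__main__":
    # Point this at the drive that holds (or will hold) pagefile.sys,
    # e.g. "C:\\" on Windows; "." is just a portable placeholder.
    free = free_space_gb(".")
    print(f"{free:.1f} GB free; aim for at least ~40 GB before moving the swap")
```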

@NULL-Term1nat0r
Author

@mashb1t I followed all the steps and put my pagefile.sys on my SSD. I also checked the Performance tab in Task Manager to make sure it's using my SSD's memory. Still, I don't see any output, and I'm running out of memory (Task Manager says I have 32 GB available, but I'm using all of it). Do I need to add more memory to make it work, since my SSD is now running at 100% instead of my HDD?

@mashb1t
Collaborator

mashb1t commented Jan 29, 2024

According to your console log you have 16160 MB of RAM available, not 32 GB. Nevertheless, this should be sufficient to run Fooocus: SDXL takes about 6-8 GB of VRAM, and Image Prompt needs an additional 2-3 GB. When VRAM is full, RAM is used, and when RAM is full, the swap is used.
Please close all other applications, then start Fooocus to test it in isolation, and give it some time (5 minutes) after clicking Generate.
Happy to hear your feedback afterwards.
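The VRAM-to-RAM-to-swap spill described above can be put into rough numbers. All figures below are the approximate upper bounds quoted in this thread, not measurements:

```python
# Approximate upper bounds quoted in the comment above (all in GB).
VRAM = 6            # GTX 1660 Ti
SDXL_NEED = 8       # SDXL base model, upper bound of the 6-8 GB estimate
IMAGE_PROMPT = 3    # Image Prompt extras, upper bound of the 2-3 GB estimate

total_need = SDXL_NEED + IMAGE_PROMPT      # 11 GB total working set
spill_to_ram = max(0, total_need - VRAM)   # everything over VRAM lands in RAM
print(f"~{spill_to_ram} GB spills from VRAM into RAM before swap is touched")
```

With only 6 GB of VRAM, roughly 5 GB of model data ends up in RAM, which is why a slow swap drive makes the whole pipeline stall.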

@NULL-Term1nat0r
Author

NULL-Term1nat0r commented Jan 31, 2024

Thanks for your help. It is working now for standard generation, but whenever I use the Image Prompt feature I get errors. Here is my log during the action:
C:\Fooocus\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset realistic
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--preset', 'realistic']
Loaded preset: C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\presets\realistic.json
Failed to load config key: {"path_checkpoints": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints"} is invalid or does not exist; will use {"path_checkpoints": "../models/checkpoints/"} instead.
Failed to load config key: {"path_loras": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\loras"} is invalid or does not exist; will use {"path_loras": "../models/loras/"} instead.
Failed to load config key: {"path_embeddings": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\embeddings"} is invalid or does not exist; will use {"path_embeddings": "../models/embeddings/"} instead.
Failed to load config key: {"path_vae_approx": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\vae_approx"} is invalid or does not exist; will use {"path_vae_approx": "../models/vae_approx/"} instead.
Failed to load config key: {"path_upscale_models": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\upscale_models"} is invalid or does not exist; will use {"path_upscale_models": "../models/upscale_models/"} instead.
Failed to load config key: {"path_inpaint": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\inpaint"} is invalid or does not exist; will use {"path_inpaint": "../models/inpaint/"} instead.
Failed to load config key: {"path_controlnet": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\controlnet"} is invalid or does not exist; will use {"path_controlnet": "../models/controlnet/"} instead.
Failed to load config key: {"path_clip_vision": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\clip_vision"} is invalid or does not exist; will use {"path_clip_vision": "../models/clip_vision/"} instead.
Failed to load config key: {"path_fooocus_expansion": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\prompt_expansion\fooocus_expansion"} is invalid or does not exist; will use {"path_fooocus_expansion": "../models/prompt_expansion/fooocus_expansion"} instead.
Failed to load config key: {"path_outputs": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\outputs"} is invalid or does not exist; will use {"path_outputs": "../outputs/"} instead.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 16160 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 Ti with Max-Q Design : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\realisticStockPhoto_v20.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\realisticStockPhoto_v20.safetensors].
Loaded LoRA [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\realisticStockPhoto_v20.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\realisticStockPhoto_v20.safetensors] with 264 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 16.91 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 6367536241822208672
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
extra clip vision: ['vision_model.embeddings.position_ids']
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] male norwegian man with body of a penguin, age 25, wearing leather jacket, timberland boots, smoking a cigarette, expensive watch on his arm, standing in front of a large crowd, location is in London somewhere in the touristic streets, cinematic light, modern, highly detailed, attractive, intricate, extremely detail, professional, complimentary colors, very beautiful, elegant
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] male norwegian man with body of a penguin, age 25, wearing leather jacket, timberland boots, smoking a cigarette, expensive watch on his arm, standing in front of a large crowd, location is in London somewhere in the touristic streets, highly detailed, intricate, cinematic, light, sharp focus, ambient, glamorous, designed, very romantic, extremely fine detail
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
Requested to load Resampler
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.16 seconds
Requested to load To_KV
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.53 seconds
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1152, 896)
Preparation time: 16.20 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model

C:\Fooocus\Fooocus_win64_2-1-831>pause
Press any key to continue . . .

Do I have to change something in my settings, or do you know what the reason could be?
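A side note on the repeated "Failed to load config key" lines in the log above: they come from the copied install, whose user config still points at the old D:\ paths. A minimal sketch of a one-off fix, assuming Fooocus keeps these keys in a flat JSON config file (the filename "config.txt" and the drive letters here are placeholders, not confirmed Fooocus behavior):

```python
import json
from pathlib import Path

def retarget_paths(config_file: str, old_prefix: str, new_prefix: str) -> None:
    """Rewrite every string value in a flat JSON config that starts with old_prefix."""
    path = Path(config_file)
    cfg = json.loads(path.read_text(encoding="utf-8"))
    for key, value in cfg.items():
        if isinstance(value, str) and value.startswith(old_prefix):
            cfg[key] = new_prefix + value[len(old_prefix):]
    path.write_text(json.dumps(cfg, indent=4), encoding="utf-8")

# Hypothetical usage after moving the install from D: to C:
# retarget_paths("config.txt", "D:\\Fooocus", "C:\\Fooocus")
```

Fooocus falls back to the relative defaults anyway, so this is cosmetic, but it silences the warnings and keeps any custom model folders working after a move.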

@mashb1t
Collaborator

mashb1t commented Jan 31, 2024

This is a clear indicator that you're running out of RAM; these are the same symptoms as in Colab when using Image Prompt.
In Colab the solution is to shift more load to the GPU, but as you only have 6 GB of VRAM available, this is not an option.
Colab fixes for reference: --attention-split --disable-offload-from-vram --always-high-vram

You might have to upgrade your PC (GPU and/or RAM) to be able to use Fooocus with all the bells and whistles, but you can still try it with --attention-split alone.

@NULL-Term1nat0r
Author

NULL-Term1nat0r commented Jan 31, 2024

I tried running it with the recommended flags, but I got the following errors:

C:\Fooocus\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --attention-split --disable-offload-from-vram --always-high-vram
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--attention-split', '--disable-offload-from-vram', '--always-high-vram']
Failed to load config key: {"path_checkpoints": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints"} is invalid or does not exist; will use {"path_checkpoints": "../models/checkpoints/"} instead.
Failed to load config key: {"path_loras": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\loras"} is invalid or does not exist; will use {"path_loras": "../models/loras/"} instead.
Failed to load config key: {"path_embeddings": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\embeddings"} is invalid or does not exist; will use {"path_embeddings": "../models/embeddings/"} instead.
Failed to load config key: {"path_vae_approx": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\vae_approx"} is invalid or does not exist; will use {"path_vae_approx": "../models/vae_approx/"} instead.
Failed to load config key: {"path_upscale_models": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\upscale_models"} is invalid or does not exist; will use {"path_upscale_models": "../models/upscale_models/"} instead.
Failed to load config key: {"path_inpaint": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\inpaint"} is invalid or does not exist; will use {"path_inpaint": "../models/inpaint/"} instead.
Failed to load config key: {"path_controlnet": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\controlnet"} is invalid or does not exist; will use {"path_controlnet": "../models/controlnet/"} instead.
Failed to load config key: {"path_clip_vision": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\clip_vision"} is invalid or does not exist; will use {"path_clip_vision": "../models/clip_vision/"} instead.
Failed to load config key: {"path_fooocus_expansion": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\prompt_expansion\fooocus_expansion"} is invalid or does not exist; will use {"path_fooocus_expansion": "../models/prompt_expansion/fooocus_expansion"} instead.
Failed to load config key: {"path_outputs": "D:\Fooocus\Fooocus_win64_2-1-831\Fooocus\outputs"} is invalid or does not exist; will use {"path_outputs": "../outputs/"} instead.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Running on local URL: http://127.0.0.1:7865
Traceback (most recent call last):
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
response = conn.getresponse()
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\connection.py", line 454, in getresponse
httplib_response = super().getresponse()
File "http\client.py", line 1374, in getresponse
File "http\client.py", line 318, in begin
File "http\client.py", line 279, in _read_status
File "socket.py", line 705, in readinto
TimeoutError: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\util\retry.py", line 470, in increment
raise reraise(type(error), error, _stacktrace)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\util\util.py", line 39, in reraise
raise value
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\connectionpool.py", line 790, in urlopen
response = self._make_request(
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\connectionpool.py", line 538, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\urllib3\connectionpool.py", line 370, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='127.0.0.1', port=7865): Read timed out. (read timeout=3)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\entry_with_update.py", line 46, in
from launch import *
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\launch.py", line 126, in
from webui import *
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\webui.py", line 616, in
shared.gradio_root.launch(
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\gradio\blocks.py", line 1968, in launch
and not networking.url_ok(self.local_url)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\gradio\networking.py", line 202, in url_ok
r = requests.head(url, timeout=3, verify=False)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\requests\api.py", line 100, in head
return request("head", url, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\requests\adapters.py", line 532, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='127.0.0.1', port=7865): Read timed out. (read timeout=3)

C:\Fooocus\Fooocus_win64_2-1-831>pause

I don't know if it's related to the VRAM or something else.

@mashb1t
Collaborator

mashb1t commented Jan 31, 2024

Can't tell; urllib and connection pools are too generic.
You can't and shouldn't run all the high-VRAM optimizations at once; as mentioned, those were just for reference. You can try --attention-split on its own, though I wouldn't expect too much from it.

@mashb1t
Collaborator

mashb1t commented Feb 5, 2024

Is this still relevant?

@NULL-Term1nat0r
Author

Thanks a lot for your help, I'm saving money for a better graphics card now :D

@mashb1t mashb1t closed this as completed Feb 5, 2024