[x] I admit that I have read the Troubleshoot before making this issue.
Describe the problem
I have 48 GB of swap but still got this error.
I have tried the --always-cpu flag and did not get the error. I only get it when using the GPU (AMD 5700 XT, 8 GB), without changing the swap.
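For reference, the RAM and swap figures reported in the logs below can be cross-checked programmatically; a minimal sketch (Linux-only, parses /proc/meminfo; the field names are the standard kernel ones):

```python
def meminfo_kib():
    """Parse /proc/meminfo into a dict of {field: value in KiB} (Linux only)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # values are reported in kB
    return info

if __name__ == "__main__":
    m = meminfo_kib()
    print(f"RAM total:  {m['MemTotal'] / 1024:.0f} MiB")
    print(f"Swap total: {m['SwapTotal'] / 1024:.0f} MiB")
```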
Full Console Log
1. Using command:
(fooocus_env) artix:[artix]:~/Applications/fooocus$ python entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--always-gpu']
Python 3.11.6 (main, Nov 14 2023, 18:04:26) [GCC 13.2.1 20230801]
Fooocus version: 2.1.859
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch().
Total VRAM 8176 MB, total RAM 15913 MB
Set vram state to: HIGH_VRAM
Always offload VRAM
/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Device: cuda:0 AMD Radeon RX 5700 XT : native
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
Segmentation fault
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.11.6 (main, Nov 14 2023, 18:04:26) [GCC 13.2.1 20230801]
Fooocus version: 2.1.859
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch().
Total VRAM 8176 MB, total RAM 15913 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Device: cuda:0 AMD Radeon RX 5700 XT : native
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: /home/artix/Applications/fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/home/artix/Applications/fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/home/artix/Applications/fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/home/artix/Applications/fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.85 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
Enter LCM mode.
[Fooocus] Downloading LCM components ...
[Parameters] Adaptive CFG = 1.0
[Parameters] Sharpness = 0.0
[Parameters] ADM Scale = 1.0 : 1.0 : 0.0
[Parameters] CFG = 1.0
[Parameters] Seed = 1708197939703660366
[Parameters] Sampler = lcm - lcm
[Parameters] Steps = 8 - 8
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ('sdxl_lcm_lora.safetensors', 1.0)] for model [/home/artix/Applications/fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/home/artix/Applications/fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/home/artix/Applications/fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Loaded LoRA [/home/artix/Applications/fooocus/models/loras/sdxl_lcm_lora.safetensors] for UNet [/home/artix/Applications/fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 1.0.
Requested to load SDXLClipModel
Loading 1 new model
unload clone 1
[Fooocus Model Management] Moving model(s) has taken 0.76 seconds
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
Segmentation fault
So I got one warning and a segmentation fault:
/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Segmentation fault
2. Using command:
(fooocus_env) artix:[artix]:~/Applications/fooocus$ python entry_with_update.py --always-gpu
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--always-gpu']
Python 3.11.6 (main, Nov 14 2023, 18:04:26) [GCC 13.2.1 20230801]
Fooocus version: 2.1.859
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch().
Total VRAM 8176 MB, total RAM 15913 MB
Set vram state to: HIGH_VRAM
Always offload VRAM
/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Device: cuda:0 AMD Radeon RX 5700 XT : native
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
Segmentation fault
So using --always-gpu I also get the warning:
warnings.warn("Can't initialize NVML")
and "Segmentation fault".
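Since the process dies with no Python traceback, the standard-library faulthandler module may show where the crash originates; a minimal sketch (the same effect can be had without editing code by running `python -X faulthandler entry_with_update.py`):

```python
import faulthandler

# Enable the fault handler so that a segfault in native code (e.g. inside a
# GPU driver or a compiled extension) dumps the Python traceback of every
# thread to stderr before the process dies.
faulthandler.enable()

print("faulthandler enabled:", faulthandler.is_enabled())
```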
Thank you for the well structured report.
Can you please try running Fooocus without any flags on your machine, i.e. without --always-gpu, and check if this still occurs? This would be appreciated.