Out of memory error on PC #780
Show full log?
Where should I look for the log?
This is the output in the console: `.\python_embeded\python.exe -s Fooocus\entry_with_update.py To create a public link, set`
Hello, this should be caused by the system RAM being too old and not supporting the transformer accelerator's hooks.
Okay, I may do that. Is there no available workaround for the RAM issue?
In my test #792, running on a system with 16GB or less of CPU memory may be a problem.
Hi all, Fooocus can run in 12GB RAM on Linux without swap; see also the Colab demo (Linux, 12GB RAM, no swap, T4 GPU) in the Readme. But this may also be related to many complicated factors, like CPU arch, etc.
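To check whether a machine actually meets that 12GB figure before launching, here is a minimal sketch that reads total and available RAM from `/proc/meminfo` (Linux-only; the helper name `meminfo_gib` is ours, not part of Fooocus):

```python
# Minimal sketch: report total and available system RAM in GiB (Linux only).
def meminfo_gib():
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])  # values are in kB
    # Convert kB -> GiB for the two fields that matter here.
    return {k: fields[k] / (1024 ** 2) for k in ("MemTotal", "MemAvailable")}

if __name__ == "__main__":
    mem = meminfo_gib()
    print(f"Total RAM: {mem['MemTotal']:.1f} GiB, available: {mem['MemAvailable']:.1f} GiB")
```

On Windows the same numbers are visible in Task Manager, so this check is only a convenience for the Linux case discussed above.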
Hello. I just installed Fooocus, let it download the SDXL models, and did my first test run. It failed to complete the run with the message:
```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacty of 6.00 GiB of which 3.55 GiB is free. Of the allocated memory 1.36 GiB is allocated by PyTorch, and 77.12 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I'm running on a desktop with 8GB RAM and an NVIDIA GeForce RTX 2060 with 6GB VRAM. I was able to get SDXL to work before on Automatic1111, but the model was a slightly different version.
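As the error text suggests, one thing worth trying is setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable before launching. This is only a sketch of the allocator tuning the error message points to, not a confirmed fix for this issue; the value 512 is an arbitrary starting point:

```shell
# Hedged sketch: reduce CUDA allocator fragmentation, per the error message.
# On Windows (cmd), set the variable before the launch command from this thread:
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
#   .\python_embeded\python.exe -s Fooocus\entry_with_update.py
# On Linux/macOS:
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```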