Image not generating on AMD Windows #1078
Comments
The exact same thing happened to me.
Check #624
This #624 (comment) solved my issue, but now I'm getting this error:
New log
Same for me, RX 6800, Windows 11.
Memory allocation issue here as well. RTX 4060 Laptop, 8GB.
I have also tried editing model_management.py in .\Fooocus\backend\headless\fcbh, where I changed
Same issue here: Windows 11 on an AMD 7900XT with 20GB, but on startup it only "sees" 1024MB.
Hi all, I have the same problem as described in this issue, meaning I am on Windows with an AMD GPU (RX 5700 XT). I edited the .bat file as described in the readme.md section Windows (AMD GPUs) and applied the fix from #624 (comment). I'm currently getting a "Could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available!" error after entering a prompt. Please see how my GPU usage looks during idle and once the error happens in the screenshots below. Also, please see the full log.
Same issue.
Fixed my issue. I followed the AMD instructions but have an Nvidia GPU.
I hope the developers solve this issue for AMD users; otherwise I'll have to learn Python, tensors, and all the libraries first before solving it myself.
With some simple debugging I managed to pinpoint the issue (my issue, but I believe we all struggle with the same problem) to a loop being executed over and over again until the GPU runs out of memory. The call chain is: def load_models_gpu in model_management.py, which calls def model_load in model_management.py, which calls def patch_model in model_patcher.py (the loop). I hope that helps the developers.
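Not Fooocus code, just to illustrate the kind of quick instrumentation meant here: a minimal call counter you can wrap around the suspected functions. The function names come from the comment above; the helper itself is a hypothetical debugging aid.

```python
import functools
from collections import Counter

call_counts = Counter()

def count_calls(fn):
    """Wrap a function and print how many times it has been called so far."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] += 1
        print(f"{fn.__name__} call #{call_counts[fn.__name__]}")
        return fn(*args, **kwargs)
    return wrapper

# Example (hypothetical): wrap the suspects where they are defined, e.g.
#   patch_model = count_calls(patch_model)
# A counter that keeps climbing right before the out-of-memory error points to the loop.
```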
I think I found a solution for one of the problems, though it comes with a downside! Found this in the console:
And since I'm using this start argument, it no longer stops randomly at the final stage. I know this won't fix the issue of it allocating too much RAM, but it's at least one issue solved. The downside is that you can't cancel or skip generation anymore. Hope this helps some people; let's keep trying to find a complete solution! EDIT:
@GroupXyz1 What is the name of the file that I need to edit to input the argument?
@strobya The file you start Fooocus with, so the run.bat or run_anime.bat or run_realistic.bat.
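For anyone unsure where exactly the argument goes, the launch line in run.bat would then look roughly like this (paths as in the default portable install described in the readme; adjust if yours differs):

```
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml --use-split-cross-attention
pause
```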
@GroupXyz1 Alright, did that and got this error:
You have to put a space between the arguments. But if RAM is your issue, this doesn't help.
@Meganton It does indeed help with RAM issues, but only during the attention phase (I have no name for this, so I call it after the argument), which comes shortly before the image is ready. So it will help if your image is almost done when the RAM error comes, but not if it already happens during generation (between steps 1-29, I guess).
I have tried that, yet it still says there isn't enough space for 10 MB.
@RAICircles Yeah, it only helps if the error happens at the end of generation. For me it worked to restart after every single generated image, and after some time it just worked fine. But if it doesn't work at all, you will have to wait for a fix of the script from the creator or someone else with knowledge of it. I also suspect it differs between normal, anime, and realistic, so you might try another one of them and see if it works!
I've tried everything here, but all I'm getting is the RuntimeError: Could not allocate tensor with 192397120 bytes. There is not enough GPU video memory available!
Yeah, I have no idea either how to fix that broken code.
Did not work for me; the VRAM issue remains. 6700 XT.
@jamesbychance You are installing torch, not running the program. You need to use the argument while running the command, like this: .\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml --use-split-cross-attention. What you are doing is installing torch with that argument, and that won't work because it's something different from starting Fooocus; you may have accidentally copied the wrong line.
AMD 6700XT My
I still get this error:
@Crunch91 --use-split-cross-attention has been changed(?) to --attention-split
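If the flag has indeed been renamed, the launch line from above would presumably become:

```
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml --attention-split
```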
At least I now have the same error as the others already mentioned 😄
Good news, everybody: as of 8e62a72 (latest Fooocus version 2.1.857), AMD with >= 8GB VRAM is now supported.
@mashb1t Deleted old files and redownloaded. In .\python_embeded\Lib\site-packages\torchsde\_brownian\brownian_interval.py, line 32, I changed generator = torch.Generator(device).manual_seed(int(seed)) to generator = torch.Generator().manual_seed(int(seed)), then tried to allocate more VRAM: in \Fooocus\backend\headless\fcbh\model_management.py, line 95, I changed mem_total = 1024 * 1024 * 1024 to mem_total = 8192 * 1024 * 1024. Now I'm getting this error
@strobya if you still have backend\headless\fcbh, you're not using the latest version of Fooocus, where fcbh has been replaced by ldm_patched. |
For me the latest version works. 8 GB VRAM, 32 GB RAM, and an RX 5700.
@mashb1t My bad, that was a typo; it should have been \Fooocus\ldm_patched\modules. In line 95, change mem_total = 1024 * 1024 * 1024 to mem_total = 8192 * 1024 * 1024. Now it's working, but I get an error halfway: "Error "C:\Users\mypc3\Downloads\Compressed\Ai>pause. I tried the troubleshoot option provided by @lllyasviel (system swap) and I'm still getting the same error mentioned above.
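To summarize, these are the two manual edits described in this thread, shown as before/after excerpts. Line numbers and the 8 GB value are taken from the comments above and may differ between Fooocus versions, so treat this as a sketch of the workaround rather than an official fix:

```python
# 1) python_embeded\Lib\site-packages\torchsde\_brownian\brownian_interval.py, around line 32
# before (this is the line the fix from #624 changes):
generator = torch.Generator(device).manual_seed(int(seed))
# after: create the generator without a device argument (i.e. on the CPU)
generator = torch.Generator().manual_seed(int(seed))

# 2) Fooocus\ldm_patched\modules\model_management.py, around line 95
# before: only 1 GB of VRAM is assumed
mem_total = 1024 * 1024 * 1024
# after: hard-code the VRAM your card actually has (8 GB in this example)
mem_total = 8192 * 1024 * 1024
```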
@mashb1t @lllyasviel Never mind, I restarted my PC and it worked. Thanks guys 👊😎
Describe the problem
After adding the model to the checkpoint folder and replacing the content of the bat file with
(because I have an AMD GPU), the app started successfully, but after writing the prompt it starts loading and no image appears. Below is the console; please let me know what I am doing wrong.
Full Console Log