AMD Windows Run Error #1263
Have a look here: #763
From following a few of these threads, I saw a hint to pass the --lowvram flag when starting the .bat file. When I did, my CPU (AMD 7950) and my 32 GB of RAM spiked to maximum. The computer was unresponsive for ~72 seconds and then gave the following error. Can anyone advise on what I need to do or try next? My GPU is a Radeon 6650 XT; it had no usage during execution of the prompt. Thank you in advance.
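For anyone else trying this: passing the flag means appending it to the launch line inside the .bat file. A minimal sketch, assuming the typical Fooocus Windows layout (the file name run.bat and the entry script path are assumptions and may differ in your install; keep whatever your .bat already calls):

```shell
REM run.bat (hypothetical sketch) - append the flag to the existing launch line
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --lowvram
pause
```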
I haven't checked the source code yet, but I guess the maximum number of threads is used to load the model at startup. I don't know why it loads so quickly; this can easily cause the computer to crash.
I reverted the memory value back to 1024 * 1024 * 1024, and memory use no longer nearly crashes my PC. In model_patcher.py I changed line 191 to match the else branch, but I still get an exception where this code fails to allocate memory on the GPU: File "D:\Fooocus_win64\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert. I also noticed that as soon as the prompt clears the RAM (which still spikes to using all 32 GB), the program allocates all remaining GPU memory just before claiming there is no memory. Some part of the code (sorry, Python isn't my language) appears to allocate all available memory but then not use it, rather than either using what it has or simply using what it needs.
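The behavior described above, grabbing all free VRAM and then failing, is the opposite of reserving headroom. A minimal sketch of the idea being suggested (this is not Fooocus's actual code; `load_budget` is a hypothetical helper, and the 1 GiB reserve is the value mentioned in the comment above):

```python
GIB = 1024 * 1024 * 1024  # the 1024 * 1024 * 1024 value from the comment above


def load_budget(free_bytes: int, reserve: int = GIB) -> int:
    """Return how many bytes a loader may safely use.

    Instead of allocating all free memory, keep `reserve` bytes of
    headroom so the device is not driven straight into out-of-memory.
    """
    return max(0, free_bytes - reserve)


# Example: with 4 GiB free and a 1 GiB reserve, only 3 GiB is budgeted.
print(load_budget(4 * GIB))
```

On CUDA builds of PyTorch, the free-memory figure could come from `torch.cuda.mem_get_info()`; on an AMD/DirectML setup like the one in this thread, a different query would be needed, which is part of why the reserve-a-buffer approach is only a sketch here.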