
AMD support #3

Closed
Klaster1 opened this issue Aug 12, 2023 · 13 comments

Comments

@Klaster1

After starting, Fooocus exits with a "RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx" error.

C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 1.0.15
Inference Engine exists.
Inference Engine checkout finished.
Traceback (most recent call last):
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\entry_with_update.py", line 45, in <module>
    from launch import *
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\launch.py", line 81, in <module>
    from webui import *
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\webui.py", line 6, in <module>
    from modules.default_pipeline import process
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\modules\default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\modules\core.py", line 8, in <module>
    import comfy.model_management
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_management.py", line 104, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_management.py", line 74, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
    _lazy_init()
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10>pause
Press any key to continue . . .

Running on Windows 11 and AMD Radeon RX 7900 XTX.

@icecore2

Same issue with RX 6700 XT.

@johan-lejdung

Same on AMD 6750XT

@Small-Ku

Related: comfyanonymous/ComfyUI#160

@ahgera

ahgera commented Aug 18, 2023

ComfyUI already has support for AMD cards on Windows via Microsoft's DirectML, but it has to be activated through a CLI argument (--directml). Since Fooocus imports Comfy directly, I don't see any way of setting the args.directml value to anything from Fooocus' code.

I tried to hack around this in the Comfy repository. First by changing if args.directml is not None: to if not torch.cuda.is_available() and platform.system() == "Windows":, but Fooocus for some reason pulls down the latest Comfy version from git at every start, thereby overwriting any changes.
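The fallback logic described in that hack could be sketched as a standalone helper roughly like this (a hypothetical illustration, not actual Fooocus or ComfyUI code; the torch and torch-directml imports are assumptions about the environment):

```python
import platform


def pick_device():
    """Hypothetical device selection mirroring the workaround above:
    prefer CUDA when available, fall back to DirectML on Windows,
    and otherwise use the CPU."""
    try:
        import torch
        if torch.cuda.is_available():
            # Same call ComfyUI's get_torch_device makes on NVIDIA systems.
            return torch.device(torch.cuda.current_device())
    except ImportError:
        pass
    if platform.system() == "Windows":
        try:
            import torch_directml  # requires: pip install torch-directml
            return torch_directml.device()
        except ImportError:
            pass
    return "cpu"
```

Since Fooocus re-clones Comfy on every start, a change like this would have to live outside the Comfy checkout (or the update step would need to be disabled) to survive a restart.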

So I commented out that code, and after a while I got the web UI running, but I was unable to generate any images since I apparently have too little RAM in my old computer (12 GB):
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 84934656 bytes.

The GPU and VRAM were barely used during this, so I think some bug was still preventing DirectML from being used.

On top of that, xFormers complains that it was built for PyTorch with CUDA, and I couldn't find any way of either installing a CPU-only version or disabling it.


I got slightly better results trying the same SDXL models in ComfyUI, but ultimately I got an out-of-memory error there as well and gave up.
[W D:\a\_work\1\s\pytorch-directml-plugin\torch_directml\csrc\engine\dml_heap_allocator.cc:120] DML allocator out of memory!
At least this time I could see that both the GPU and VRAM were being used, but I'm guessing the limiting factor is still RAM, since I have 8 GB of VRAM and some of it was still free.

I should mention that in both Fooocus' and ComfyUI's Python environments I had installed torch-directml (version 0.2.0.dev230426), which is obviously needed to import it.
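A quick way to confirm that torch-directml is importable in a given environment (a generic check, nothing Fooocus-specific) is:

```python
# Check whether the torch-directml package can be imported; the
# DirectML code path needs it to be present in the environment.
try:
    import torch_directml  # DirectML backend for PyTorch on Windows
    directml_ok = True
except ImportError:
    directml_ok = False

print("torch-directml installed:", directml_ok)
```

Running this with Fooocus' embedded interpreter (python_embeded\python.exe) checks the same environment Fooocus actually uses.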

@chenshiwei-io

chenshiwei-io commented Aug 19, 2023

Same on AMD RX588

@xiao-xiaozi

Same on AMD 580

@Roninos

Roninos commented Nov 23, 2023

RX 5700 XT not working

@StarLord-bot

AMD Radeon RX 7900 XTX not working here as well

@SkyHype2000

SkyHype2000 commented Dec 15, 2023

AMD Radeon RX 6750 XT also does not work here

@lllyasviel
Owner

@kenwong1

I've been using Fooocus on a 7900 XTX and it seems to work. You just need to make the changes mentioned in #624 (comment).

I would like to see some optimizations done to make it faster on AMD cards, though.

@SkyHype2000

SkyHype2000 commented Dec 16, 2023

Thanks, it works now ^^

edited: RuntimeError: Could not allocate tensor with 167772160 bytes. There is not enough GPU video memory available!

@kenwong1

kenwong1 commented Dec 16, 2023

The minimum requirements that @lllyasviel posted above say 16 GB of GPU memory is required for AMD on Windows with DirectML.
