
Bug: duplicate vulkan devices being detected on windows #9516

Closed
tempstudio opened this issue Sep 17, 2024 · 1 comment
Labels
bug-unconfirmed, low severity (used to report low severity bugs in llama.cpp, e.g. cosmetic issues, non-critical UI glitches), stale

Comments


tempstudio commented Sep 17, 2024

What happened?

When running llama-cli.exe with the Vulkan backend on Windows, it detects the same graphics card twice, so it does not work out of the box.
By default it tries to use "both" devices, which then leads to the cryptic error:

MESA: error: == VALIDATION ERROR =============================================
error: Total Thread Group Shared Memory storage is 33792, exceeded 32768.
Validation failed.

Setting

$env:GGML_VK_VISIBLE_DEVICES=0

is necessary and works, but that's not really documented anywhere.
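
For reference, this is roughly how I apply the workaround in a PowerShell session (the model path and options below are just placeholders):

# restrict llama.cpp to the first Vulkan device (the native AMD driver)
$env:GGML_VK_VISIBLE_DEVICES=0
# then run as usual
.\llama-cli.exe -m .\models\model.gguf -ngl 99 -p "Hello"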

Interestingly, the other device (Vulkan on D3D12?) doesn't work no matter what. Setting

$env:GGML_VK_VISIBLE_DEVICES=1

will lead to the same error.

Name and Version

./llama-cli.exe --version
version: 3772 (23e0d70)
built with MSVC 19.29.30154.0 for x64

What operating system are you seeing the problem on?

Windows

Relevant log output

ggml_vulkan: Found 2 Vulkan devices:
Vulkan0: AMD Radeon RX 6800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64
Vulkan1: Microsoft Direct3D12 (AMD Radeon RX 6800 XT) (Dozen) | uma: 0 | fp16: 1 | warp size: 32
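
The duplicate enumeration can also be seen outside of llama.cpp; if the Vulkan SDK is installed, the following should list both the native AMD driver and the Dozen (Vulkan-on-D3D12) layered device:

vulkaninfo --summary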
tempstudio added the bug-unconfirmed and low severity labels on Sep 17, 2024
github-actions bot added the stale label on Oct 22, 2024

github-actions bot commented Nov 8, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions bot closed this as completed on Nov 8, 2024