
Prevent BatchedBrownianTree crash on startup on AMD #1265

Closed
wants to merge 1 commit

Conversation


@Thobro Thobro commented Dec 7, 2023

This fixes a common crash reported in #1263, #1185, #1091, and others. The fix is to enable cpu_tree within the BatchedBrownianTree class for AMD hardware (or at least for privateuseone devices; I'm not sure what other hardware also reports as privateuseone).

@stainz2004

To be honest, I can't find where or how to enable cpu_tree. Can anyone tell me which folder/file it's in, so I can maybe finally get Fooocus to work?
Big thanks

@JAL-code

JAL-code commented Dec 8, 2023

Still getting the error even though the change is made. Path: `.\Fooocus\backend\headless\fcbh\k_diffusion\sampling.py`, line 68:

```python
self.cpu_tree = True
```

The class is referenced at `.\Fooocus\modules\patch.py`, line 162.


Thobro commented Dec 11, 2023

> Still getting the error even though the change is made. Path: `.\Fooocus\backend\headless\fcbh\k_diffusion\sampling.py`, line 68:
>
> `self.cpu_tree = True`
>
> The class is referenced at `.\Fooocus\modules\patch.py`, line 162.

cpu_tree is probably being set back to False at line 70. It should work if you apply my fix, i.e. adding:

```python
if w0.device.type == "privateuseone":
    self.cpu_tree = True
```

just below line 72.

You might still run into memory issues, but that's a separate problem.
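In isolation, the device check amounts to the following. This is a minimal sketch, not the actual Fooocus code: `Device` is a stub standing in for `torch.device` (the real patch inspects `w0.device.type` on a tensor), and `should_use_cpu_tree` is a hypothetical helper name used only for illustration.

```python
class Device:
    """Stub for torch.device; the real code reads w0.device.type from a tensor."""
    def __init__(self, type):
        self.type = type


def should_use_cpu_tree(device, cpu_tree_default=False):
    """Decide whether BatchedBrownianTree should build its tree on the CPU.

    privateuseone is the device type reported by out-of-tree backends such
    as torch-directml (commonly AMD GPUs on Windows); forcing the Brownian
    tree onto the CPU there avoids the startup crash.
    """
    if device.type == "privateuseone":
        return True
    return cpu_tree_default


# Usage: an AMD/DirectML device forces the CPU tree; CUDA keeps the default.
print(should_use_cpu_tree(Device("privateuseone")))  # True
print(should_use_cpu_tree(Device("cuda")))           # False
```

The point of keying on the device *type* rather than a hardware vendor is that any backend registered under privateuseone gets the safe CPU path, without affecting CUDA or CPU users.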

@lllyasviel
Owner

This should already be fixed.

@lllyasviel lllyasviel closed this Dec 14, 2023