RuntimeError: Device type privateuseone is not supported for torch.Generator() api. #763
The fix in #624 solved that problem, but now I have another: To create a public link, set
Same GPU, same problem: RuntimeError: Could not allocate tensor with 165150720 bytes. There is not enough GPU video memory available! Is any solution available?
Quick update
Hit the same issue on a 7900 XTX GPU. Ran it with an altered bat file:
I am having the same issue with a 6800 XT on the latest drivers and Win 11. Sadly, when you google the error you find this thread and also microsoft/DirectML#374, and someone reported it to Microsoft back in January. But it's probably good to also let them know the issue still exists.
I am having the same problem with an RX 580. I tried on Windows and Linux.
Same issue.
Same here.
RX 5700 does not work.
Guys, I think you need to report this to Microsoft, since they are the ones who have to fix it. Post it here: microsoft/DirectML#374
Here's a working fix: #624 (comment)
RX 6750 XT NITRO+ not working
My computer is Win 11 Pro, AMD Ryzen 9 5900X, 64 GB RAM, and an AMD 6700 XT 12 GB. Without the fix from #624 I get the following error: With the fix, the error changes: I tried replacing the device with "cpu" as suggested somewhere, but I still get the last error message. Full log:
Same issue.
Yup, same issue. The brownian_interval.py fix helped, but the memory-allocation problem still occurs.
This worked for my 7800XTX. Thank you!
Same issue, not enough GPU memory available. I have an AMD 6600 XT.
Same issue here; after the fix for the device type, I'm getting the VRAM error as well.
Same
Same for me: 6700 XT
Same here on Windows 11, 6650 XT. After applying the brownian_interval.py fix above:
I have the same problem. How did you fix it?
Fixed recently; it works with the latest version of Fooocus. Please update.
First, sorry for my English... I love your work, and I have used it a lot in Colab, but I can't get it to run on my machine with an AMD 6700 XT graphics card. It seems it doesn't recognize the card correctly; for example, it reports VRAM: 1 GB instead of the 12 GB the card has. Can you think of any solution?
To create a public link, set share=True in launch().
Using directml with device:
Total VRAM 1024 MB, total RAM 16310 MB
Set vram state to: NORMAL_VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Fooocus] Disabling smart memory
model_type EPS
adm 2560
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: G:\Fooocus\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: G:\Fooocus\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
Fooocus Expansion engine loaded for privateuseone:0, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 6.30 seconds
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
G:\Fooocus\python_embeded\lib\site-packages\transformers\generation\utils.py:723: UserWarning: The operator 'aten::repeat_interleave.Tensor' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at D:\a_work\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
input_ids = input_ids.repeat_interleave(expand_size, dim=0)
[Prompt Expansion] New suffix: extremely detailed, fantastic details full face, mouth, trending on artstation, pixiv, cgsociety, hyperdetailed Unreal Engine 4k 8k ultra HD, WLOP
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] New suffix: intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, in the style of cam sykes, wayne barlowe, igor kieryluk
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.26 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
Preparation time: 4.14 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Traceback (most recent call last):
File "G:\Fooocus\Fooocus\modules\async_worker.py", line 585, in worker
handler(task)
File "G:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\Fooocus\Fooocus\modules\async_worker.py", line 518, in handler
imgs = pipeline.process_diffusion(
File "G:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\Fooocus\Fooocus\modules\default_pipeline.py", line 347, in process_diffusion
modules.patch.globalBrownianTreeNoiseSampler = BrownianTreeNoiseSampler(
File "G:\Fooocus\Fooocus\backend\headless\fcbh\k_diffusion\sampling.py", line 119, in __init__
self.tree = BatchedBrownianTree(x, t0, t1, seed, cpu=cpu)
File "G:\Fooocus\Fooocus\backend\headless\fcbh\k_diffusion\sampling.py", line 85, in __init__
self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
File "G:\Fooocus\Fooocus\backend\headless\fcbh\k_diffusion\sampling.py", line 85, in <listcomp>
self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
File "G:\Fooocus\python_embeded\lib\site-packages\torchsde\_brownian\derived.py", line 155, in __init__
self._interval = brownian_interval.BrownianInterval(t0=t0,
File "G:\Fooocus\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 540, in __init__
W = self._randn(initial_W_seed) * math.sqrt(t1 - t0)
File "G:\Fooocus\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 234, in _randn
return _randn(size, self._top._dtype, self._top._device, seed)
File "G:\Fooocus\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 32, in _randn
generator = torch.Generator(device).manual_seed(int(seed))
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
Total time: 67.99 seconds