RuntimeError: mat1 and mat2 must have the same dtype #411

Closed
Ainmymind opened this issue Sep 18, 2023 · 24 comments
Labels
bug (Something isn't working) · help wanted (Extra attention is needed)

Comments

@Ainmymind

67%|██████████████████████████████████████████████████████▋ | 20/30 [03:39<01:46, 10.69s/it][Virtual Memory System] time = 2.24801s: model released from cpu: D:\AI\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
[Virtual Memory System] time = 0.40300s: model loaded to cpu: D:\AI\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
Refiner swapped.
70%|█████████████████████████████████████████████████████████▍ | 21/30 [03:55<01:41, 11.24s/it]
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in bootstrap_inner
File "threading.py", line 953, in run
File "D:\AI\Fooocus\modules\async_worker.py", line 312, in worker
handler(task)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\modules\async_worker.py", line 268, in handler
imgs = pipeline.process_diffusion(
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\modules\default_pipeline.py", line 227, in process_diffusion
sampled_latent = core.ksampler_with_refiner(
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\modules\core.py", line 281, in ksampler_with_refiner
samples = sampler.sample(noise, positive_copy, negative_copy, refiner_positive=refiner_positive_copy,
File "D:\AI\Fooocus\modules\samplers_advanced.py", line 239, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas,
File "D:\AI\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 644, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
File "D:\AI\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 613, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Fooocus\modules\patch.py", line 44, in patched_discrete_eps_ddpm_denoiser_forward
return self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 289, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 263, in calc_cond_uncond_batch
output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_base.py", line 61, in apply_model
return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 611, in forward
emb = self.time_embed(t_emb)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ops.py", line 18, in forward
return torch.nn.functional.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype
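
For context on what the message itself means: torch.nn.functional.linear raises exactly this error when the input tensor (mat1) and the weight matrix (mat2) have mismatched floating-point dtypes. A minimal, self-contained repro (the shapes here are made up for illustration):

import torch

x = torch.randn(1, 320, dtype=torch.float16)     # half-precision activations
w = torch.randn(1280, 320, dtype=torch.float32)  # full-precision weights

try:
    torch.nn.functional.linear(x, w)
except RuntimeError as e:
    print(e)  # -> mat1 and mat2 must have the same dtype

# casting one side so the dtypes agree makes the same call succeed
out = torch.nn.functional.linear(x.to(w.dtype), w)

So somewhere in the pipeline, a tensor and a weight matrix end up in different precisions.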

@ClaudioNJunior

I am also having this exact same issue. It happens even when I am not using any advanced options. I have tried several times, and every single time I got this error.

It also happens only when starting the refiner.

@Delvador13

I am having this issue as well.

@Weyero

Weyero commented Sep 19, 2023

I tried to make a variation with an input image and got stuck at 40 samples with this error a couple of times.

@lllyasviel added the "bug" and "help wanted" labels Sep 19, 2023
@ryanev644

Same with me

@KrakenJet

Same here, what should I do?

@Ainmymind
Author

My RAM was 16 GB before; after I upgraded it to 32 GB, generation succeeded. I hope it works for you too.
(screenshot)

@ryanev644

My RAM was 16 GB before; after I upgraded it to 32 GB, generation succeeded. I hope it works for you too. (screenshot)

Did you increase your RAM, or was it something else?

@behnamazizi

I have the same problem.

@ajarmstron

Hi - I also get the same message. When I run Fooocus, about 40% of the image is generated, then I get the mat1 and mat2 error message.
I would appreciate it if someone could explain what this error is and whether there is anything I need to do to fix it (or if it is the software itself).
Thank you!

@MariusOberholster

Had this issue too, so I monitored the console to see when it happens. It crashes immediately after it switches to CPU and RAM to finish the generation. I'm not sure what the ins and outs of dealing with SDXL are, but I do know it has a CPU mode, so there must be some kind of carry-over error in data shape or dtype during the handoff. I'm such a noob, LOL.
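
That carry-over theory matches the mechanics of the error: if a model swap leaves a layer's weights in one dtype while the tensor arriving from the sampler is in another, the very next linear layer fails with exactly this message. A hypothetical sketch (the time_embed name is taken from the traceback; the float16/float32 split is an assumption for illustration):

import torch

time_embed = torch.nn.Linear(320, 1280)           # weights in float32
t_emb = torch.randn(1, 320, dtype=torch.float16)  # embedding left in float16

try:
    time_embed(t_emb)
except RuntimeError as e:
    print(e)  # -> mat1 and mat2 must have the same dtype

# aligning the input with the module's dtype before the call avoids the crash
out = time_embed(t_emb.to(time_embed.weight.dtype))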

@tbell511

I have the same problem.

@Delvador13

return torch.nn.functional.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype

This is where it stops and hits me with the RuntimeError above: step 20/30 of the first sampling pass (roughly 63% in my cmd.exe).

I have no idea how to fix this. I'm pretty new to all of this as well, so if someone could explain it in layman's terms I would appreciate it.

@tbell511

@Delvador13 same thing happens to me. Mine gets to 20/30 and then crashes with the same error.

Unfortunately, the maintainers of this package said they won't be back until possibly mid-October.

Let's see if another community member is able to figure the problem out - it's definitely way over my head 😅

@MariusOberholster

MariusOberholster commented Sep 23, 2023

Okay, since it's open source, I used ChatGPT to go through all the modules, and we checked all the handling of both VRAM and mat1/mat2. Nothing, only two theories: the data might not always transfer correctly through the conditional branches, or the refiner might be handled differently for VRAM as opposed to GPU.

My reason for suspecting this is that it runs well until it switches to the refiner, then it hits an error on the first step:

[Virtual Memory System] time = 1.29736s: refiner_clip released from cpu: C:\Fooocus\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
[ADM] Negative ADM = True
0%| | 0/30 [00:00<?, ?it/s]C:\Fooocus\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py:594: UserWarning: Should have tb<=t1 but got tb=14.614643096923828 and t1=14.614643.
warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
67%|██████████████████████████████████████████████████████▋ | 20/30 [05:00<01:58, 11.85s/it][Virtual Memory System] time = 2.68890s: model released from cpu: C:\Fooocus\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
[Virtual Memory System] time = 0.64107s: model loaded to cpu: C:\Fooocus\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
Refiner swapped.
70%|█████████████████████████████████████████████████████████▍ | 21/30 [05:21<02:17, 15.30s/it]
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run

This is as far as my limited python ML knowledge goes, but I certainly hope it helps to find the problem. :D

As a final note, I did make sure I have the 40GB free (70+ actually) and that my VRAM is handled in the recommended way.

@ajarmstron

ajarmstron commented Sep 23, 2023

Hi - I also get the same message. When I run Fooocus, about 40% of the image is generated, then I get the mat1 and mat2 error message. I would appreciate it if someone could explain what this error is and whether there is anything I need to do to fix it (or if it is the software itself). Thank you!

I thought I would send my command prompt log in case it helps. I note that it looks like "nice" pictures are always in the process of being generated (but I never get to see a finished one), so I am keen to get Fooocus working!!

C:\Fooocus_win64_2-0-50>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.0.78
Inference Engine exists.
Inference Engine checkout finished.
Total VRAM 6144 MB, total RAM 24177 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 Ti : native
Using xformers cross attention
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Fooocus Text Processing Pipelines are retargeted to cuda:0
[Virtual Memory System] Logic target is CPU, memory = 24177.25
[Virtual Memory System] Activated = True
model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
[Virtual Memory System] time = 0.58080s: model released from cpu: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
[Virtual Memory System] time = 0.28738s: refiner_clip released from cpu: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
Fooocus Expansion engine loaded for cuda:0.
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] New suffix: intricate, elegant, highly detailed, digital painting, artstation, concept art, matte, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha, 8 k
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] New suffix: extremely hyper detailed, character concept art, smooth, sharp focus trending on artstation, award winning art, masterpiece
[Fooocus] Encoding base positive #1 ...
[Fooocus] Encoding base positive #2 ...
[Fooocus] Encoding base negative #1 ...
[Fooocus] Encoding base negative #2 ...
[Virtual Memory System] time = 0.24545s: refiner_clip loaded to cpu: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
[Fooocus] Encoding refiner positive #1 ...
[Fooocus] Encoding refiner positive #2 ...
[Fooocus] Encoding refiner negative #1 ...
[Fooocus] Encoding refiner negative #2 ...
[Virtual Memory System] time = 0.69874s: refiner_clip released from cpu: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
[ADM] Negative ADM = True
0%| | 0/30 [00:00<?, ?it/s]C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py:594: UserWarning: Should have tb<=t1 but got tb=14.614643096923828 and t1=14.614643.
warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
67%|██████████████████████████████████████████████████████▋ | 20/30 [02:01<00:56, 5.67s/it][Virtual Memory System] time = 2.38361s: model released from cpu: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
[Virtual Memory System] time = 0.63154s: model loaded to cpu: C:\Fooocus_win64_2-0-50\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
Refiner swapped.
70%|█████████████████████████████████████████████████████████▍ | 21/30 [02:17<00:58, 6.54s/it]
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in bootstrap_inner
File "threading.py", line 953, in run
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\async_worker.py", line 391, in worker
handler(task)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\async_worker.py", line 341, in handler
imgs = pipeline.process_diffusion(
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\default_pipeline.py", line 238, in process_diffusion
sampled_latent = core.ksampler_with_refiner(
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\core.py", line 312, in ksampler_with_refiner
samples = sampler.sample(noise, positive_copy, negative_copy, refiner_positive=refiner_positive_copy,
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\samplers_advanced.py", line 437, in sample
samples = getattr(k_diffusion_sampling, "sample
{}".format(self.sampler))(self.model_k, noise, sigmas,
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\patch.py", line 275, in sample_dpmpp_fooocus_2m_sde_inpaint_seamless
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\patch.py", line 161, in patched_discrete_eps_ddpm_denoiser_forward
return self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "C:\Fooocus_win64_2-0-50\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 289, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "C:\Fooocus_win64_2-0-50\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 261, in calc_cond_uncond_batch
output = model_options['model_function_wrapper'](model_function, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\patch.py", line 170, in patched_model_function
return func(x, t, **c)
File "C:\Fooocus_win64_2-0-50\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_base.py", line 61, in apply_model
return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\modules\patch.py", line 317, in patched_unet_forward
emb = self.time_embed(t_emb)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "C:\Fooocus_win64_2-0-50\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ops.py", line 18, in forward
return torch.nn.functional.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype
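
Both tracebacks in this thread end at the same two frames: time_embed inside the UNet forward, then the Linear forward in comfy\ops.py. A defensive guard at that call site would be to cast the input to the weight's dtype before the matmul. This is only a hypothetical sketch of such a guard, not the official fix, and the real comfy.ops class may differ:

import torch

class Linear(torch.nn.Linear):
    def forward(self, input):
        # guard: force mat1 to match mat2's dtype so a model swap that
        # leaves weights in a different precision cannot crash the matmul
        if input.dtype != self.weight.dtype:
            input = input.to(self.weight.dtype)
        return torch.nn.functional.linear(input, self.weight, self.bias)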

@MariusOberholster

BREAKTHROUGH!!!

The problem is definitely the passthrough to the refiner. All you have to do is set the refiner to None in the advanced settings, and then it completes! That will let you keep generating in the meantime, until this bug is fixed. :D

WHOOHOO! :D

@tbell511

@MariusOberholster setting the refiner to none worked for me as well! Thanks for the tip!

@MariusOberholster

@MariusOberholster setting the refiner to none worked for me as well! Thanks for the tip!

@tbell511 You're very welcome!! I'm so glad we don't have to wait until mid-Oct! haha

@Ainmymind
Author

My RAM was 16 GB before; after I upgraded it to 32 GB, generation succeeded. I hope it works for you too. (screenshot)

Did you increase your RAM, or was it something else?

Yes, I increased my RAM to 32 GB.

@ryanev644

After disabling the refiner on my end, the image finished generating in cmd, but it immediately throws a new error.
(screenshot of the new error)
I might just need more RAM. I have 20 GB.

@MariusOberholster

@ryanev644 That's so weird... I have less (16 GB), so the amount of RAM you have is not the issue. The minimum GPU requirement is 4 GB, so if you have an older card, there might be an issue with the specific CUDA version, since that's where the error happens. See if you can find in the download what versions Fooocus and SDXL require, and if there's a driver update, perhaps try that. I know that's a very generic answer for this kind of thing, but it would make sense too, since the error references an API call.

@ryanev644

@MariusOberholster Alright, so I looked on the main page and got the NVIDIA driver that was recommended. After everything installed, I can now finish generating an image. Thanks for the help! Another CUDA issue comes up when I try to upscale, though, but I'll poke around.

@MariusOberholster

@ryanev644 : GREAT STUFF! Glad that resolved at least part of the issue. Hope you find the solution to the final hurdle! :D

@MariusOberholster

@lllyasviel: THANK YOU SO MUCH!!! It's working like a charm!

@mashb1t mashb1t closed this as completed Jan 1, 2024