RuntimeError: mat1 and mat2 must have the same dtype #411
Comments
I am also having this exact same issue. It happens even if I'm not using any advanced options. I tried several times, and every single time I got this error. It also happens only when starting the refiner.
I am having this issue as well.
I tried to make a variation with an input image and got stuck at 40 samples with this error a couple of times.
Same with me.
Same, what should I do?
I have the same problem.
Hi - I also get the same message. When I run Fooocus, about 40% of the image is generated, then I get the mat1 and mat2 error message.
Had this issue too, so I monitored the console to see when it happens. It crashes immediately after it switches to CPU and RAM to finish the generation. I'm not sure what the ins and outs are of dealing with SDXL, but I do know it has a CPU mode, so there must be some kind of carry-over error in data shape or something, hehe. I'm such a noob, LOL.
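The CPU-swap theory above is plausible, because moving a module between devices never changes its dtype. A minimal sketch in plain PyTorch (illustration only, not Fooocus internals) of how half-precision weights can end up facing full-precision activations after a swap:

```python
import torch

# A layer cast to float16, as is common for GPU inference, keeps float16
# weights after being moved to the CPU, while fresh CPU tensors default
# to float32.
layer = torch.nn.Linear(4, 3).half()  # half-precision weights
layer = layer.to("cpu")               # moving devices does not change dtype

x = torch.randn(1, 4)                 # float32 by default
print(layer.weight.dtype, x.dtype)    # torch.float16 torch.float32
layer(x)                              # RuntimeError: mat1 and mat2 must have the same dtype
```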
I have the same problem.
return torch.nn.functional.linear(input, self.weight, self.bias) is where it stops and hits me with the RuntimeError above, at Step 20/30 in the 1-th Sampling (roughly 63% in my cmd.exe). I have no idea how to fix this; I'm pretty new to all of this as well, so if someone could explain in layman's terms, I would appreciate it.
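To make the message itself less mysterious: torch.nn.functional.linear computes input @ weight.T + bias, and the tensors involved must share one dtype (mat1 is the input, mat2 is the weight). A two-tensor sketch (illustration only) that reproduces the exact error:

```python
import torch

x = torch.randn(1, 4, dtype=torch.float16)  # e.g. a half-precision activation
w = torch.randn(3, 4, dtype=torch.float32)  # e.g. a full-precision weight

# Mixing float16 and float32 triggers the same failure seen in the traceback.
torch.nn.functional.linear(x, w)  # RuntimeError: mat1 and mat2 must have the same dtype
```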
@Delvador13 Same thing happens to me. Mine gets to 20/30 and then crashes with the same error. Unfortunately, the maintainers of this package said they won't be back until possibly mid-October. Let's see if another community member is able to figure the problem out; it's definitely way over my head 😅
Okay, since it's open source, I used ChatGPT to go through all the modules, and we checked all the handling of both VRAM and mat1/mat2. Nothing; only the theories that it might not always transfer correctly on the ifs, or that the refiner might be handled differently for VRAM, as opposed to GPU.

My reason for suspecting this is that it runs well until it switches to the refiner, then it hits an error on the first step:

[Virtual Memory System] time = 1.29736s: refiner_clip released from cpu: C:\Fooocus\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors

This is as far as my limited Python ML knowledge goes, but I certainly hope it helps to find the problem. :D

As a final note, I did make sure I have the 40GB free (70+ actually) and that my VRAM is handled in the recommended way.
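If the dtype really is lost during the refiner swap, one local workaround would be a defensive cast in the linear wrapper that the traceback below points at (comfy\ops.py, line 18). This is only a sketch under that assumption, not an official fix:

```python
import torch

class Linear(torch.nn.Linear):
    def forward(self, input):
        # Defensive cast (sketch, not the upstream fix): if the refiner swap
        # leaves the weights in a different dtype than the incoming
        # activations, match the input to the weight instead of crashing.
        if input.dtype != self.weight.dtype:
            input = input.to(self.weight.dtype)
        return torch.nn.functional.linear(input, self.weight, self.bias)
```

Note that this hides the mismatch rather than curing it; the cleaner fix is for the swap to keep activations and weights in the same dtype in the first place.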
I thought I would send my command prompt log in case it helps. I note that it looks like "nice" pictures are always in the process of being generated (but I never get to see a finished one), so I am keen to get Fooocus working!!

C:\Fooocus_win64_2-0-50>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
To create a public link, set share=True in launch().
BREAKTHROUGH!!! The problem is definitely the passthrough to the refiner. All you have to do is set the refiner in advanced settings to None, then it completes! That will let you keep generating in the meantime, until this bug is fixed. :D WHOOHOO! :D
@MariusOberholster Setting the refiner to None worked for me as well! Thanks for the tip!
@tbell511 You're very welcome!! I'm so glad we don't have to wait until mid-October! haha
@ryanev644 That's so weird... I have less (16GB), so the amount of RAM you have is not the issue. The minimum requirement for the GPU is 4GB, so if you have an older one, it might be that there is an issue with the specific CUDA version, since that's where the error happens. See if you can find in the download page what versions Fooocus and SDXL require, and if there's a driver update, perhaps try that. I know that's a very generic answer for this kind of thing, but it would make sense too, since it references an API call issue.
@MariusOberholster Alright, so I looked on the main page and got the NVIDIA driver that was recommended. After everything installed, I can now finish generating an image. Thanks for the help! Another CUDA issue comes up when I try to upscale, though, but I'll poke around.
@ryanev644: GREAT STUFF! Glad that resolved at least part of the issue. Hope you find the solution to the final hurdle! :D
@lllyasviel: THANK YOU SO MUCH!!! It's working like a charm!
67%|██████████████████████████████████████████████████████▋ | 20/30 [03:39<01:46, 10.69s/it]
[Virtual Memory System] time = 2.24801s: model released from cpu: D:\AI\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
[Virtual Memory System] time = 0.40300s: model loaded to cpu: D:\AI\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
Refiner swapped.
70%|█████████████████████████████████████████████████████████▍ | 21/30 [03:55<01:41, 11.24s/it]
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in bootstrap_inner
File "threading.py", line 953, in run
File "D:\AI\Fooocus\modules\async_worker.py", line 312, in worker
handler(task)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\modules\async_worker.py", line 268, in handler
imgs = pipeline.process_diffusion(
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\modules\default_pipeline.py", line 227, in process_diffusion
sampled_latent = core.ksampler_with_refiner(
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\modules\core.py", line 281, in ksampler_with_refiner
samples = sampler.sample(noise, positive_copy, negative_copy, refiner_positive=refiner_positive_copy,
File "D:\AI\Fooocus\modules\samplers_advanced.py", line 239, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas,
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 644, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
File "D:\AI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 613, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Fooocus\modules\patch.py", line 44, in patched_discrete_eps_ddpm_denoiser_forward
return self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 289, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 263, in calc_cond_uncond_batch
output = model_function(input_x, timestep, **c).chunk(batch_chunks)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_base.py", line 61, in apply_model
return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 611, in forward
emb = self.time_embed(t_emb)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "D:\AI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\AI\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\ops.py", line 18, in forward
return torch.nn.functional.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype