[Bug]: Can't use IPEX with Token Merging #14434

Closed
3 of 6 tasks
zakusworo opened this issue Dec 26, 2023 · 3 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

zakusworo commented Dec 26, 2023

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When using --use-ipex with a non-zero Token Merging ratio, generation fails with AttributeError: 'str' object has no attribute 'type'.

Steps to reproduce the problem

  1. Launch webui-user.bat (with --use-ipex set in Commandline Args)
  2. SD WebUI loads
  3. Go to Settings -> Optimizations -> set a Token Merging Ratio
  4. Go to txt2img
  5. Write a prompt and click Generate
  6. Generation fails with AttributeError: 'str' object has no attribute 'type' (the failing call is illustrated below)
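
For context, the call that ultimately fails is tomesd's generator initialization (tomesd/utils.py in the traceback below). In stock PyTorch that call is fine; it only breaks here because --use-ipex installs a hijack on torch.Generator whose condition check receives the device as a plain string. A minimal standalone illustration of the call tomesd makes:

    import torch

    # tomesd/utils.py builds a CPU generator seeded from the global RNG state,
    # passing the device as the string "cpu":
    gen = torch.Generator(device="cpu").set_state(torch.get_rng_state())
    print(gen.device)  # cpu
    # Under --use-ipex, torch.Generator is wrapped by a hijack whose condition
    # lambda evaluates device.type == "xpu"; a plain str has no .type, hence the error.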

What should have happened?

In SD.Next, IPEX and Token Merging run together successfully.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2023-12-26-10-37.json

Console logs

*** Error completing request
*** Arguments: ('task(g55eh4oo1y1hfg6)', '1girl', '', [], 8, 'LCM', 1, 1, 1, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000028E38F85F60>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000028E38F86AA0>, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\processing.py", line 747, in process_images
        res = process_images_inner(p)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\processing.py", line 881, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\processing.py", line 1180, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_lcm.py", line 86, in sample_lcm
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 185, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_lcm.py", line 76, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_lcm.py", line 61, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\tomesd\patch.py", line 59, in _forward
        m_a, m_c, m_m, u_a, u_c, u_m = compute_merge(x, self._tome_info)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\tomesd\patch.py", line 24, in compute_merge
        args["generator"] = init_generator(x.device)
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\tomesd\utils.py", line 29, in init_generator
        return init_generator(torch.device("cpu"))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\venv\lib\site-packages\tomesd\utils.py", line 24, in init_generator
        return torch.Generator(device="cpu").set_state(torch.get_rng_state())
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 25, in __call__
        if not self.__cond_func or self.__cond_func(self.__orig_func, *args, **kwargs):
      File "C:\Users\great\OneDrive\Desktop\stable-diffusion-webui\modules\xpu_specific.py", line 34, in <lambda>
        lambda orig_func, device=None: device is not None and device.type == "xpu")
    AttributeError: 'str' object has no attribute 'type'

---

Additional information

Screenshot 2023-12-26 174349
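
The last frames of the traceback above show the mismatch: tomesd passes device="cpu" as a string, while the condition lambda in modules/xpu_specific.py assumes a torch.device object and reads device.type from it. A minimal sketch of a guard that tolerates both forms (an illustration only, not necessarily what #14562 does):

    import torch

    def is_xpu_device(device=None):
        # Hypothetical helper: normalize device strings to torch.device before
        # inspecting .type, so "cpu"/"xpu" strings no longer raise AttributeError.
        if device is None:
            return False
        if isinstance(device, str):
            device = torch.device(device)
        return device.type == "xpu"

    print(is_xpu_device("cpu"))                # False
    print(is_xpu_device(torch.device("cpu")))  # False
    print(is_xpu_device("xpu"))                # True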

zakusworo added the bug-report label on Dec 26, 2023
Nuullll mentioned this issue on Jan 6, 2024

Nuullll (Contributor) commented Jan 6, 2024

#14562

zakusworo (Author) commented Jan 6, 2024

#14562

nice.
So, does that commit also fix the "generate" error with IPEX 2.1?

Nuullll (Contributor) commented Jan 6, 2024

#14562

nice. So, does that commit also fix the "generate" error with IPEX 2.1?

yes

akx closed this as completed Jan 22, 2024