
PEFT LoRA not working with xformers. #5504

Closed
AnyISalIn opened this issue Oct 24, 2023 · 2 comments · Fixed by #5697
Labels: bug (Something isn't working)

Comments

AnyISalIn (Contributor) commented:

Describe the bug

PEFT LoRA not working with xformers.

Reproduction

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("./models/checkpoint/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
res = pipe(prompt="1girl", num_inference_steps=20)
Partial traceback (the call fails inside PyTorch's module call dispatch, torch/nn/modules/module.py):

   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

TypeError: Linear.forward() got an unexpected keyword argument 'scale'
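
For context, this is plain PyTorch behaviour rather than an xformers problem: with the PEFT backend active, the attention projections are regular torch.nn.Linear (or PEFT) layers whose forward() only accepts the input tensor, while the xformers attention processor still passes an extra scale argument. A minimal standalone sketch (plain torch, nothing diffusers-specific) reproduces the same TypeError:

import torch

linear = torch.nn.Linear(4, 4)
x = torch.randn(1, 4)

linear(x)             # works: Linear.forward() takes only the input tensor
linear(x, scale=1.0)  # TypeError: Linear.forward() got an unexpected keyword argument 'scale'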

Logs

No response

System Info

- `diffusers` version: 0.22.0.dev0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Huggingface_hub version: 0.17.3
- Transformers version: 4.34.0
- Accelerate version: 0.23.0
- xFormers version: 0.0.22.post4
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

Who can help?

No response

AnyISalIn added the bug (Something isn't working) label on Oct 24, 2023
AnyISalIn (Contributor, Author) commented:

I think adding code along these lines would fix the issue:

class XFormersAttnProcessor:
    # ... inside __call__ ...
    # When the PEFT backend is active, the projection layers are plain
    # Linear / PEFT modules, so the extra `scale` argument must be dropped.
    args = () if USE_PEFT_BACKEND else (scale,)
    # ...
    query = attn.to_q(hidden_states, *args)
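
For completeness, a minimal sketch of how that guard would look applied to the other projections in the processor (illustrative only, assuming the attention module exposes to_q/to_k/to_v/to_out as in diffusers; not the exact library code):

# Illustrative sketch: apply the same guard to every projection call so the
# extra positional `scale` is only forwarded when the legacy (non-PEFT)
# LoRA-compatible layers are in use.
args = () if USE_PEFT_BACKEND else (scale,)

query = attn.to_q(hidden_states, *args)
key = attn.to_k(encoder_hidden_states, *args)
value = attn.to_v(encoder_hidden_states, *args)
# ... run xformers memory-efficient attention ...
hidden_states = attn.to_out[0](hidden_states, *args)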

AnyISalIn (Contributor, Author) commented:

I have created a pull request to fix this problem. @younesbelkada @sayakpaul

#5506
