Diffusers 0.30.1: Merging LoRA adapters without modifying original model structure #9382
Replies: 2 comments
- @sayakpaul @yiyixuxu any input on this?
- You need to call
Is there a way to fully merge LoRA weights into the base model in newer versions of Diffusers so that the resulting model graph is unmodified, as it was in version 0.26.3?
Previous behavior (Diffusers 0.26.3):
Current behavior (Diffusers 0.30.1):
Before merging
After merging
I followed the recommendations in https://huggingface.co/docs/diffusers/en/using-diffusers/merge_loras but observed the same behavior as listed above.
Goal
My use case requires merging LoRA weights into the model in a way that preserves the original model structure.
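For context, a "full" merge folds the low-rank update directly into the base weight, so no adapter wrapper modules need to remain in the graph. A minimal sketch of that arithmetic (plain Python with tiny hypothetical shapes; real layers would use torch tensors inside an `nn.Linear`):

```python
# Sketch of a full LoRA merge: fold the scaled low-rank update
# (alpha / r) * (B @ A) directly into the base weight W in place.
# Shapes here are hypothetical toy values for illustration.

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A); the shape of W is unchanged."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Hypothetical 2x2 base weight with a rank-1 LoRA update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]             # r x in_features
B = [[0.5], [0.25]]          # out_features x r
W_merged = merge_lora(W, A, B, alpha=1.0, r=1)

# Because the merged weight simply replaces W, the module graph is
# identical to the pre-LoRA model -- no extra adapter submodules.
print(W_merged)  # -> [[1.5, 1.0], [0.25, 1.5]]
```

The question is therefore whether newer Diffusers versions can still perform this in-place replacement, rather than leaving adapter wrapper modules in the model after merging.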
Reproduce
Script to reproduce the above behavior