convert_lora_safetensor_to_diffusers #2829
Comments
The conversion of the LoRA should only be a key remapping, right?
I'm not sure this is the case, as the call to the
This outputs a directory much like https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main, which is a full model, whereas in the
Sorry, I misunderstood @Narsil. You are right, it is only a key remapping.
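As an aside, here is a pure-Python sketch of what "only a key remapping" means: the tensor values are untouched and only the key names change. The prefixes below are illustrative stand-ins, not the real diffusers mapping:

```python
# Illustrative only: rename checkpoint keys while leaving the tensor
# values untouched. The real diffusers mapping is more involved; these
# prefixes are hypothetical stand-ins.

def remap_lora_keys(state_dict):
    """Return a new dict whose keys follow a different naming scheme."""
    remapped = {}
    for key, value in state_dict.items():
        new_key = key.replace("lora_unet_", "unet.")
        new_key = new_key.replace("lora_te_", "text_encoder.")
        remapped[new_key] = value  # the weights themselves are unchanged
    return remapped

example = {
    "lora_unet_down_blocks_0.alpha": 8.0,
    "lora_te_text_model_encoder.alpha": 4.0,
}
print(remap_lora_keys(example))
```

Because only the keys change, the remapped dict holds exactly the same weight data as the original.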
IIUC we output the full model for generality across different pipelines, and also to keep the output artifacts as self-contained as possible. If you have some ideas on how we can improve this bit, feel free to open a PR :)
I was expecting the script to generate a LoRA and not a whole model. Is it possible to convert the safetensors LoRA to a diffusers LoRA? Or should we wait for the next release, which should support safetensors LoRA without conversion? Edit: I found a conversion script here: https://github.com/haofanwang/Lora-for-Diffusers/blob/18adfa4da0afec46679eb567d5a3690fd6a4ce9c/format_convert.py#L154-L161
For me this script does not work: it returns a 48 MB `.bin` from a 144 MB safetensor.
We should try to have an improved
@patrickvonplaten this would be fantastic!
Related PR: #2882
Updated PR: #2918
Regarding the
Sorry for the late reply @sayakpaul. If I understand the linked comment, that would be for
The workflow still remains the same. You first download the file. Then you load it:

```python
import safetensors

pt_state_dict = safetensors.torch.load_file(model_filepath, device="cpu")
```

Referencing: diffusers/src/diffusers/loaders.py, line 188 in b811964.

Then we can do:

```python
torch.save(pt_state_dict, "pt_state_dict.bin")
```

And then it should be just:

```python
pipeline.unet.load_attn_procs("pt_state_dict.bin")
```

A couple of other things that might be worth noting regarding loading. Note it's also possible to directly load a
We can then do:

```python
pipeline.unet.load_attn_procs("username/repo_containing_lora_files_in_safetensors", use_safetensors=True)
```

Explicitly specifying
For more details, I welcome you to check out: diffusers/src/diffusers/loaders.py, lines 158 to 193 in b811964.
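As a side note on the loading step above, the safetensors container itself is simple: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype/shape/offsets, then the raw tensor bytes. A stdlib-only round-trip sketch, for illustration only (real code should use the `safetensors` package as shown above):

```python
import json
import struct

# Sketch of the safetensors container, using only the standard library:
# an 8-byte little-endian header length, a JSON header, then the
# concatenated raw tensor bytes. F32 tensors only, for brevity.

def write_safetensors(tensors):
    """tensors: name -> list of floats; returns the serialized bytes."""
    header, body, offset = {}, b"", 0
    for name, values in tensors.items():
        raw = struct.pack(f"<{len(values)}f", *values)
        header[name] = {"dtype": "F32", "shape": [len(values)],
                        "data_offsets": [offset, offset + len(raw)]}
        body += raw
        offset += len(raw)
    head = json.dumps(header).encode("utf-8")
    return struct.pack("<Q", len(head)) + head + body

def read_safetensors(blob):
    """Inverse of write_safetensors; returns name -> list of floats."""
    (n,) = struct.unpack_from("<Q", blob, 0)
    header = json.loads(blob[8:8 + n])
    out = {}
    for name, meta in header.items():
        start, end = meta["data_offsets"]
        count = (end - start) // 4  # F32 is 4 bytes per element
        out[name] = list(struct.unpack(f"<{count}f",
                                       blob[8 + n + start:8 + n + end]))
    return out

blob = write_safetensors({"lora.up.weight": [0.5, 1.5]})
print(read_safetensors(blob))
```

This is only meant to demystify the format; the `safetensors` library additionally validates offsets and supports all dtypes and zero-copy loading.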
I hope this helps.
Thanks @sayakpaul for the detailed explanation. I'm running into this issue: #3064. I'll follow that one for updates.
Closing this issue then. Feel free to re-open. |
Thanks to @sayakpaul's answer. I found that for recent safetensors I should load by:
because directly using
Just posting here in case it might be helpful.
This script
https://github.com/huggingface/diffusers/blob/main/scripts/convert_lora_safetensor_to_diffusers.py
integrates the LoRA into the pipeline and then outputs said pipeline. Is there a method to turn the `.safetensors` LoRA into a `.bin` file that can be loaded in dynamically, similar to the snippet below? As seen here: https://huggingface.co/docs/diffusers/training/lora
My complaint is that there is a lot of redundancy in the model if you want different LoRAs to be used.
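To make the redundancy concrete, here is a toy back-of-the-envelope with hypothetical sizes (assuming a full Stable Diffusion pipeline on the order of 4 GB and a LoRA file on the order of 150 MB):

```python
# Hypothetical sizes, for illustration only.
BASE_GB = 4.0    # assumed size of one full Stable Diffusion pipeline
LORA_GB = 0.15   # assumed size of one LoRA file

def storage_cost(n_loras):
    """Compare shipping N fully merged pipelines vs one base + N LoRAs."""
    merged = n_loras * BASE_GB             # each LoRA baked into a full copy
    dynamic = BASE_GB + n_loras * LORA_GB  # one shared base, small deltas
    return merged, dynamic

merged, dynamic = storage_cost(5)
print(f"merged: {merged:.2f} GB, dynamic: {dynamic:.2f} GB")
# → merged: 20.00 GB, dynamic: 4.75 GB
```

With merged outputs the base weights are duplicated once per LoRA, which is exactly the redundancy complained about above.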