missing metadata when converting safetensors #191
Comments
This was made using https://github.com/kohya-ss/sd-scripts and the keys in the file will probably be different.
Good to know, thanks. I was aware of that repo, but didn't think to retry the failures there. Are there some known/significant keys that can safely be used to programmatically tell which scripts produced a particular model?
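One way to answer that is a key-prefix heuristic. This is a sketch, not a spec: the `lora_unet_`/`lora_te_` prefixes are the convention I've seen in sd-scripts output, and the index-style `:up`/`:down` keys are an assumption based on the discussion below about this repo using indices.

```python
def guess_lora_source(keys):
    """Heuristically guess which training script produced a LoRA file
    from its tensor key names. The prefixes checked here are
    conventions observed in the wild, not a guaranteed specification."""
    # kohya-ss/sd-scripts names tensors by module path with these prefixes
    if any(k.startswith(("lora_unet_", "lora_te_")) for k in keys):
        return "kohya-ss/sd-scripts"
    # index-style keys with :up/:down suffixes (assumed cloneofsimo layout)
    if any(k.endswith((":up", ":down")) for k in keys):
        return "cloneofsimo/lora"
    return "unknown"
```

Feeding it the key list from a loaded file (e.g. `state_dict.keys()`) gives a best-effort guess that can fall back to "unknown" rather than crashing.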
@ssube That would be awesome. We need something similar to an API specification that outlines the various properties of the outputs.
I suspect the difference is in the key naming. I can't promise this is all correct, but most of what I've found so far is documented in https://github.com/ssube/onnx-web/blob/main/docs/converting-models.md
Can we monkey-patch a LoRA trained by sd-scripts?
Still having this issue. Please fix this, or help us use LoRAs trained with the kohya-ss GUI with diffusers. I don't want to install the AUTOMATIC1111 webUI just to run LoRA inference...
I've made some progress and learned some things, but much of it is specific to ONNX models. It's possible to load and blend the LoRA weights with the base model at runtime, in either PyTorch or ONNX format, as long as you have the correct node names: ssube/onnx-web#213. The LoRAs produced by sd-scripts have all of the necessary names, but the ones from this repo seem to use the index instead, e.g. https://github.com/cloneofsimo/lora/blob/master/lora_diffusion/lora.py#L301. That makes it a little more difficult to find the right nodes, but the math is otherwise the same.
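For reference, "the math is otherwise the same" means the standard LoRA fold-in: the low-rank pair is multiplied out and added to the base weight. A minimal sketch in numpy (the plain `alpha` multiplier is an assumption; some scripts scale by `alpha / rank` instead, so check the file's convention):

```python
import numpy as np

def merge_lora(base, down, up, alpha=1.0):
    """Fold a LoRA weight pair into a base weight matrix:
        W' = W + alpha * (up @ down)

    Shapes: base is (out, in), down is (rank, in), up is (out, rank).
    Note: scaling conventions vary between trainers; this sketch
    applies a plain alpha multiplier."""
    return base + alpha * (up @ down)
```

Once the merged matrices are written back into the state dict, the model can be run with no LoRA-aware code at all, which is what makes the later ONNX export possible.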
@ssube were you able to use the LoRA at runtime with Python? Please show an example ❤️
Hi, I'm also encountering the problem of not being able to use LoRAs from Civitai. My problem is that the keys of the LoRAs do not match the keys used in this repo. Is there any way to convert a Civitai-style LoRA into the format supported by this repo?
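There's no universal conversion table, but once you've worked out the old-to-new key correspondence for a given pair of formats, applying it is mechanical. A sketch (the mapping dict itself is hypothetical and has to be built per format pair):

```python
def remap_keys(state_dict, key_map):
    """Rename tensors using an explicit old->new key mapping.
    Keys without an entry in key_map are kept unchanged, so a
    partial mapping still produces a usable dict."""
    return {key_map.get(k, k): v for k, v in state_dict.items()}
```

The hard part is constructing `key_map` correctly; getting a single entry wrong usually shows up as a shape-mismatch or missing-key error at load time rather than a silent failure, which at least makes mistakes visible.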
My original question has been answered, so I'm going to close this issue, but here's a quick dump of everything I learned along the way:
Some example keys for this repo:
From another LoRA, not from this repo:
And from a Hadamard-product LyCORIS, just for completeness:
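A quick way to compare key layouts like the ones above is to bucket the keys by their leading path component. A small sketch (the separator and the example prefixes are illustrative assumptions):

```python
from collections import Counter

def key_prefix_counts(keys, sep="."):
    """Count tensor keys by their first path component, giving a
    compact fingerprint of a LoRA file's layout for side-by-side
    comparison of different formats."""
    return Counter(k.split(sep, 1)[0] for k in keys)
```

Running this over the key lists of two files makes it obvious at a glance whether they share a naming scheme before attempting any merge or conversion.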
I'm trying to use this repo to merge a bunch of LoRA weights into their base models, as the first step in a long and grueling conversion to ONNX. It's working for some files, but failing on many of the .safetensors files that I try, complaining about a lack of metadata for the weights. The example in the diffusers docs, https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4, works just fine and produces a merged model directory. I've been grabbing files tagged with LoRA from Civitai for testing, and https://civitai.com/models/8039/jackscape-samurai-jack-background-style-lora is a smaller one that fails. I checked out develop and installed it in a venv, but there's no difference between the latest develop and the last release.
The file is a valid safetensor: I can load and inspect it with the library, but it doesn't have a whole lot of metadata:
This looks similar to #141, which was closed by OP with a link to https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/discussions/3#63c69d6a02d8c96233359025, but that doesn't offer much more detail.
Am I missing something, or are the tensor files themselves missing the metadata?