
[ONNX] Is it possible to load LORAs to be used in the ONNX pipelines? #2493

Closed
GreenLandisaLie opened this issue Feb 25, 2023 · 5 comments
Labels: stale (Issues that haven't received updates)

Comments

@GreenLandisaLie

A while ago I asked about Textual Inversion embeddings and the answer was basically 'no'. There was a way around it, but it wasn't an actual solution. However, that was some time ago and things might have changed.
So, is it possible now? And if so, is there any documentation on the matter that I can read?
Sorry, this is not an actual issue.

@GBalogo

GBalogo commented Feb 26, 2023

AFAIK, nothing has changed: you'd still have to write your own code to "load" LoRAs into an ONNX pipeline, and that would be a massive headache prone to breaking, even if it only supported the default diffusers-style LoRAs. There are "solutions" that convert LoRAs from other formats to the diffusers format and then inject them into a model before converting to ONNX (see the sketch below), but I'd think most people would give up before figuring it out.
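
Roughly, a sketch of that workaround, assuming a diffusers-format LoRA and a diffusers version that has `load_lora_weights()` / `fuse_lora()` (the paths are placeholders):

```python
# Sketch of the "fuse the LoRA, then convert" workaround. Assumes a
# diffusers-format LoRA and a diffusers version with load_lora_weights()
# and fuse_lora(); the paths are placeholders.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Bake the LoRA into the base weights so the exported ONNX graph needs no
# LoRA-specific handling at inference time. The scale is frozen here, at export.
pipe.load_lora_weights("path/to/lora")  # placeholder path
pipe.fuse_lora(lora_scale=0.8)

# Save the fused pipeline, then run the usual diffusers conversion script
# (scripts/convert_stable_diffusion_checkpoint_to_onnx.py) on this directory
# to get an ONNX pipeline with the LoRA permanently baked in.
pipe.save_pretrained("./sd15-with-lora-fused")
```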

@GreenLandisaLie
Author

GreenLandisaLie commented Feb 27, 2023

There are "solutions" to convert LoRAs from other formats to diffusers format, then inject into a model before converting to ONNX

Yep, that's exactly what I had to do with Textual Inversion embeddings. It worked, but the resulting models would have uneven layer sizes, and if I tried to convert an 'injected' model back to .ckpt and use it to merge with other models, it would fail. Besides, embeddings should really only be loaded on demand - especially LoRAs.
With LoRAs, people adjust the overall weight according to their needs - something that would be impossible with that approach, so it wouldn't do.
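
For contrast, this is roughly what that per-call weight adjustment looks like in the regular PyTorch diffusers pipeline (a sketch assuming a LoRA loaded via `load_lora_weights`; the path is a placeholder) - exactly the knob that disappears once the LoRA is fused and exported to ONNX:

```python
# Sketch: runtime LoRA scaling in the regular (PyTorch) diffusers pipeline.
# Assumes a diffusers-format LoRA; the path is a placeholder.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("path/to/lora")  # placeholder path

# The LoRA contribution is scaled per generation call rather than baked in.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
```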

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions bot added the stale label on Mar 28, 2023
@ssube
Contributor

ssube commented Mar 29, 2023

This is possible, but only if the ONNX model's nodes and initializers still have their original names. Running some of the ORT optimization scripts will rename them, which makes it very difficult to line things up. I have some notes on the nodes, names, etc. in ssube/onnx-web#213.
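
A quick way to check whether an exported UNet still has the original names (a sketch using the onnx package; the file path is a placeholder):

```python
# Sketch: list the initializer names of an exported UNet to check whether the
# original PyTorch parameter names survived. The file path is a placeholder.
import onnx

model = onnx.load("unet/model.onnx")

# In an unoptimized export these names mirror the PyTorch parameter names
# (e.g. "down_blocks.0.attentions.0...to_q.weight"), which is what a
# LoRA-patching script would match its keys against. After ORT optimization
# passes they are typically replaced with generic names.
for init in list(model.graph.initializer)[:20]:
    print(init.name)
```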

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions bot closed this as completed on May 1, 2023