Performing swap in latent space #158
wlvchandler started this conversation in General · 0 replies
Is it possible to use ReActor to apply the swap before the decoding stage?
I suppose the more important question is whether there would be any benefit to it, other than the speed of processing in latent space versus pixel space.
It's very possible that I'm misunderstanding how the entire process works, but couldn't the features of the target face be applied more accurately there? Or is that effectively what training a LoRA is for?
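For reference, here is roughly how I picture the current post-decode flow, using diffusers and insightface directly. This is just a sketch of my understanding, not ReActor's actual code; the VAE checkpoint, the `inswapper_128.onnx` path, and the function layout are illustrative assumptions.

```python
# Sketch: face swap applied AFTER the VAE decode, i.e. in pixel space.
# Not ReActor's real implementation; model names/paths are assumptions.
import numpy as np
import torch
from diffusers import AutoencoderKL
from insightface.app import FaceAnalysis
from insightface.model_zoo import get_model

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # assumed VAE
analyzer = FaceAnalysis(name="buffalo_l")
analyzer.prepare(ctx_id=0)
swapper = get_model("inswapper_128.onnx")  # assumed local path to the swap model

def swap_after_decode(latents: torch.Tensor, source_bgr: np.ndarray) -> np.ndarray:
    # 1. Leave latent space: decode the 4-channel latent into an RGB image.
    with torch.no_grad():
        image = vae.decode(latents / vae.config.scaling_factor).sample  # (1, 3, H, W)
    rgb = ((image[0].permute(1, 2, 0).clamp(-1, 1) + 1) * 127.5).byte().cpu().numpy()
    bgr = np.ascontiguousarray(rgb[:, :, ::-1])  # insightface expects BGR

    # 2. Detect faces in pixel space; the swapper has no notion of VAE latents.
    source_face = analyzer.get(source_bgr)[0]   # assumes a face is found
    target_face = analyzer.get(bgr)[0]

    # 3. Swap and paste back, still entirely in pixel space.
    return swapper.get(bgr, target_face, source_face, paste_back=True)
```

If that picture is right, doing the swap before decoding would require a swapper that operates on the VAE's 4-channel latents rather than on RGB crops, which, as far as I can tell, inswapper_128 doesn't.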