SD3 missing support from-single-file #8546
Comments
Hi @vladmandic
ahh, sorry, i was looking at the commit log before opening an issue and somehow missed that. te3 loading from
also, do you have plans to release a diffusers==0.29.1 patch release with all the extra work that has been going on since release?
yep, we will do a patch soon once our single-file support is "vlad-approved" 😁 and this one is in #8506
cc @DN6 for the fp8 failure
@vladmandic Could you try installing from main? I'm able to load the FP8 checkpoint on my end. |
@DN6 I can load it now, but it does not load TE3 at all.

```python
pipe = diffusers.StableDiffusion3Pipeline.from_single_file('sd3_medium_incl_clips_t5xxlfp8.safetensors')
print('TE1', pipe.text_encoder)
print('TE2', pipe.text_encoder_2)
print('TE3', pipe.text_encoder_3)
```

you can see that
@DN6 I can reproduce this too |
@vladmandic can you check if this works now? #8631 |
confirmed as working with that fix. |
Describe the bug

StableDiffusion3Pipeline does implement from_single_file, which correctly loads the DiT and VAE. however, it fails to deal with any of the text encoders: TE1, TE2 and TE3.

- sd3_medium.safetensors: that is understandable, as that model does not have any TEs baked in.
- sd3_medium_incl_clips.safetensors: expectation is that TE1 and TE2 would be loaded correctly and TE3 would be skipped. true, the load does not fail, but nothing actually works. see reproduction.
- sd3_medium_incl_clips_t5xxlfp8.safetensors: expectation is the same as above, plus that the FP8 version of TE3 would be correctly loaded. right now that is not yet done and TE3 must be loaded separately.

Reproduction
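the reproduction described below can be sketched as follows; the function name and local checkpoint path are illustrative assumptions, not the exact script from the report:

```python
# Sketch of the reproduction described in this issue (diffusers==0.29.0).
# The function name and checkpoint path are illustrative; adjust the path
# to wherever the checkpoint file was downloaded.
def reproduce(checkpoint: str = "sd3_medium_incl_clips.safetensors"):
    import diffusers  # imported lazily; loading the weights is the heavy part

    # DiT and VAE load fine from the single file, but the text encoders
    # do not, so moving the pipeline to the GPU raises a runtime error.
    pipe = diffusers.StableDiffusion3Pipeline.from_single_file(checkpoint)
    pipe.to("cuda")
    return pipe
```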
loading sd3_medium_incl_clips results in a runtime error on pipe.to('cuda'), or later if the model is not moved.

enabling the two lines that load TE1 and TE2 makes the model actually work without issues.
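the manual-loading workaround can be sketched like this; the repo id, subfolder names, and encoder classes are assumptions based on the official SD3 diffusers repo layout, and supplying the encoders by hand is exactly what this issue argues should not be necessary:

```python
# Workaround sketch, not the reporter's exact code: load the text encoders
# yourself via from_pretrained and hand them to from_single_file.
# The repo id and subfolder names are assumptions based on the layout of
# the official stabilityai/stable-diffusion-3-medium-diffusers repo.
def load_with_manual_text_encoders(
    checkpoint: str = "sd3_medium_incl_clips.safetensors",
    repo: str = "stabilityai/stable-diffusion-3-medium-diffusers",
):
    import diffusers
    from transformers import CLIPTextModelWithProjection, T5EncoderModel

    # TE1 and TE2 are CLIP encoders; TE3 is the T5-XXL encoder.
    te1 = CLIPTextModelWithProjection.from_pretrained(repo, subfolder="text_encoder")
    te2 = CLIPTextModelWithProjection.from_pretrained(repo, subfolder="text_encoder_2")
    te3 = T5EncoderModel.from_pretrained(repo, subfolder="text_encoder_3")

    # Pass the preloaded encoders so the pipeline does not try (and fail)
    # to build them from the single-file checkpoint.
    return diffusers.StableDiffusion3Pipeline.from_single_file(
        checkpoint,
        text_encoder=te1,
        text_encoder_2=te2,
        text_encoder_3=te3,
    )
```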
for TE3, attempting to load sd3_medium_incl_clips_t5xxlfp8 results in the same error, so loading it manually is the only way.

all-in-all, this totally defeats the point of using from_single_file, as TE1, TE2 and TE3 all have to be manually added to the model by loading them with from_pretrained.

Logs
No response
System Info
diffusers==0.29.0
torch==2.3.1
cuda==12.1
ubuntu==24.04
Who can help?
@yiyixuxu @sayakpaul @DN6