Incompatible Safetensors Metadata and Weight Name Issues in SDXL ControlNet Setup #9668
The following code is part of my SDXL ControlNet execution code.
I have two questions. While running this code, I noticed that loading the UNet with UNet2DConditionModel.from_pretrained expects the weights file to have one specific name. Similarly, the pipeline (pipe) requires the weights file to be named differently. The problem I'm facing is that the required weight filenames differ between these two parts, which causes issues. Could you help me find a solution for this? I temporarily worked around it by copying the same 6.5 GB weights file so that both required filenames exist. However, a new problem has arisen:

I encounter this same issue even when trying the original SDXL 1.0 base model, not just my finetuned model. It says there is no file metadata. How can I resolve this? The versions of each library are as follows: transformers 4.45.2 …
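If the error is about missing safetensors metadata, the file header can be inspected directly to see whether it carries the "format" tag the Hugging Face loaders look for. A minimal sketch, assuming that is what the error refers to; the path is a placeholder:

from safetensors import safe_open
from safetensors.torch import load_file, save_file

# Placeholder path: point this at the checkpoint that triggers the error.
src = "unet/diffusion_pytorch_model.safetensors"

# Print the header metadata; None means the file was written without any.
with safe_open(src, framework="pt") as f:
    print(f.metadata())

# Re-save the same tensors with the "pt" format tag that the loaders expect.
state_dict = load_file(src)
save_file(state_dict, src, metadata={"format": "pt"})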
Replies: 1 comment 1 reply
Hi, usually when you use UNet2DConditionModel.from_pretrained it is with a diffusers unet model, which requires a config.json, so this should always be a directory and not a file.

You're saying that the weights_name doesn't get executed, but this is not correct: from_pretrained doesn't have a weights_name argument, so you're just passing a random argument.

If you're using from_pretrained to load a unet, it should be a directory that has a config.json file, and the model weights file should be named either diffusion_pytorch_model.safetensors or diffusion_pytorch_model.fp16.safetensors.

Also, you're mixing more stuff: the error you're getting is with the pipeline, probably the model variable is …

It's hard to tell the specific error because there are unknowns to us; if you want more help you'll need to provide fully reproducible code with a link to the weights and the file structure. Are you sure you need to do this? If you still need to use something like this, you will probably need to load the models with vanilla PyTorch. As an example, you can load the unet without problems using this code:

import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline, UNet2DConditionModel
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
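# The fp16-fix VAE avoids numerical issues when running the SDXL VAE in float16.
# variant="fp16" below tells from_pretrained to look for the .fp16 weight files
# (diffusion_pytorch_model.fp16.safetensors) inside the "unet" subfolder.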
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="unet",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, vae=vae, torch_dtype=torch.float16, variant="fp16"
)

If you want to load a custom unet, for example one of the SDXL-Lightning ones, you can use something like this:

import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from diffusers import AutoencoderKL, StableDiffusionXLPipeline, UNet2DConditionModel
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
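# Rebuild the unet architecture from the base SDXL config (randomly initialized weights),
# then overwrite those weights with the Lightning checkpoint downloaded below.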
config = UNet2DConditionModel.load_config("stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet")
unet = UNet2DConditionModel.from_config(config).to("cuda", torch.float16)
unet.load_state_dict(
    load_file(hf_hub_download("ByteDance/SDXL-Lightning", "sdxl_lightning_4step_unet.safetensors"), device="cuda")
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, vae=vae, torch_dtype=torch.float16, variant="fp16"
)
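As a side note on the filename mismatch from the original question: if the finetuned unet can be loaded once, re-saving it with save_pretrained writes a config.json plus a weights file with exactly the name from_pretrained expects, and variant="fp16" switches to the .fp16 filename, so keeping two 6.5 GB copies shouldn't be necessary. A minimal sketch, with placeholder directory names:

import torch
from diffusers import UNet2DConditionModel

# Placeholder paths: adjust to where the finetuned checkpoint actually lives.
unet = UNet2DConditionModel.from_pretrained("my_finetuned_sdxl/unet", torch_dtype=torch.float16)

# Writes config.json + diffusion_pytorch_model.safetensors into the target directory.
unet.save_pretrained("my_finetuned_sdxl/unet_clean")

# Writes diffusion_pytorch_model.fp16.safetensors instead, matching loads that pass variant="fp16".
unet.save_pretrained("my_finetuned_sdxl/unet_clean", variant="fp16")

Saving this way should also write the standard safetensors "format" metadata, which may avoid the missing-metadata error as well.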