# Visual artifacts when using DPM++ schedulers and SDXL without the refiner model #5433
Would be good to get this one fixed, as it's been a problem since the SDXL launch.
Could you try this script instead? A few notes: I generated a few images and they look fine to me, but I might have a less trained eye, so let me know if the problem still persists.

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    add_watermarker=False,
)
pipe = pipe.to('cuda')

# Configure DPM++ 2M SDE from the pipeline's existing scheduler config
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++"
)

generator = torch.Generator(device='cuda').manual_seed(12345)
params = {
    "prompt": ['a cat'],
    "num_inference_steps": 50,
    "guidance_scale": 7,
}
sdxl_img = pipe(**params, generator=generator).images[0]
sdxl_img.save("sdxl_dpm_out.png")
```
@CodeCorrupt can you try this? I think the artifacts are gone for real this time, but let me know if not......

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    add_watermarker=False,
)
pipe = pipe.to('cuda')

# Load the scheduler config from SD1.5 instead of the SDXL defaults
pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="scheduler",
)

seed = 1
generator = torch.Generator(device='cuda').manual_seed(seed)
params = {
    "prompt": ['a cat'],
    "num_inference_steps": 50,
    "guidance_scale": 7,
}
sdxl_img = pipe(**params, generator=generator).images[0]
sdxl_img.save(f"out_{seed}.png")
```
Hey @yiyixuxu, looks like it's not using the right …

```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import DPMSolverMultistepScheduler, DPMSolverSinglestepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    add_watermarker=False,
)
pipe = pipe.to('cuda')

common_config = {'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear'}
schedulers = {
    "DPMPP_2M": (DPMSolverMultistepScheduler, {}),
    "DPMPP_2M_K": (DPMSolverMultistepScheduler, {"use_karras_sigmas": True}),
    "DPMPP_2M_SDE": (DPMSolverMultistepScheduler, {"algorithm_type": "sde-dpmsolver++"}),
    "DPMPP_2M_SDE_K": (DPMSolverMultistepScheduler, {"use_karras_sigmas": True, "algorithm_type": "sde-dpmsolver++"}),
    "DPMPP_SDE": (DPMSolverSinglestepScheduler, {}),
    "DPMPP_SDE_K": (DPMSolverSinglestepScheduler, {"use_karras_sigmas": True}),
}

selected_scheduler = 'DPMPP_2M_SDE'
scheduler_old = schedulers[selected_scheduler][0](**common_config, **schedulers[selected_scheduler][1])
scheduler_new = schedulers[selected_scheduler][0].from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="scheduler",
    **schedulers[selected_scheduler][1],
)

params = {
    "prompt": ['a cat'],
    "num_inference_steps": 50,
    "guidance_scale": 7,
}
for s in [scheduler_new, scheduler_old]:
    seed = 12345
    generator = torch.Generator(device='cuda').manual_seed(seed)
    pipe.scheduler = s
    sdxl_img = pipe(**params, generator=generator).images[0]
    display(sdxl_img)  # for use in a notebook
```

I'm still seeing the same artifacts when using the …
@CodeCorrupt @LuChengTHU can you also take a look into this? I can reproduce this bug.
@CodeCorrupt here is my setting - do you see the same thing in automatic1111 as well?
@CodeCorrupt below, num_inference_steps = 60, 70, 80, 100. The artifacts gradually reduce, and I think they completely disappear at 100 steps.

Since the same issue is also present in k-diffusion/auto1111, and because it goes away when we increase the number of inference steps, I think this is probably not a bug in the implementation. It could just be that this scheduler does not work well with SDXL. I would be curious to understand why, and hopefully @LuChengTHU has some insights to share soon :) but I think there is not much action for us to take here.

Also cc @patrickvonplaten here; let me know if we should investigate this further.
Have we confirmed that swapping the default VAE to use this one doesn't help at all?
confirmed |
The DPM++ scheduler is known to not work super well for SDXL. Euler is usually the better choice.
Let us know if there is anything you want us to investigate more :)
DPM++ cleans up on FID when compared to other methods like Euler at the same number of steps. It would be nice if we could get it working, because we would save compute when doing inference.
@AmericanPresidentJimmyCarter Noob question - what is "FID"?
Never mind - found it! https://en.wikipedia.org/wiki/Fr%C3%A9chet_inception_distance
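For reference: FID fits a Gaussian to the InceptionV3 features of real vs. generated images and measures the Fréchet distance between the two Gaussians (lower is better). A toy numpy/scipy sketch of just the underlying formula, on small synthetic distributions rather than real Inception statistics:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Squared Frechet distance between two multivariate Gaussians:
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

mu = np.zeros(3)
sigma = np.eye(3)
print(frechet_distance(mu, sigma, mu, sigma))        # identical -> 0.0
print(frechet_distance(mu, sigma, mu + 1.0, sigma))  # mean shifted by 1 per dim -> 3.0
```

In practice `mu`/`sigma` come from pooled InceptionV3 activations over thousands of images, so tools like `pytorch-fid` are used rather than this hand-rolled version.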
Yeah, when training new text-to-image models we always use this sampler because it produces the best (lowest) FID values. It works very well for SD1.x/2, so it would be good to figure out what is causing the issue with SDXL.
Hey @yiyixuxu It would be great if we could find the root cause and get the DPM++ schedulers to "work super great" on SDXL 😄. I'm doing some research myself, but I imagine it would be far more efficient if you and the team could look into it, since you have the domain knowledge here.
Hi @patrickvonplaten, are there any more examples / findings behind this conclusion? I will try to figure out the reason :)
I'm not fully sure why it happens. The effect seems to go away a bit when using higher inference step counts (e.g. 50), but with just 25 steps there always seem to be some artifacts. It would be incredible if you could dive a bit deeper here ❤️
Hi guys, I've found the reason and created a PR to fix this issue. I think DPM++ can now work really well for SDXL :) Please check and try this: #5541
### Describe the bug

All DPM++ schedulers are showing visual artifacts out of the base model when `denoising_end=1` (skipping the refiner). This effect is most notable with `DPM++ 2M SDE` configured using the flag from the docs. These same artifacts are not seen when using SD1.5 with the same scheduler configuration.
### Reproduction

Intended to run in a notebook.

### Logs

No response

### System Info

`diffusers` version: 0.21.4

### Who can help?

@yiyixuxu @patrickvonplaten