Bad inpaint effect with StableDiffusionControlNetInpaintPipeline #9022
I have solved this problem by increasing the mask area. Thanks!
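For anyone hitting the same issue, a minimal sketch of "increasing the mask area" is to dilate the white (to-be-inpainted) region before passing it to the pipeline, so the model has room to blend past the seam. This assumes the mask is a single-channel PIL image with white marking the inpaint region; the function name `dilate_mask` and the kernel size are illustrative, not part of the diffusers API.

```python
from PIL import Image, ImageFilter

def dilate_mask(mask: Image.Image, kernel_size: int = 15) -> Image.Image:
    """Expand the white (inpaint) region with a max filter.

    kernel_size must be odd; larger values grow the mask further.
    """
    return mask.filter(ImageFilter.MaxFilter(kernel_size))

# Example: a single white pixel grows into a 15x15 white square.
mask = Image.new("L", (32, 32), 0)   # all-black mask
mask.putpixel((16, 16), 255)         # one white pixel to inpaint
dilated = dilate_mask(mask, 15)
```

The dilated mask would then be passed as `mask_image=` in place of the original.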
Hi, I used StableDiffusionControlNetInpaintPipeline to inpaint the masked region, but it doesn't seem to do a good job. Can you help me see where the problem is?
```python
import torch
from diffusers import ControlNetModel, DDIMScheduler, StableDiffusionControlNetInpaintPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-" + config['controlnet_type'], torch_dtype=torch.float16)
pipe_inpaint = StableDiffusionControlNetInpaintPipeline.from_pretrained(config['sd_inpaint_path'], controlnet=controlnet, torch_dtype=torch.float16)
pipe_inpaint.scheduler = DDIMScheduler.from_config(pipe_inpaint.scheduler.config)
pipe_inpaint.to("cuda")
pipe_inpaint.scheduler.set_timesteps(config['num_inference_steps'], device=pipe_inpaint._execution_device)

outinpaint = pipe_inpaint(
    config['prompt'],
    num_inference_steps=config['num_inference_steps'],
    image=warpped_img,
    generator=generator,
    mask_image=blurred_mask,
    control_image=control_image,
    controlnet_conditioning_scale=1.0,
)
```
Attached images: the inpaint result, `blurred_mask`, `control_image`, and `warpped_img`.
Logically, the generated area should be entirely road, without any strange pillars.