inpaint SDXL doesn't work #210

Closed
kalle07 opened this issue Dec 31, 2023 · 3 comments

kalle07 commented Dec 31, 2023

I have successfully created TensorRT engines for normal SDXL checkpoints, and they work fine and fast!

But the new inpaint model doesn't work.
The inpaint checkpoint was created like this:
AUTOMATIC1111/stable-diffusion-webui#14390 (comment)
It works fine in normal use, but not with TensorRT.

....

Loading weights [1a62729227] from D:\stable-A1111-DEV\stable-diffusion-webui\models\Stable-diffusion\inpaint\SDXL_Inpaint_juggerV7.safetensors
Creating model from config: D:\stable-A1111-DEV\stable-diffusion-webui\configs\sd_xl_inpaint.yaml
Applying attention optimization: sdp-no-mem... done.
Model loaded in 10.9s (create model: 0.4s, apply weights to model: 9.7s, apply half(): 0.2s, move model to device: 0.2s, calculate empty prompt: 0.2s).
Exporting inpaint_SDXL_Inpaint_juggerV7 to TensorRT
{'sample': [(1, 4, 88, 88), (2, 4, 128, 128), (2, 4, 176, 160)], 'timesteps': [(1,), (2,), (2,)], 'encoder_hidden_states': [(1, 77, 2048), (2, 77, 2048), (2, 154, 2048)], 'y': [(1, 2816), (2, 2816), (2, 2816)]}
No ONNX file found. Exporting ONNX...
Disabling attention optimization
ERROR:root:Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 128, 128] to have 9 channels, but got 4 channels instead
Traceback (most recent call last):
File "D:\stable-A1111-DEV\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\exporter.py", line 84, in export_onnx
torch.onnx.export(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\onnx\utils.py", line 516, in export _export(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\onnx\utils.py", line 1596, in _export
graph, params_dict, torch_out = _model_to_graph(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\onnx\utils.py", line 1135, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\onnx\utils.py", line 1011, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\onnx\utils.py", line 915, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\jit_trace.py", line 1285, in _get_trace_graph
outs = ONNXTracedModule(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\jit_trace.py", line 133, in forward
graph, out = torch._C._create_graph_by_tracing(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\jit_trace.py", line 124, in wrapper
outs.append(self.inner(*trace_inputs))
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1508, in _slow_forward
result = self.forward(*input, **kwargs)
File "D:\stable-A1111-DEV\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "D:\stable-A1111-DEV\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward
h = module(h, emb, context)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1508, in _slow_forward
result = self.forward(*input, **kwargs)
File "D:\stable-A1111-DEV\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 102, in forward
x = layer(x)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\module.py", line 1508, in _slow_forward
result = self.forward(*input, **kwargs)
File "D:\stable-A1111-DEV\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 509, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 128, 128] to have 9 channels, but got 4 channels instead

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\anyio\to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread(
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\stable-A1111-DEV\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py", line 135, in export_unet_to_trt
export_onnx(
File "D:\stable-A1111-DEV\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\exporter.py", line 129, in export_onnx
exit()
File "C:\ProgramData\anaconda3\envs\stable-diffusion-webui\lib_sitebuiltins.py", line 26, in call
raise SystemExit(code)
SystemExit: None
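
For context: the failing layer is the UNet's input convolution. An SDXL inpaint UNet expects a 9-channel sample (4 noisy-latent channels, 1 mask channel, 4 masked-image-latent channels, per the usual SD inpainting convention), while the exporter's dummy sample in the profile above has only 4 channels. A minimal sketch, not the extension's code, that reproduces the same error and shows the expected layout:

```python
# Minimal repro sketch (not the extension's code), assuming standard SD inpaint
# conventions: the inpaint UNet's first conv takes 9 channels =
# 4 (noisy latents) + 1 (mask) + 4 (masked-image latents).
import torch
import torch.nn as nn

conv_in = nn.Conv2d(9, 320, kernel_size=3, padding=1)  # weight shape [320, 9, 3, 3]

# The exporter's dummy "sample" from the profile above: (2, 4, 128, 128).
base_sample = torch.randn(2, 4, 128, 128)
try:
    conv_in(base_sample)
except RuntimeError as err:
    # "Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 128, 128]
    #  to have 9 channels, but got 4 channels instead"
    print(err)

# What the inpaint UNet actually expects: latents, mask and masked-image
# latents concatenated along the channel dimension.
mask = torch.ones(2, 1, 128, 128)
masked_latents = torch.randn(2, 4, 128, 128)
sample = torch.cat([base_sample, mask, masked_latents], dim=1)  # (2, 9, 128, 128)
print(conv_in(sample).shape)  # torch.Size([2, 320, 128, 128])
```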

@jeanhadrien

yup

@contentis
Collaborator

Can you test the dev branch? It should be able to handle XL inpaint. Otherwise, these changes will be merged into master soon anyway.
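
For reference, "handle XL inpaint" presumably means sizing the export inputs from the loaded model rather than hard-coding 4 channels. A hypothetical sketch of that idea, not the extension's actual dev-branch code; make_dummy_sample is an illustrative name, and the in_channels attribute is an assumption about the UNet object:

```python
# Hypothetical sketch only: build the ONNX tracing "sample" from the UNet's own
# channel count so base (4-channel) and inpaint (9-channel) SDXL checkpoints
# both get a valid dummy input.
import torch

def make_dummy_sample(unet, batch: int, height: int, width: int) -> torch.Tensor:
    # Assumption: the UNet exposes its input channel count as `in_channels`
    # (9 for an SDXL inpaint model, 4 for a base model).
    channels = getattr(unet, "in_channels", 4)
    return torch.randn(batch, channels, height // 8, width // 8)
```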

kalle07 commented Jan 2, 2024

I will wait ;)
