Force output to have chw layout #1973

Closed
devalexqt opened this issue Jun 16, 2022 · 8 comments · Fixed by #1979
Assignees: hwangdeyu
Labels: enhancement, question

Comments

@devalexqt

devalexqt commented Jun 16, 2022

Even if I specify my input as chw, the output still has hwc layout.
How can I fix this?

python -m tf2onnx.convert  --opset 13 --saved-model  model --output  model.onnx --inputs input:0[1,256,256,3] --inputs-as-nchw input:0 --rename-inputs input

[attached screenshots: onnx_input-chw, onnx_output-edit]
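
For reference, this is roughly how the converted model's input and output shapes can be inspected with the onnx Python package to confirm the layouts (a minimal sketch; "model.onnx" is the file produced by the command above):

import onnx

m = onnx.load("model.onnx")
for vi in list(m.graph.input) + list(m.graph.output):
    # dim_value is a concrete size, dim_param a symbolic one
    dims = [d.dim_value or d.dim_param for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)

With --inputs-as-nchw the input is reported as [1, 3, 256, 256], but the output still keeps its channels in the last dimension (hwc).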

@hwangdeyu
Contributor

Hi @devalexqt, could you share your saved_model so we can take a look at whether it's something about the optimizer in tf2onnx?

@hwangdeyu added the 'pending on user response' label Jun 21, 2022
@devalexqt
Author

Model can be downloaded here:
https://drive.google.com/file/d/1tdim-rnz6xK2zbZ-_NO9FW6__wF0DRCU/view?usp=sharing
It's just a simple autoencoder model with a bunch of Conv/Deconv layers, nothing special.

@hwangdeyu added the 'question' label and removed the 'pending on user response' label Jun 22, 2022
@hwangdeyu
Contributor

hwangdeyu commented Jun 22, 2022

Thanks @devalexqt for providing it. The --inputs-as-nchw option is only for specifying inputs as nchw to match the ONNX standard.
tf2onnx doesn't support --outputs-as-nchw so far; we have not seen many models where this applies.
We could do it manually or add a new option. Could you tell us which scenario requires it?
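
For the manual route, a rough sketch of appending an NHWC→NCHW Transpose to the output of an already converted model with the onnx helper API (this is not the tf2onnx-internal mechanism; the single-output assumption and file names are illustrative):

import onnx
from onnx import helper

model = onnx.load("model.onnx")
graph = model.graph

old_output = graph.output[0]                 # assumed single NHWC output
transposed_name = old_output.name + "_nchw"

# Transpose N,H,W,C (0,1,2,3) -> N,C,H,W (0,3,1,2)
transpose = helper.make_node(
    "Transpose",
    inputs=[old_output.name],
    outputs=[transposed_name],
    perm=[0, 3, 1, 2],
)
graph.node.append(transpose)

# Replace the graph output with the transposed tensor (shape left unset here)
new_output = helper.make_tensor_value_info(
    transposed_name, old_output.type.tensor_type.elem_type, None
)
graph.output.remove(old_output)
graph.output.append(new_output)

onnx.save(model, "model_nchw_output.onnx")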

@hwangdeyu self-assigned this Jun 22, 2022
@devalexqt
Author

Thanks.
In my case it's about performance. As you can see, the provided model upscales images by 4x, so the output is about 1024x1024 or even 2048x2048. The output transpose layer can take up to 50% of the total model execution time, and then I need to transpose the result back to chw anyway, so I lose a lot of milliseconds.
In production I run the converted onnx model on CPU, where the chw format is up to 2x faster compared to hwc.
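
To illustrate the extra step being described (a sketch, assuming ONNX Runtime on CPU, a single output, and the renamed input tensor "input"):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 256, 256).astype(np.float32)        # NCHW input

(out_nhwc,) = sess.run(None, {"input": x})                    # e.g. (1, 1024, 1024, 3)
# Extra CPU copy just to get back to the layout the rest of the pipeline needs:
out_nchw = np.ascontiguousarray(out_nhwc.transpose(0, 3, 1, 2))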

Please add an option to force the output to chw.

@hwangdeyu added the 'enhancement' label Jun 23, 2022
@andife
Member

andife commented Jun 24, 2022

I would also be very interested in this functionality, also because of speed issues. My use case would be semantic segmentation.
When using, for example, the dnn module of OpenCV with ONNX, it is quite convenient to select the output channel for a given class when I have the nchw format. With the current hwc format I have to manually reorder the output, which is quite slow.
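
A rough sketch of that OpenCV dnn use case, assuming a segmentation model with an nchw output of shape (1, num_classes, H, W); the model path and preprocessing parameters are placeholders:

import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("segmentation_nchw.onnx")

image = cv2.imread("frame.png")
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255, size=(512, 512), swapRB=True)
net.setInput(blob)

out = net.forward()                     # (1, num_classes, H, W) with nchw output
class_scores = out[0, 1]                # per-class score map is a plain slice
class_map = np.argmax(out[0], axis=0)   # no per-pixel reordering needed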

@hwangdeyu
Contributor


Thanks @andife for sharing your use case; we'll add this feature later.

@hwangdeyu
Contributor

hwangdeyu commented Jul 8, 2022

@devalexqt We have added the --outputs-as-nchw option, which is supported in the latest main branch. Please tell me if it works for your model. Thanks!
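
For reference, the conversion command from above might then look like this (the output tensor name is a placeholder for your model's actual output; please check the exact flag spelling against your installed tf2onnx version):

python -m tf2onnx.convert --opset 13 --saved-model model --output model.onnx --inputs input:0[1,256,256,3] --inputs-as-nchw input:0 --outputs-as-nchw <output_tensor_name> --rename-inputs input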

@devalexqt
Author

Thanks!
Now it's working as expected!
