
Add inception-v4 to supported models #49

Merged

kuke merged 2 commits into PaddlePaddle:develop on May 18, 2018

Conversation


@kuke kuke commented May 16, 2018

-----------  Configuration Arguments -----------
a: 0.0
b: 1.0
backend: caffe2
batch_size: 10
expected_decimal: 5
fluid_model: extras/inception_v4.inference.model/
onnx_model: inception_v4.onnx
------------------------------------------------
Inference results for fluid model:
[array([[0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       ...,
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415]], dtype=float32)]


Inference results for ONNX model:
Outputs(_0=array([[0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       ...,
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415],
       [0.0096415, 0.0096415, 0.0096415, ..., 0.0096415, 0.0096415,
        0.0096415]], dtype=float32))


The exported model achieves 5-decimal precision.

Validated on the flowers dataset. Not validated on the TensorRT backend, because its Concat op implementation doesn't support inputs with different shapes.
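For reference, a minimal sketch of the precision check the run above performs, assuming both backends' outputs are numpy arrays; only expected_decimal: 5 and batch_size: 10 come from the configuration dump, and the class count below is assumed:

  import numpy as np

  def check_outputs(fluid_outputs, onnx_outputs, expected_decimal=5):
      # Compare corresponding output tensors from the two backends;
      # raises AssertionError if they differ beyond the given decimal.
      for fluid_out, onnx_out in zip(fluid_outputs, onnx_outputs):
          np.testing.assert_almost_equal(
              np.asarray(fluid_out), np.asarray(onnx_out),
              decimal=expected_decimal)

  # Both backends above print ~0.0096415 everywhere, so a check like
  # this passes at 5 decimals (batch_size 10; 1000 classes assumed).
  probs = np.full((10, 1000), 0.0096415, dtype=np.float32)
  check_outputs([probs], [probs.copy()])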

@kuke kuke requested a review from varunarora May 16, 2018 05:50
Collaborator

@varunarora varunarora left a comment

Nice! Is the Concat problem specific to TensorRT, or was it unsupported in the ONNX 1.0.1 spec?

@kuke
Author

kuke commented May 17, 2018

It should be a problem with TensorRT. Running the unit test of the Concat op with the tensorrt backend gives this error:

  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 106, in run
    outputs = self.engine.run(inputs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/tensorrt_engine.py", line 113, in run
    (i, expected_shape, given_shape))
ValueError: Wrong shape for input 0. Expected (1, 4, 5), got (3, 4, 5).

The error happens here:

https://github.com/PaddlePaddle/paddle-onnx/blob/4d9dd93d05655a4e25fc588569b95ef64e34eeeb/tests/test_concat_op.py#L31-L36
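For context, a minimal repro sketch of the failing case (shapes assumed; assumes onnx and onnx-tensorrt are installed): it builds a valid ONNX Concat over two inputs whose shapes differ along the concat axis, which the TensorRT backend rejects with a shape error like the one above.

  import numpy as np
  import onnx
  from onnx import helper, TensorProto

  # Two inputs that differ only along the concat axis -- valid ONNX,
  # since Concat only requires the non-axis dims to match.
  x0 = helper.make_tensor_value_info('x0', TensorProto.FLOAT, [1, 4, 5])
  x1 = helper.make_tensor_value_info('x1', TensorProto.FLOAT, [3, 4, 5])
  y = helper.make_tensor_value_info('y', TensorProto.FLOAT, [4, 4, 5])
  node = helper.make_node('Concat', inputs=['x0', 'x1'], outputs=['y'], axis=0)
  graph = helper.make_graph([node], 'concat_repro', [x0, x1], [y])
  model = helper.make_model(graph)
  onnx.checker.check_model(model)  # passes: the model itself is well-formed

  import onnx_tensorrt.backend as backend
  rep = backend.prepare(model)
  # Fails with a ValueError like the one above ("Wrong shape for input ..."),
  # apparently because the backend pins all inputs to one expected shape.
  rep.run([np.random.rand(1, 4, 5).astype(np.float32),
           np.random.rand(3, 4, 5).astype(np.float32)])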

@varunarora
Collaborator

Yeah, just report it on their GitHub if you get a chance, and we should be good to go.

@kuke
Author

kuke commented May 18, 2018

@varunarora OK. I am going to report this issue.

@kuke kuke merged commit 581eafa into PaddlePaddle:develop May 18, 2018
Zeref996 added a commit to Zeref996/Paddle2ONNX that referenced this pull request Aug 24, 2021
* add lzy base api

* fix case lzy

* fix test_is_empty

* rm base case in nn

Co-authored-by: Divano <[email protected]>