
After running tools/quantization_tool/examples/example.py, the generated mnist_model_example_optimized.onnx is slower at inference and larger on disk. Why is that? (AIV-608) #123

Closed
1Yanxiaolin1 opened this issue May 12, 2023 · 1 comment

Comments

@1Yanxiaolin1

The printed results are as follows:
accuracy of int8 model is: 0.977000
accuracy of fp32 model is: 0.977000
int8-model test time is 1.8347067832946777
float-model test time is 0.36646580696105957
Size of mnist_model_example_optimized.onnx: 439206 bytes
Size of mnist_model_example.onnx: 439119 bytes
Could someone explain why this happens? Thanks!
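For anyone reproducing the comparison above, the measurements can be collected with a small stdlib-only helper like the sketch below. The `benchmark` and `size_of` names are hypothetical (not from the esp-dl example script), and the inference callable is assumed to be whatever runs the ONNX session in your setup:

```python
import os
import time


def benchmark(run_fn, n_runs=10):
    """Call run_fn n_runs times and return the average seconds per call."""
    start = time.perf_counter()
    for _ in range(n_runs):
        run_fn()
    return (time.perf_counter() - start) / n_runs


def size_of(path):
    """Return the on-disk size of a model file in bytes."""
    return os.path.getsize(path)


# Hypothetical usage, assuming int8_infer / fp32_infer wrap ONNX sessions:
#   print("int8-model test time is", benchmark(int8_infer))
#   print("float-model test time is", benchmark(fp32_infer))
#   print("Size of mnist_model_example_optimized.onnx:",
#         size_of("mnist_model_example_optimized.onnx"), "bytes")
```

Averaging over several runs also smooths out one-off warm-up costs, which a single timed pass (as in the numbers above) does not.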

@sun-xiangyu
Collaborator

Please try the new esp-dl.
