
accuracy has decreased after quantization #1

Open
Norooa opened this issue Jul 13, 2023 · 5 comments
Labels
question Further information is requested

Comments


Norooa commented Jul 13, 2023

Hello, I ran into the same problem: the model's accuracy decreased after quantization, and I couldn't figure out how to fix it from your GitHub repository. I would really like to know how you solved this problem. Thank you.

Farzinkh added the question label on Jul 13, 2023
Farzinkh (Owner) commented:

Hello, I would be glad to help. Just a few questions:

  1. Which model are you trying to quantize? If it is a custom model, please share its architecture with me if possible.
  2. Which SoC are you using, the ESP32 or the ESP32-S3?
  3. Did you compile and test my project, and did you get different results?

Thank you.


Norooa commented Jul 14, 2023

I'm going to compile and test your project, but I haven't gotten the results yet.
The model I'm trying to quantize is very large, so I list part of it below; the other parts are all convolution layers.
I haven't compiled and built my model yet, but when I run quantization_tools/example.py, I find the accuracy has dropped a lot after quantization.
(screenshot: example.py accuracy results)
Also, when I tried to quantize your model, I found there was no data with which to produce the quantized model. Did you also see the accuracy decrease after quantizing the ONNX model?
Part of my architecture is listed below (it's too big to show in full).
(screenshot: partial model architecture)
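As a side note for readers trying to separate quantization-resolution loss from pipeline bugs: a minimal fake-quantization sketch in plain NumPy (not ESP-DL's actual quantizer, just the standard symmetric per-tensor int8 scheme) shows how much error 8-bit rounding alone introduces into a tensor:

```python
import numpy as np

def quantize_dequantize(w, bits=8):
    """Simulate symmetric per-tensor quantization of a weight tensor."""
    qmax = 2 ** (bits - 1) - 1          # 127 for int8
    scale = np.abs(w).max() / qmax      # map [-max|w|, max|w|] onto the int range
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                    # dequantize back to float

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_q = quantize_dequantize(w)

# The worst-case error of a single weight is half a quantization step.
print("max abs error:", np.abs(w - w_q).max())
print("quant step   :", np.abs(w).max() / 127)
```

If a model already loses significant accuracy under this kind of simulated quantization, the drop comes from 8-bit resolution itself; if it does not, the problem is more likely somewhere in the conversion or preprocessing pipeline.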

Farzinkh (Owner) commented:

Let me know when you have tried my project. The test and calibration datasets are compressed in .7z files; extract them before using the model_builder script. As for the ESP-DL quantization tool, something is obviously wrong with it. Even though I achieved 99% accuracy on my real benchmark on the ESP32, I get the results below for the ESP32-S3.
(screenshot: Screenshot from 2023-07-14 14-39-51)

However, the same model, with such great accuracy (99%), still shows a significant drop in accuracy on the ESP32-S3 real benchmark, and I am still working on it. I think it is a bug in the ESP-DL quantization tool, which unfortunately is not open source. I need to take care of some other experiments first, and I will open an issue on the ESP-DL repository once I am sure.

Regarding your model, it seems you are using RGB images, am I correct? I suggest deploying your model on the ESP32 first and running a real benchmark like mine, not a simulation. Alternatively, if you can share the ONNX model, create a pull request and provide some datasets, and I can test it.


Norooa commented Jul 17, 2023

Thank you for your help. I'm trying to figure out the problem. I will contact you if I make progress.


Farzinkh commented Aug 8, 2023

Hi @Norooa,

I found the problem with my project and finally managed to solve it. The ESP-DL quantization tool is fine; I had some mistaken conversions between INT8 and unsigned INT8, both in the training stage and when running the benchmark.
(screenshot: new ESP-DL quantization tool results)
I have also added results for the ESP32-S3 benchmark. You can try both the MNIST and Hand_gesture_recognition projects on the ESP32 or ESP32-S3 and let me know if you find any problems. Are you still struggling with the accuracy drop after quantization?
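The INT8 vs unsigned INT8 mix-up described above is easy to reproduce. Here is a minimal sketch in plain NumPy (not the project's actual preprocessing code) of what goes wrong when uint8 pixel bytes are reinterpreted as int8 instead of being shifted into the signed range:

```python
import numpy as np

# Pixel values as most image pipelines store them: unsigned 8-bit.
pixels_u8 = np.array([0, 100, 200, 255], dtype=np.uint8)

# Correct: shift the [0, 255] range down by 128 before feeding an
# int8-quantized model that expects inputs in [-128, 127].
pixels_i8 = (pixels_u8.astype(np.int16) - 128).astype(np.int8)

# Bug: reinterpreting the raw bytes as int8 wraps every value >= 128
# to a negative number, silently corrupting half of the input range.
pixels_wrong = pixels_u8.view(np.int8)

print(pixels_i8.tolist())     # [-128, -28, 72, 127]
print(pixels_wrong.tolist())  # [0, 100, -56, -1]
```

Note that a mismatch like this only corrupts values above 127, so it degrades accuracy unevenly across inputs, which is why it can look like a quantizer bug rather than a preprocessing bug.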
