@darrenzhang1007
How is yolov7_qat.onnx obtained during the Run QAT benchmark process?
After you finish QAT training via qat.py, you get qat.onnx. The guidance is mainly telling you: with the current version of TensorRT, users can always get the best performance with PTQ (an ONNX model without QDQ nodes), but QAT cannot. So if we want the QAT model (a model with QDQ nodes) to reach the same performance as PTQ, we have to adjust the QDQ placement so that TensorRT behaves the same way as with PTQ (both export the same graph).
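For reference, here is a minimal sketch of how a QAT-trained model is typically exported to ONNX with QDQ nodes using NVIDIA's pytorch-quantization toolkit; the variable names and output filename are illustrative assumptions, not the exact qat.py code:

```python
# Sketch: export a QAT-trained model to ONNX with QDQ nodes.
# `model` stands for the already QAT-trained network (assumption).
import torch
from pytorch_quantization import nn as quant_nn

# Emit standard ONNX QuantizeLinear/DequantizeLinear (QDQ) nodes
# instead of the toolkit's custom fake-quant ops during export.
quant_nn.TensorQuantizer.use_fb_fake_quant = True

model.eval()
dummy = torch.randn(1, 3, 672, 672, device="cuda")  # evaluation input size, see below
torch.onnx.export(
    model, dummy, "yolov7_qat.onnx",
    opset_version=13,  # opset >= 13 supports per-channel QDQ
    input_names=["images"],
    output_names=["outputs"],
)
```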
The input dimensions of the ONNX models exported in the two tutorials are different. It confuses me!
You can see https://github.com/WongKinYiu/yolov7.git: when running the evaluation, it runs with 672*672 input (that gets the best accuracy; it seems all the YOLO models do this), so we just keep aligned with them.
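To illustrate what running at 672*672 means on the input side, here is a hypothetical letterbox-style preprocessing function (resize with preserved aspect ratio, then pad to a square); it is a sketch, not the exact yolov7 dataloader code:

```python
# Sketch: pad an image to the 672x672 evaluation resolution.
import cv2
import numpy as np

def letterbox(img, size=672, pad_value=114):
    """Resize keeping aspect ratio, then pad to size x size."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img, (nw, nh))
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```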
Really appreciate such great work. I want to report a bug in cmd_sensitive_analysis in qat.py.
When calling the quantize.calibrate_model function, the device parameter is missing, causing the run to fail. See yolo_deepstream/yolov7_qat/scripts/qat.py, line 243 in bd731fd.
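A minimal sketch of the suggested fix, assuming calibrate_model expects the target device as a positional argument (the exact signature in quantize.py may differ):

```python
# Hypothetical illustration of the mismatch; the signature below is
# an assumption, not the exact quantize.py API.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def calibrate_model(model, dataloader, device, num_batch=25):
    """Assumed signature: run a few batches on `device` to collect
    calibration statistics for the quantizers."""
    model.to(device).eval()
    with torch.no_grad():
        for i, (imgs, *_) in enumerate(dataloader):
            if i >= num_batch:
                break
            model(imgs.to(device))

# The failing call in cmd_sensitive_analysis would then need to
# forward the device through, e.g.:
# quantize.calibrate_model(model, train_dataloader, device)
```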