Add mAP Metrics and validation #89
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master      #89   +/- ##
=====================================
  Coverage        ?   81.80%
=====================================
  Files           ?        8
  Lines           ?      775
  Branches        ?        0
=====================================
  Hits            ?      634
  Misses          ?      141
  Partials        ?        0
Force-pushed from 6e325a8 to a78e4ac
Hi @zhiqwang, do you happen to have a script to evaluate yolov5 on the COCO 2017 dataset? I also want to evaluate torchvision's Faster R-CNN on the same dataset. I'm working on accuracy validation of TVM fp16 quantization.
Hi @masahi, I have the evaluation script for COCO 2017. I need to clean it up and expect to upload it tomorrow.
@zhiqwang I was able to get reasonable-looking numbers by modifying the input path to the data loader. Another question: I tried to use the same eval script to test torchvision's Faster R-CNN, but I hit an error at this line https://github.com/zhiqwang/yolov5-rt-stack/blob/f4f2e6aebec97d72ec40051aac31f217fcf01ea0/yolort/data/coco_eval.py#L201 because the label there is 86.
Hi @masahi, check #148 for the COCO metrics evaluation CLI. I've also added a data prepare function there. Let's move the discussion there.
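As background for the evaluation discussion, here is a minimal sketch (not the actual script from #148; the helper names are hypothetical) of the per-detection record format that pycocotools' `COCO.loadRes()` / `COCOeval` consume: boxes in absolute-pixel `[x, y, width, height]`, and `category_id` using the original COCO ids.

```python
def xyxy_to_coco_xywh(box):
    """Convert an [x1, y1, x2, y2] box to COCO's [x, y, w, h]."""
    x1, y1, x2, y2 = box
    return [x1, y1, x2 - x1, y2 - y1]

def to_coco_results(image_id, boxes, scores, labels):
    """Build the list-of-dicts result format expected by COCO.loadRes()."""
    return [
        {
            "image_id": image_id,
            "category_id": int(label),
            "bbox": xyxy_to_coco_xywh(box),
            "score": float(score),
        }
        for box, score, label in zip(boxes, scores, labels)
    ]

# Example: one detection on image 42 with raw COCO category id 86.
results = to_coco_results(42, [[10.0, 20.0, 110.0, 220.0]], [0.9], [86])
print(results[0]["bbox"])  # [10.0, 20.0, 100.0, 200.0]
```

A list like `results`, dumped to JSON across all images, is what would then be fed to `COCO.loadRes()` before running `COCOeval` to get the mAP numbers.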
This is because the original number of COCO category ids is 90 (the ids run from 1 to 90 with gaps), while detectors are typically trained on 80 contiguous classes, so a raw id like 86 falls outside a 0-79 lookup.
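To make the id mismatch concrete, here is a hedged sketch of the 80-class-to-91-id mapping, assuming the standard set of unused COCO ids; the actual lookup table in yolort/yolov5 may be spelled differently.

```python
# COCO category ids run from 1 to 90, but ten ids are unused in the
# 2017 detection annotations, leaving 80 actual classes. This gap set
# is an assumption here, not yolort's own table.
UNUSED_COCO_IDS = {12, 26, 29, 30, 45, 66, 68, 69, 71, 83}

# Index i (contiguous 0-79 class) -> original COCO category id (1-90).
coco80_to_coco91 = [i for i in range(1, 91) if i not in UNUSED_COCO_IDS]

# torchvision's Faster R-CNN emits raw COCO ids, so a label like 86 is
# already a category id; pushing it through a 0-79 lookup (as a
# yolov5-style eval script expects) is what triggers the error above.
print(len(coco80_to_coco91))          # 80
print(86 in coco80_to_coco91)         # True
```

Converting in the right direction (either mapping yolov5's 0-79 outputs up to 1-90, or leaving torchvision's ids untouched) lets both models share one COCO evaluation path.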
Fixes #59