
Add coco metrics evaluation CLI #148

Merged
merged 6 commits into from
Aug 19, 2021
Conversation

zhiqwang
Owner

@zhiqwang zhiqwang commented Aug 19, 2021

For yolov5, run the following script:

CUDA_VISIBLE_DEVICES=1 python tools/eval_metric.py \
    --num_gpus 1 \
    --image_path ./data-bin/mscoco/coco2017/val2017/ \
    --annotation_path ./data-bin/mscoco/coco2017/annotations/instances_val2017.json

The expected results of yolort.models.yolov5s:

The evaluated mAP 0.5:0.95 is 32.311, and mAP 0.5 is 51.168.

Note: Because yolort uses different pre-processing to make yolov5 scriptable with torch.jit, the mAP does not currently match that of ultralytics/yolov5; check out #92 for more details.
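As a side note on how to read the numbers above: COCO-style mAP 0.5:0.95 is AP averaged over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05, while mAP 0.5 is the single-threshold value. A minimal sketch of that averaging (the names `IOU_THRESHOLDS` and `coco_map` are illustrative, not part of the eval_metric.py CLI):

```python
# COCO mAP@[0.5:0.95] averages AP over ten IoU thresholds: 0.50, 0.55, ..., 0.95.
IOU_THRESHOLDS = [round(0.50 + 0.05 * i, 2) for i in range(10)]

def coco_map(ap_per_threshold):
    """Average per-threshold AP values into the single mAP@[0.5:0.95] figure."""
    assert len(ap_per_threshold) == len(IOU_THRESHOLDS)
    return sum(ap_per_threshold) / len(ap_per_threshold)
```

This is why mAP 0.5:0.95 is always at or below mAP 0.5: the stricter thresholds (0.75 and up) pull the average down.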

For torchvision, run the following script:

CUDA_VISIBLE_DEVICES=1 python tools/eval_metric.py \
    --num_gpus 1 \
    --num_classes 91 \
    --batch_size 4 \
    --arch fasterrcnn_resnet50_fpn \
    --eval_type torchvision \
    --image_path ./data-bin/mscoco/coco2017/val2017/ \
    --annotation_path ./data-bin/mscoco/coco2017/annotations/instances_val2017.json

The expected results of torchvision.models.detection.fasterrcnn_resnet50_fpn:

The evaluated mAP 0.5:0.95 is 36.873, and mAP 0.5 is 58.495.
Metrics originally calculated by `torchvision`:
The evaluation script is copy-pasted from https://github.com/pytorch/vision/tree/master/references/detection.
CUDA_VISIBLE_DEVICES=1 python train.py \
    --dataset coco --data-path ./data-bin/mscoco/coco2017/ \
    --model fasterrcnn_resnet50_fpn \
    --batch-size 16 \
    --pretrained \
    --test-only
Averaged stats: model_time: 0.0727 (0.0692)  evaluator_time: 0.0046 (0.0084)
Accumulating evaluation results...
DONE (t=5.12s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.369
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.585
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.396
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.212
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.403
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.482
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.307
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.485
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.509
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.317
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.544
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.649
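The twelve numbers printed by pycocotools' `COCOeval.summarize()` land in a fixed order in the `COCOeval.stats` array, which is what eval scripts usually read them back from. A small sketch pairing that ordering with names (the `COCO_STAT_NAMES` labels are my shorthand; the values are copied from the log above):

```python
# Fixed ordering of pycocotools COCOeval.stats after summarize():
COCO_STAT_NAMES = [
    "AP@[0.50:0.95]", "AP@0.50", "AP@0.75",
    "AP_small", "AP_medium", "AP_large",
    "AR@1", "AR@10", "AR@100",
    "AR_small", "AR_medium", "AR_large",
]

def name_coco_stats(stats):
    """Pair the 12 COCOeval.stats values with readable metric names."""
    assert len(stats) == 12
    return dict(zip(COCO_STAT_NAMES, stats))

# Values taken from the torchvision log above:
metrics = name_coco_stats([0.369, 0.585, 0.396, 0.212, 0.403, 0.482,
                           0.307, 0.485, 0.509, 0.317, 0.544, 0.649])
```

Reading the stats array this way is how the 36.9 / 58.5 headline numbers above correspond to the first two lines of the pycocotools printout.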

Originally posted in #89 (comment)

@codecov

codecov bot commented Aug 19, 2021

Codecov Report

Merging #148 (99466cc) into master (f34194c) will not change coverage.
The diff coverage is n/a.

❗ Current head 99466cc differs from pull request most recent head c09bfa9. Consider uploading reports for the commit c09bfa9 to get more accurate results.
Impacted file tree graph

@@           Coverage Diff           @@
##           master     #148   +/-   ##
=======================================
  Coverage   84.49%   84.49%           
=======================================
  Files          12       12           
  Lines         819      819           
=======================================
  Hits          692      692           
  Misses        127      127           
Flag Coverage Δ
unittests 84.49% <ø> (ø)

Flags with carried forward coverage won't be shown.


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@zhiqwang zhiqwang added the enhancement New feature or request label Aug 19, 2021
@zhiqwang zhiqwang merged commit e1efb7c into master Aug 19, 2021
@zhiqwang zhiqwang deleted the verify-coco-metrics branch August 19, 2021 10:34
@zhiqwang zhiqwang mentioned this pull request Aug 19, 2021
@masahi

masahi commented Aug 19, 2021

@zhiqwang Thank you very much! Especially for spending extra effort to enable evaluating torchvision models!
