
Solution of pycocotools mAP Alignment #7732

Closed
comlhj1114 wants to merge 13 commits

Conversation

comlhj1114

@comlhj1114 comlhj1114 commented May 8, 2022

This PR is a solution to pycocotools mAP Alignment #2258.

To match the pycocotools mAP calculation, the following changes are applied:

Solutions

  • set max_det=100
  • change AP integration from np.trapz to rectangular (mean) interpolation, as sketched after this list
  • add false negatives to the mAP calculation
  • choose bboxes by IoU and classes by confidence score from detections
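
For illustration, a minimal sketch of the integration change in compute_ap (utils/metrics.py), assuming the standard sentinel values and 101-point precision envelope; pycocotools averages the interpolated precision rather than integrating it with np.trapz:

```python
import numpy as np

def compute_ap_sketch(recall, precision, method="interp101"):
    # Append sentinel values and build the monotonically decreasing precision envelope
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([1.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))

    x = np.linspace(0, 1, 101)    # 101 recall thresholds, as in pycocotools
    p = np.interp(x, mrec, mpre)  # precision sampled at each threshold
    if method == "interp101":
        return p.mean()           # rectangular (mean) interpolation, matches pycocotools
    return np.trapz(p, x)         # previous trapezoidal integration
```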

Limitations

There is an inherent dataset difference between the YOLO and COCO label styles.
The COCO style provides an iscrowd flag, while the YOLO style does not.
In the cocoapi, this flag is used to sample detections after filtering by maxDet, which results in a different number of detections being used to calculate mAP between yolov5 and the cocoapi.

To avoid this problem, annotations with iscrowd=1 are ignored. This can be done in two ways (a sketch of the first option follows this list):

  • remove annotations with iscrowd=1 from the instances_val2017.json file
  • pass the iscrowd=False flag to getAnnIds() in cocoeval.py of the cocoapi
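
A minimal sketch of the first option, assuming the standard COCO JSON layout (the output filename is illustrative):

```python
import json

# Drop iscrowd=1 annotations from the COCO val2017 annotation file
with open("instances_val2017.json") as f:
    data = json.load(f)
data["annotations"] = [a for a in data["annotations"] if not a.get("iscrowd", 0)]
with open("instances_val2017_nocrowd.json", "w") as f:  # illustrative output name
    json.dump(data, f)
```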

Results

With ignored (or removed) iscrowd annotations

| Model | mAP@.5 (yolov5/cocoapi) | mAP@.5:.95 (yolov5/cocoapi) |
| --- | --- | --- |
| yolov5s_img320_iou.45_conf.001 | 0.465/0.466 | 0.295/0.295 |
| yolov5s_img320_iou.65_conf.001 | 0.457/0.458 | 0.298/0.298 |
| yolov5s_img640_iou.45_conf.001 | 0.558/0.559 | 0.362/0.362 |
| yolov5s_img640_iou.65_conf.001 | 0.549/0.550 | 0.364/0.364 |
| yolov5x_img320_iou.45_conf.001 | 0.600/0.600 | 0.423/0.423 |
| yolov5x_img320_iou.65_conf.001 | 0.591/0.591 | 0.423/0.423 |
| yolov5x_img640_iou.45_conf.001 | 0.670/0.670 | 0.489/0.489 |
| yolov5x_img640_iou.65_conf.001 | 0.669/0.670 | 0.495/0.495 |
| yolov5s_img320_iou.65_conf.250 | 0.394/0.394 | 0.266/0.266 |
| yolov5s_img320_iou.65_conf.500 | 0.313/0.314 | 0.225/0.224 |
| yolov5s_img320_iou.65_conf.800 | 0.153/0.153 | 0.125/0.125 |

With original annotations

| Model | mAP@.5 (yolov5/cocoapi) | mAP@.5:.95 (yolov5/cocoapi) |
| --- | --- | --- |
| yolov5s_img320_iou.45_conf.001 | 0.465/0.470 | 0.296/0.297 |
| yolov5s_img320_iou.65_conf.001 | 0.457/0.462 | 0.298/0.300 |
| yolov5s_img640_iou.45_conf.001 | 0.557/0.565 | 0.361/0.365 |
| yolov5s_img640_iou.65_conf.001 | 0.549/0.558 | 0.364/0.369 |
| yolov5x_img320_iou.45_conf.001 | 0.604/0.608 | 0.426/0.427 |
| yolov5x_img320_iou.65_conf.001 | 0.598/0.603 | 0.428/0.430 |
| yolov5x_img640_iou.45_conf.001 | 0.670/0.677 | 0.489/0.494 |
| yolov5x_img640_iou.65_conf.001 | 0.669/0.677 | 0.495/0.500 |
| yolov5s_img320_iou.65_conf.250 | 0.393/0.395 | 0.265/0.266 |
| yolov5s_img320_iou.65_conf.500 | 0.314/0.314 | 0.224/0.225 |
| yolov5s_img320_iou.65_conf.800 | 0.152/0.152 | 0.125/0.125 |

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhancements to YOLOv5's evaluation metrics and non-maximum suppression process.

📊 Key Changes

  • Simplified the appending of sentinel values in the computation of Average Precision (AP) in utils/metrics.py.
  • Changed AP calculation to mean interpolation instead of trapezoidal integration, improving numerical stability.
  • Modified non-maximum suppression in val.py to include a maximum number of detections limit (max_det) set to 100 (see the sketch after this list).
  • Adjusted statistics gathering in the validation script to handle cases with no predictions more elegantly, ensuring correct label dimensions.
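
As a reference for the max_det change, a hedged sketch of the NMS call in val.py (keyword names follow YOLOv5's utils.general.non_max_suppression; the exact call-site arguments may differ):

```python
from utils.general import non_max_suppression

# Cap detections per image at 100 to mirror pycocotools' maxDets=100 setting
out = non_max_suppression(
    preds,             # raw model predictions
    conf_thres=0.001,  # validation confidence threshold
    iou_thres=0.65,    # NMS IoU threshold
    multi_label=True,  # allow multiple labels per box, as in YOLOv5 validation
    max_det=100,       # previously 300; aligned with COCO's maxDets=100
)
```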

🎯 Purpose & Impact

  • 🚀 Enhance performance: These changes could lead to faster and more stable performance of the model during evaluation.
  • 📈 Improve metric accuracy: By refining the AP computation method, users can expect more accurate evaluation of object detection performance.
  • 🧮 Better handling of edge cases: The updates aim to provide cleaner and more reliable statistical outputs for cases with no predictions, aiding in better analysis and debuggability.
  • 🛑 Prevent overflow: The addition of a maximum detections cap helps prevent excessive memory usage or potential crashes due to too many predictions.

@comlhj1114 comlhj1114 mentioned this pull request May 8, 2022
@comlhj1114 comlhj1114 changed the title from "Solution of pycocotools mAP Alignment #2258" to "Solution of pycocotools mAP Alignment" May 8, 2022
@comlhj1114
Author

@glenn-jocher Hi there, are you checking this PR?

@glenn-jocher
Member

@comlhj1114 thanks for the PR! This looks very thorough, I will take a look.

@comlhj1114
Author

@glenn-jocher Thanks. I spent the whole weekend on this work :)
If there are any questions or suggestions, feel free to ask.

@glenn-jocher
Member

Updated with changes from #6787

@glenn-jocher
Member

glenn-jocher commented May 22, 2022

@comlhj1114 I tested this PR after merging changes from earlier PR #6787. These are my results (I did not modify pycocotools or the annotations JSON):

Command

python val.py --weights yolov5s.pt --data coco.yaml --img 640 --iou 0.65 --half --device 0

Master

val: data=/content/yolov5/data/coco.yaml, weights=['yolov5s.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=0, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True, dnn=False
YOLOv5 🚀 v6.1-211-gcee5959 Python-3.7.13 torch-1.11.0+cu113 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
Downloading https://ultralytics.com/assets/Arial.ttf to /root/.config/Ultralytics/Arial.ttf...
100% 755k/755k [00:00<00:00, 52.0MB/s]
val: Scanning '/content/datasets/coco/val2017' images and labels...4952 found, 48 missing, 0 empty, 0 corrupt: 100% 5000/5000 [00:00<00:00, 10837.82it/s]
val: New cache created: /content/datasets/coco/val2017.cache
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 157/157 [00:55<00:00,  2.84it/s]
                 all       5000      36335      0.666      0.515      0.561      0.371
Speed: 0.1ms pre-process, 0.9ms inference, 1.5ms NMS per image at shape (32, 3, 640, 640)

Evaluating pycocotools mAP... saving runs/val/exp2/yolov5s_predictions.json...
loading annotations into memory...
Done (t=0.84s)
creating index...
index created!
Loading and preparing results...
DONE (t=5.95s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=79.89s).
Accumulating evaluation results...
DONE (t=14.68s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.375
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.568
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.406
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.211
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.423
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.490
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.311
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.519
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.574
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.385
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.633
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.731
Results saved to runs/val/exp2

PR

val: data=/content/yolov5/data/coco.yaml, weights=['yolov5s.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True, dnn=False
YOLOv5 🚀 v6.1-220-g452b370 Python-3.7.13 torch-1.11.0+cu113 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)

Downloading https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:00<00:00, 249MB/s]

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
Downloading https://ultralytics.com/assets/Arial.ttf to /root/.config/Ultralytics/Arial.ttf...
100% 755k/755k [00:00<00:00, 44.2MB/s]
val: Scanning '/content/datasets/coco/val2017' images and labels...4952 found, 48 missing, 0 empty, 0 corrupt: 100% 5000/5000 [00:00<00:00, 10890.51it/s]
val: New cache created: /content/datasets/coco/val2017.cache
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100% 157/157 [00:49<00:00,  3.14it/s]
                 all       5000      36335      0.666      0.515      0.555      0.368
Speed: 0.1ms pre-process, 0.9ms inference, 1.7ms NMS per image at shape (32, 3, 640, 640)

Evaluating pycocotools mAP... saving runs/val/exp/yolov5s_predictions.json...
loading annotations into memory...
Done (t=0.39s)
creating index...
index created!
Loading and preparing results...
DONE (t=2.78s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=47.04s).
Accumulating evaluation results...
DONE (t=8.83s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.373
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.564
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.405
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.208
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.422
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.489
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.307
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.504
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.543
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.335
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.602
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.702

@comlhj1114
Author

@glenn-jocher Thank you for your work!
The difference between #6787 and this PR is how the detections are sorted.
In #6787, both bbox and class are chosen according to IoU scores.
In this PR, bboxes are chosen according to IoU scores and classes are chosen according to confidence score (see this commit), which is what pycocotools does.
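
For context, a minimal sketch of the pycocotools-style greedy matching described here (detections visited in descending confidence order, each taking the unmatched ground-truth box with the highest IoU); the function name and threshold are illustrative, not the PR's exact code:

```python
import numpy as np

def greedy_match(iou, conf, iou_thr=0.5):
    """iou: (num_det, num_gt) IoU matrix; conf: (num_det,) confidences."""
    order = conf.argsort()[::-1]              # highest confidence first
    gt_taken = np.zeros(iou.shape[1], bool)   # which GT boxes are already matched
    tp = np.zeros(iou.shape[0], bool)
    for d in order:
        cand = np.where(~gt_taken & (iou[d] >= iou_thr))[0]
        if cand.size:
            g = cand[iou[d, cand].argmax()]   # best IoU among free GT boxes
            gt_taken[g] = True
            tp[d] = True                      # this detection becomes the TP
    return tp
```

Because higher-confidence detections are matched first, a GT box that overlaps several detections is assigned to the highest-confidence one, as described above.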

@github-actions
Contributor

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐.

@github-actions github-actions bot added the Stale label Jun 27, 2022
@github-actions github-actions bot closed this Jul 2, 2022
@comlhj1114
Author

comlhj1114 commented Jul 4, 2022

@glenn-jocher Hi glenn, why is this PR stale and closed? Any point to be improved?

@glenn-jocher
Member

@comlhj1114 sorry, I didn't realize the stale workflow was closing PRs. I've updated the period to 90 days for PRs in #8465.

About this PR, my main concern is that the updates are causing the pycocotools mAP to drop. Is there a way to keep the pycocotools mAP the same?

@glenn-jocher glenn-jocher reopened this Jul 4, 2022
@glenn-jocher glenn-jocher removed the Stale label Jul 4, 2022
@comlhj1114
Author

comlhj1114 commented Jul 6, 2022

@glenn-jocher I don't understand what "the updates are causing the pycocotools mAP to drop" means.
If you mean the results in the second table (with original annotations), AFAIK the only solution is to add iscrowd annotations to the YOLO dataset, but that is beyond the scope of this PR.

@github-actions
Contributor

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐.

@github-actions github-actions bot added the Stale label Mar 22, 2023
@github-actions github-actions bot removed the Stale label Apr 10, 2023
@github-actions
Contributor

github-actions bot commented Oct 3, 2023

👋 Hello there! We wanted to let you know that we've decided to close this pull request due to inactivity. We appreciate the effort you put into contributing to our project, but unfortunately, not all contributions are suitable or aligned with our product roadmap.

We hope you understand our decision, and please don't let it discourage you from contributing to open source projects in the future. We value all of our community members and their contributions, and we encourage you to keep exploring new projects and ways to get involved.


Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Oct 3, 2023
@github-actions github-actions bot closed this Nov 3, 2023
@glenn-jocher
Member

@comlhj1114 understood. I appreciate your clarification. I will look into this further and get back to you. Thank you for your patience.

@Yonglin5170

> @glenn-jocher I don't understand what "the updates are causing the pycocotools mAP to drop" means. If you mean the results in the second table (with original annotations), AFAIK the only solution is to add iscrowd annotations to the YOLO dataset, but that is beyond the scope of this PR.

Maybe @glenn-jocher is talking about the mAP dropping in his test results. I agree with you: in pycocotools, the TP is really chosen by confidence score if a GT box is matched with more than one detection.

@glenn-jocher
Member

@Yonglin5170 Thank you for your input. It's important to ensure that any changes align with the expected behavior of pycocotools, especially regarding the handling of true positives and the confidence score. I'll review the PR again with this context in mind. Your contributions are valuable to the project, and I apologize for any confusion caused by the workflow automation. Let's continue to work towards a solution that maintains or improves mAP without deviating from established standards.
