Bug fix mAP0.5-0.95 #6787

Merged · 10 commits merged into ultralytics:master on May 20, 2022

Conversation

@lebedevdes (Contributor) commented Feb 26, 2022

Two changes are provided:

  1. Added a limit on the maximum number of detections per image, matching pycocotools
  2. Reworked the process_batch function (a rough sketch follows below)

Change #2 solves issue #4251.
I also independently ran into the problem described in issue #4251: the values computed for the same thresholds do not match when the limits of the torch.linspace call are changed.
These changes solve that problem.

Validating the yolov5x.pt model currently gives the following results:
from yolov5 validation
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|██████████| 157/157 [01:07<00:00,  2.33it/s]
                 all       5000      36335      0.743      0.626      0.682      0.506
from pycocotools
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.505
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.685

These results are very close, although they do not completely pass the competition in issue #2258.
I think the remaining gap comes from false-positive boxes that match the ignore criteria, but this is not relevant for custom datasets and does not require an additional fix.
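
For readers following along, here is a minimal, self-contained sketch of the per-threshold matching approach described in change #2. It assumes the usual val.py conventions (detections as an (N, 6) tensor of x1, y1, x2, y2, conf, cls; labels as an (M, 5) tensor of cls, x1, y1, x2, y2; iouv as the vector of IoU thresholds) and uses torchvision.ops.box_iou as a stand-in for YOLOv5's utils.metrics.box_iou. It approximates the reworked logic rather than reproducing the merged code exactly.

import numpy as np
import torch
from torchvision.ops import box_iou  # stand-in for yolov5's utils.metrics.box_iou

def process_batch(detections, labels, iouv):
    # detections: (N, 6) tensor of x1, y1, x2, y2, conf, cls
    # labels:     (M, 5) tensor of cls, x1, y1, x2, y2
    # iouv:       (T,)   tensor of IoU thresholds, e.g. torch.linspace(0.5, 0.95, 10)
    correct = np.zeros((detections.shape[0], iouv.shape[0]), dtype=bool)
    iou = box_iou(labels[:, 1:], detections[:, :4])      # (M, N) IoU matrix
    correct_class = labels[:, 0:1] == detections[:, 5]   # (M, N) class-match mask
    for i in range(len(iouv)):                           # match separately at each IoU threshold
        x = torch.where((iou >= iouv[i]) & correct_class)  # candidate label/detection pairs
        if x[0].shape[0]:
            # columns: [label index, detection index, IoU]
            matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
            if x[0].shape[0] > 1:
                matches = matches[matches[:, 2].argsort()[::-1]]                    # best IoU first
                matches = matches[np.unique(matches[:, 1], return_index=True)[1]]   # one label per detection
                matches = matches[np.unique(matches[:, 0], return_index=True)[1]]   # one detection per label
            correct[matches[:, 1].astype(int), i] = True
    return torch.tensor(correct, dtype=torch.bool, device=iouv.device)

With iouv = torch.linspace(0.5, 0.95, 10), each column of correct corresponds to one IoU threshold, so mAP@.5 and mAP@.5:.95 are derived from the same matrix.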

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhancements to type conversions and matching criteria in YOLOv5 validation code.

📊 Key Changes

  • Updated numpy type conversions from .astype('int32') and .astype(np.int16) to .astype(int) (see the sketch after this list).
  • Refactored matching logic in process_batch to iterate over different IoU (Intersection over Union) thresholds.
  • Changed tensor indexing to use .astype(int) in val.py, ensuring compatibility across data types.
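
As a minimal illustration of the dtype change (hypothetical index values, not taken from the PR): np.int16 silently wraps once a value exceeds 32767, while plain int maps to NumPy's default integer (int64 on most 64-bit platforms), so indices built from large arrays stay valid.

import numpy as np

idx = np.array([70000, 3], dtype=np.int64)  # hypothetical detection indices
print(idx.astype(np.int16))  # [4464 3]  -> 70000 wraps past the int16 limit
print(idx.astype(int))       # [70000 3] -> values preserved for safe indexing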

🎯 Purpose & Impact

  • The type conversion change ensures more general integer representation for better compatibility across platforms.
  • The improved matching logic allows for more precise performance evaluation at different IoU thresholds, likely leading to more accurate model validation results.
  • These updates may provide a slight performance increase and ensure more consistent behavior in different environments, benefiting both developers working with the code and users relying on the model's accuracy.

@github-actions bot left a comment:

👋 Hello @lebedevdes, thank you for submitting a YOLOv5 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:

  • ✅ Verify your PR is up-to-date with upstream/master. If your PR is behind upstream/master an automatic GitHub Actions merge may be attempted by writing /rebase in a new comment, or by running the following code, replacing 'feature' with the name of your local branch:
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
# git checkout feature  # <--- replace 'feature' with local branch name
git merge upstream/master
git push -u origin -f
  • ✅ Verify all Continuous Integration (CI) checks are passing.
  • ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." -Bruce Lee

@lebedevdes changed the title from Improve mAP0.5-0.95 to Bug fix mAP0.5-0.95 on Mar 23, 2022
@glenn-jocher (Member) commented:

@lebedevdes I've resolved conflicts and updated the PR. I will take a look at this; I don't know why it's been left open so long.

@glenn-jocher added the TODO (High priority items) label on May 19, 2022
@glenn-jocher (Member) commented:

@lebedevdes it seems the pred = pred[:100] line is reducing the pycocotools mAP. If I remove it, pycocotools mAP remains unchanged while YOLOv5 mAP drops slightly as well, which is probably the preferred solution, since it still represents a big improvement over the current code.
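
For context, such a cap would look roughly like the sketch below (hypothetical stand-in data; in val.py, pred is the per-image NMS output with rows of x1, y1, x2, y2, conf, cls). It mirrors pycocotools' maxDets=100 and is the line that was ultimately dropped before merging.

import torch

# Hypothetical illustration of the per-image cap discussed above (removed before merge):
# keep only the 100 highest-confidence detections, mirroring pycocotools' maxDets=100.
pred = torch.rand(300, 6)                         # stand-in for NMS output rows [x1, y1, x2, y2, conf, cls]
pred = pred[pred[:, 4].argsort(descending=True)]  # sort by confidence, highest first
pred = pred[:100]                                 # truncate to the top 100 detections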

@glenn-jocher merged commit 43569d5 into ultralytics:master on May 20, 2022
@glenn-jocher (Member) commented:

@lebedevdes PR is merged. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐

@glenn-jocher removed the TODO (High priority items) label on May 20, 2022
@lebedevdes (Contributor, Author) commented:

@glenn-jocher Glad to help, thanks for your great project)

tdhooghe pushed a commit to tdhooghe/yolov5 that referenced this pull request Jun 10, 2022
* Improve mAP0.5-0.95

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Remove line to retain pycocotools results

* Update val.py

* Update val.py

* Remove to device op

* Higher precision int conversion

* Update val.py

Co-authored-by: Glenn Jocher <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this pull request Sep 8, 2022

@glenn-jocher (Member) commented:

@lebedevdes thanks for your kind words, but the real credit goes to the YOLO community and the Ultralytics team. We greatly appreciate your contributions! If you have any more ideas or feedback, feel free to share them with us.
