
Adding Coco Metric for inference #1056

Open
wants to merge 7 commits into master
Conversation

ai-fast-track
Collaborator

I added some convenient methods to easily calculate the COCO metric at inference time, when ground-truth records are available:

infer_dl = model_type.infer_dl(valid_ds, batch_size=4, shuffle=False)
preds = model_type.predict_from_dl(model, infer_dl, keep_images=True)
inference_metric = COCOMetric(metric_type=COCOMetricType.bbox)

Get the COCO metric from the Predictions object:

logs = inference_metric.metric_from_preds(preds, print_summary=True)

You can also pass the ground truth and the predictions separately:

gt_list = [pred.ground_truth for pred in preds]
preds_list = [pred.pred for pred in preds]

logs = inference_metric.metric(gt_list, preds_list, print_summary=True)

@codecov

codecov bot commented Feb 3, 2022

Codecov Report

Merging #1056 (edaafe0) into master (9c17564) will decrease coverage by 0.07%.
The diff coverage is 52.94%.


@@            Coverage Diff             @@
##           master    #1056      +/-   ##
==========================================
- Coverage   85.96%   85.89%   -0.08%     
==========================================
  Files         283      283              
  Lines        5943     5954      +11     
==========================================
+ Hits         5109     5114       +5     
- Misses        834      840       +6     
Flag        Coverage Δ
unittests   85.89% <52.94%> (-0.08%) ⬇️


Impacted Files                                 Coverage Δ
icevision/models/mmdet/utils.py                56.71% <0.00%> (ø)
icevision/data/dataset.py                      90.90% <50.00%> (-5.76%) ⬇️
icevision/metrics/coco_metric/coco_metric.py   90.47% <63.63%> (-9.53%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@potipot
Contributor

potipot commented Feb 3, 2022

I had such a use case, but can't this be done with:

inference_metric = COCOMetric(print_summary=True)
inference_metric.accumulate(preds)
inference_metric_result = inference_metric.finalize()


@@ -81,9 +81,9 @@ def param_groups(model):
     layers += [model.bbox_head]

     # YOLACT has mask_head and segm_head
-    if getattr(model, "mask_head"):
+    if getattr(model, "mask_head", False):
Contributor

Why did we set this one to False?
Same question for line 86.

Collaborator Author

It's a bit confusing. I had a discussion with Dickson in his PR, and I had the same reaction as you.

Right now there is a bug because the False default is missing.

If False is missing, it will add mask_head even for a retinanet model, for example.

The way to read it: if the model has a mask_head, the statement is True; otherwise it falls back to False.
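To make the behaviour being discussed concrete, here is a minimal sketch in plain Python. FakeRetinaNet is a hypothetical stand-in, not an icevision or mmdet class:

```python
class FakeRetinaNet:
    """Hypothetical model without a mask_head attribute (not icevision code)."""
    bbox_head = "bbox layers"

model = FakeRetinaNet()

# Without a default, getattr raises AttributeError for a missing attribute:
try:
    getattr(model, "mask_head")
    raised = False
except AttributeError:
    raised = True

# With False as the default, the lookup evaluates falsy and the
# mask_head branch is simply skipped for models that lack one:
layers = [model.bbox_head]
if getattr(model, "mask_head", False):
    layers.append(model.mask_head)

print(raised, layers)
```

So the False default keeps param_groups from blowing up (or misbehaving) on models that have no mask_head.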

Contributor
FraPochetti commented Feb 3, 2022

Sorry man.
I still don't get it.

If False is missing it will add mask_head

But that's what we want. No?

Here is how I understand the following statement:

if getattr(model, "mask_head"): layers += [model.mask_head]

if the model object has a "mask_head" attribute then add model.mask_head to the layers list.

What am I missing?

Why is False needed?

Contributor

The False is the default value that getattr returns when the object does not have the attribute; without a default, an AttributeError is raised instead. We should change getattr to hasattr, as that is the function intended for this use case.
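For illustration, a minimal sketch of the two spellings (plain Python, hypothetical class name):

```python
class Model:
    """Hypothetical model object; not an icevision class."""
    pass

m = Model()

# hasattr returns False when the attribute is absent, no default needed:
assert hasattr(m, "mask_head") is False

# Once the attribute exists, both spellings agree:
m.mask_head = object()
assert hasattr(m, "mask_head") is True
assert bool(getattr(m, "mask_head", False)) is True

print("ok")
```

One subtlety: if the attribute exists but is falsy (e.g. None), hasattr is True while the getattr(..., False) check skips the branch, so the swap is only a pure refactor when the attribute, if present, is always a truthy module.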

Contributor

Ah, alright! Got it. Thanks Frederik.

@FraPochetti
Contributor

> I had such a use case, but cannot this be done with:
>
> inference_metric = COCOMetric(print_summary=True)
> inference_metric.accumulate(preds)
> inference_metric_result = inference_metric.finalize()

Yeah, I guess it is the same. Just wrapping everything into a function call.

@ai-fast-track
Collaborator Author

I didn't try that, but I think it's better to have one simple, convenient method: no cognitive overload.

@potipot
Contributor

potipot commented Feb 3, 2022

But looking at it again, it's the same number of lines of code, and the metric.accumulate method is exactly for this purpose.
Take a look at the tests here:

To me this approach is explicit enough, and I have used it before.

@fstroth
Contributor

fstroth commented Feb 3, 2022

I think the proposal from Farid is good; I would just rename metric_from_preds to evaluate_preds, so the following line

logs = inference_metric.metric_from_preds(preds, print_summary=True)

would change to

logs = inference_metric.evaluate_preds(preds, print_summary=True)

I think this is easier to remember and clearer.

Furthermore, I would rename the metric function, as the name is very ambiguous and does not reflect what happens in the function. I would also make it a _ function, since we are basically providing two ways to do the same thing; something like _calculate_result. Maybe we should also rename accumulate and finalize to _accumulate and _finalize, so we have a very clear API with an obvious way to use it.

Any thoughts?
This might require quite some changes to multiple parts of the codebase, so I would offer to do all the changes as they are beyond the scope of this PR.
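As a rough sketch only, the proposed API shape could look like the following. The evaluate_preds / _accumulate / _finalize names come from the comment above; this is not the actual icevision implementation, and the real COCO evaluation is replaced with a placeholder:

```python
class COCOMetricSketch:
    """Hypothetical shape of the proposed metric API (not icevision code)."""

    def __init__(self):
        self._records = []

    def _accumulate(self, preds):
        # Collect predictions; private, not part of the public surface.
        self._records.extend(preds)

    def _finalize(self):
        # Placeholder: the real metric would run pycocotools evaluation here.
        return {"num_preds": len(self._records)}

    def evaluate_preds(self, preds, print_summary=False):
        """One-call convenience wrapper around _accumulate + _finalize."""
        self._accumulate(preds)
        logs = self._finalize()
        if print_summary:
            print(logs)
        return logs

metric = COCOMetricSketch()
logs = metric.evaluate_preds(["pred_a", "pred_b"])
```

Making accumulate/finalize private while exposing a single evaluate_preds entry point would give the "one obvious way" API fstroth describes.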

@FraPochetti
Contributor

I agree with @fstroth on everything he says.

Indeed, a function named metric is extremely confusing.

As for @potipot's point, why don't we add those 3 lines to the getting_started notebook, just below Farid's solution?
I think it would show 2 different but very valid ways of achieving the same thing. A very powerful message.

inference_metric = COCOMetric(print_summary=True)
inference_metric.accumulate(preds)
inference_metric_result = inference_metric.finalize()

@fstroth
Contributor

fstroth commented Jul 15, 2022

There is some user interest in this PR. What do we need to do to get this merged?
