🚀 Feature

Hi, I would like to contribute some basic ordinal classification metrics, such as "accuracy within k".
Motivation
I recently discovered that my problem is an instance of an ordinal classification problem, which means that classes with close labels are also semantically close (for example, star ratings 1 and 2 are not really far apart). I also realized that torchmetrics doesn't implement, for example, "accuracy within k" (or off-by-k accuracy), which is a stricter variant of top-k accuracy. More on that can be found here: https://link.springer.com/chapter/10.1007/978-3-642-01818-3_25
Pitch
I would like to contribute at least Accuracy/Recall/Precision in a "within k" version, but I am not sure whether I should disrupt the original classification metrics by adding another parameter. It seems like the logical solution, but I am not sure how open you would be to changes in the core metrics implementations.
Essentially, the true positive definition would change from `(pred == label).sum()` to `(torch.abs(pred - label) <= within).sum()`.
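For illustration, here is a minimal functional sketch of how such a metric could look. This is not an existing torchmetrics API; the function name and the `within` parameter are just the ones proposed above:

```python
import torch

def accuracy_within_k(preds: torch.Tensor, target: torch.Tensor, within: int = 1) -> torch.Tensor:
    """Fraction of predictions whose class index is at most `within` away from the target.

    `preds` and `target` are 1D tensors of integer class indices;
    `within=0` reduces to plain multiclass accuracy.
    """
    correct = (torch.abs(preds - target) <= within).sum()
    return correct / target.numel()

preds = torch.tensor([0, 2, 4, 1])
target = torch.tensor([1, 2, 0, 1])
print(accuracy_within_k(preds, target, within=1))  # tensor(0.7500): 3 of 4 predictions are off by at most 1
```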
Alternatives
I can also implement them separately or just do it for my project :)
Additional context
In my context, I am doing IVF image stage classification, and the stages are visually very similar, because some of the stages differ only by a single cell, which might actually be hidden, since the image is a 2D top projection of a 3D object.