Add zero_division option to the precision, recall, f1, fbeta. #2198
Conversation
Hi @i-aki-y, …
@SkafteNicki Thank you for your comment. I tried refactoring StatScores to have zero_division. To avoid declaring `zero_division` explicitly in every subclass signature, I pop it from `**kwargs`, e.g.:

```python
class BinaryStatScores(_AbstractStatScores):
    ...

    def __init__(
        self,
        threshold: float = 0.5,
        multidim_average: Literal["global", "samplewise"] = "global",
        ignore_index: Optional[int] = None,
        validate_args: bool = True,
        **kwargs: Any,
    ) -> None:
        # zero_division is taken from kwargs, so the public signature is unchanged
        zero_division = kwargs.pop("zero_division", 0)
        super(_AbstractStatScores, self).__init__(**kwargs)
        if validate_args:
            _binary_stat_scores_arg_validation(threshold, multidim_average, ignore_index, zero_division)
```
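For readers outside the torchmetrics codebase, here is a minimal self-contained sketch of this kwargs-popping pattern; the class names are illustrative, not the actual torchmetrics classes:

```python
from typing import Any


class Metric:
    """Stand-in for the torchmetrics base class (illustrative only)."""

    def __init__(self, **kwargs: Any) -> None:
        if kwargs:
            raise TypeError(f"Unexpected keyword arguments: {sorted(kwargs)}")


class BinaryStatScoresSketch(Metric):
    def __init__(self, threshold: float = 0.5, **kwargs: Any) -> None:
        # Pop zero_division before forwarding the remaining kwargs upward,
        # so the base class never sees an argument it does not accept.
        self.zero_division = kwargs.pop("zero_division", 0)
        self.threshold = threshold
        super().__init__(**kwargs)


m = BinaryStatScoresSketch(zero_division=1)
print(m.zero_division)  # -> 1
```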
@i-aki-y seems that the results are different than expected…
@Borda Thanks, I found some bugs and fixed them. Since the PR (scikit-learn/scikit-learn#27577) was merged, I confirmed that the fix passes the related test cases (e.g. …)
@i-aki-y could you pls have a look at all the failing cases, some with the wrong value…
@Borda OK, I put [WIP] in this PR title. As mentioned above, the current sklearn (1.3.2) has a bug that mishandles zero_division.
Appears this has gone cold? Keen to see support for zero_division elsewhere too, particularly JaccardIndex.
@SkafteNicki let's revive this
@robmarkcole The jaccard index seems to be implemented by using …
@robmarkcole and @i-aki-y I added support in jaccard index now. |
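For reference, a hedged sketch of how the new option might be used with the jaccard index, assuming the functional `binary_jaccard_index` gained the same `zero_division` parameter as the other metrics in this PR:

```python
import torch
from torchmetrics.functional.classification import binary_jaccard_index

# No positive targets and no predicted positives -> intersection/union = 0/0.
target = torch.tensor([0, 0, 0, 0])
preds = torch.tensor([0, 0, 0, 0])

# The assumed zero_division parameter decides what the 0/0 case returns.
print(binary_jaccard_index(preds, target, zero_division=0.0))  # tensor(0.)
print(binary_jaccard_index(preds, target, zero_division=1.0))  # tensor(1.)
```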
The PR has been failing for the …
…tning-AI#2198)

* Add support of zero_division parameter
* fix overlooked
* Fix type error
* Fix type error
* Fix missing comma
* Doc fix wrong math expression
* Fixed StatScores to have zero_division
* fix missing zero_division arg
* fix device mismatch
* use scikit-learn 1.4.0
* fix scikit-learn min ver
* fix for new sklearn version
* fix scikit-learn requirements
* fix incorrect requirements condition
* fix test code to pass in multiple sklearn versions
* changelog
* better docstring
* add jaccardindex
* fix tests
* skip for old sklearn versions

---------

Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Nicki Skafte <[email protected]>
Co-authored-by: Jirka <[email protected]>
What does this PR do?
I want to add a `zero_division` option to the precision, recall, f1, and fbeta metrics, matching their sklearn counterparts.
cf. https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn-metrics-precision-score
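For context, the sklearn parameter this mirrors controls the value returned when a metric would otherwise divide by zero, e.g.:

```python
from sklearn.metrics import precision_score

y_true = [0, 0, 0, 0]
y_pred = [0, 0, 0, 0]  # no predicted positives, so precision is tp/(tp+fp) = 0/0

print(precision_score(y_true, y_pred, zero_division=0))  # 0.0
print(precision_score(y_true, y_pred, zero_division=1))  # 1.0
```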
The `zero_division` option is important when we use samplewise metrics (`multidim_average="samplewise"`) where some samples have no positive targets. The following example shows that `preds1` and `preds2` have the same f1-score (0.3333), but `preds2` perfectly matches the target while `preds1` does not. This means that we cannot distinguish the two models: the model that correctly predicts no positive samples looks identical to the model that returns many false positives. We can fix this by setting `zero_division=1`: for this example, `preds2` would become f1=1.0 while `preds1` stays at f1=0.3333 (a reconstruction of the example is sketched after the note below).

Note:
The latest sklearn (ver. 1.3) has a bug in `f1_score` when `zero_division=1` (scikit-learn/scikit-learn#27577), so some test cases that compare the results with sklearn will fail until the bug is fixed.
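The code example referenced above appears to have been lost when this page was extracted; below is a hedged reconstruction of the described scenario. The exact tensors are assumptions, and the `zero_division` argument on `binary_f1_score` is the parameter this PR adds:

```python
import torch
from torchmetrics.functional.classification import binary_f1_score

# Three samples; only sample 0 contains positive targets (tensors assumed).
target = torch.tensor([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
preds1 = torch.tensor([[1, 1, 0], [1, 0, 0], [0, 1, 0]])  # false positives on samples 1 and 2
preds2 = target.clone()                                   # perfectly matches the target

# Default zero_division=0: the 0/0 samples score 0, so both models look identical.
print(binary_f1_score(preds1, target, multidim_average="samplewise"))
# tensor([1., 0., 0.])  -> mean f1 = 0.3333
print(binary_f1_score(preds2, target, multidim_average="samplewise"))
# tensor([1., 0., 0.])  -> mean f1 = 0.3333

# With zero_division=1, the all-negative samples that preds2 predicts perfectly
# now score 1, so the perfect model becomes distinguishable.
print(binary_f1_score(preds1, target, multidim_average="samplewise", zero_division=1))
# tensor([1., 0., 0.])  -> mean f1 = 0.3333
print(binary_f1_score(preds2, target, multidim_average="samplewise", zero_division=1))
# tensor([1., 1., 1.])  -> mean f1 = 1.0
```

Note that `preds1` is unchanged by `zero_division=1`: its empty samples contain false positives, so the f1 denominator is nonzero and the score is a true 0 rather than a 0/0 case.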
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃
📚 Documentation preview 📚: https://torchmetrics--2198.org.readthedocs.build/en/2198/