Classification metrics overhaul: precision & recall (4/n) #4842
Conversation
Hello @tadejsv! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-01-17 18:02:19 UTC
SkafteNicki left a comment:

LGTM :]
Review comments were left on these lines of the diff:

```python
assert get_num_classes(pred, target, num_classes) == expected_num_classes
```

```python
@pytest.mark.parametrize(['pred', 'target', 'expected_tp', 'expected_fp',
```
Why did we remove those tests?
I am removing tests for deprecated functions. This was also done in other PRs before, see #4704
rohitgr7 left a comment:

Looks good. Just some comments.
teddykoker left a comment:

Nice work :)
Borda left a comment:

some minor comments, but otherwise it LGTM ;]
Co-authored-by: Jirka Borovec <[email protected]>
This PR is a spin-off from #4835.
What does this PR do?
Recall, Precision
These are all metrics that can be represented as a (quotient) function of "stat scores" - thanks to subclassing `StatScores`, their code is extremely simple. Here are the parameters common to all of them:

- `average`: this builds on the `reduce` parameter in `StatScores`. The options here (`micro`, `macro`, `weighted`, `none` or `None`, `samples`) are exactly equivalent to the sklearn counterparts, so I won't go into details.
- `mdmc_average`: builds on the `mdmc_reduce` from `StatScores`. This decides how to average scores for multi-dimensional multi-class inputs. Already discussed in `mdmc_reduce`.

Both also get the `top_k` parameter, enabling their use as Recall@K and Precision@K - very useful for information retrieval.

Deprecations
I have deprecated the `precision_recall` metric, as well as the old `precision` and `recall` (in case someone was importing them using the full path, otherwise they are replaced by the new `precision` and `recall`).
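To make the "quotient of stat scores" idea concrete, here is a minimal sketch. This is hypothetical illustration code, not the PR's actual implementation (the real metrics operate on tensors and subclass `StatScores`): it shows how precision reduces to a quotient of pooled (micro) or per-class (macro) stat scores, plus a simplified Precision@K in the information-retrieval sense.

```python
# Hypothetical sketch, not the PR's code: precision as a quotient of
# "stat scores" (tp, fp), with micro vs. macro averaging.

def stat_scores(preds, target, num_classes):
    """Per-class true positives and false positives for 1-d class labels."""
    tp = [0] * num_classes
    fp = [0] * num_classes
    for p, t in zip(preds, target):
        if p == t:
            tp[p] += 1
        else:
            fp[p] += 1
    return tp, fp

def precision(preds, target, num_classes, average="micro"):
    tp, fp = stat_scores(preds, target, num_classes)
    if average == "micro":
        # Pool the stat scores across classes, then take the quotient once.
        return sum(tp) / (sum(tp) + sum(fp))
    if average == "macro":
        # Take the quotient per class, then an unweighted mean over classes.
        per_class = [t / (t + f) if (t + f) else 0.0 for t, f in zip(tp, fp)]
        return sum(per_class) / num_classes
    raise ValueError(f"unsupported average: {average}")

def precision_at_k(scores, target, k):
    """Simplified Precision@K: a sample counts as a hit when its target
    class is among the k highest-scored classes."""
    hits = 0
    for sample_scores, t in zip(scores, target):
        topk = sorted(range(len(sample_scores)),
                      key=lambda c: sample_scores[c], reverse=True)[:k]
        hits += t in topk
    return hits / len(target)

preds, target = [0, 1, 2, 2, 1], [0, 1, 1, 2, 1]
print(precision(preds, target, num_classes=3, average="micro"))  # 0.8
print(precision(preds, target, num_classes=3, average="macro"))  # ≈ 0.8333
```

Note how micro averaging pools tp/fp before dividing, while macro divides per class first; the two disagree exactly when classes are imbalanced, which is why `average` matters.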