Conversation

SSaishruthi
Contributor

This PR closes #490

Changes:

  1. A threshold parameter has been added to both the F1 and F-Beta scores.
  2. Using the threshold parameter, the outputs are converted to 0s and 1s.
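A minimal sketch of the thresholding behaviour described above (the function names, the strict `>` convention, and the 0.5 default are illustrative assumptions, not the PR's actual code):

```python
import numpy as np

def binarize(y_pred, threshold=0.5):
    """Convert probability outputs to hard 0/1 labels.

    Predictions strictly above the threshold become 1;
    everything else becomes 0 (an assumed convention).
    """
    return (np.asarray(y_pred) > threshold).astype(int)

def f1_score(y_true, y_pred, threshold=0.5):
    """F1 on binarized predictions: 2 * P * R / (P + R)."""
    y_true = np.asarray(y_true)
    y_hat = binarize(y_pred, threshold)
    tp = np.sum((y_hat == 1) & (y_true == 1))
    fp = np.sum((y_hat == 1) & (y_true == 0))
    fn = np.sum((y_hat == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, `f1_score([1, 0, 1, 1], [0.9, 0.4, 0.6, 0.3])` binarizes the predictions to `[1, 0, 1, 0]`, giving precision 1.0 and recall 2/3, i.e. F1 = 0.8.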

@PhilipMay
Contributor

PhilipMay commented Sep 11, 2019

@SSaishruthi I do not think that this implementation with threshold fixes the problem.
Please see my comment here: #490 (comment)

@SSaishruthi
Contributor Author

Left a comment in the issue.

@SSaishruthi
Contributor Author

SSaishruthi commented Sep 11, 2019

@PhilipMay

I am taking binary accuracy as a reference here: https://github.com/tensorflow/tensorflow/blob/2ff39d00faf8f7e433ddcae0aa278f6e573b0c55/tensorflow/python/keras/metrics.py#L630

For this to work, you need to provide the threshold value as 0.49 in the example above. I can probably fix the threshold default to 0.5. I think a prediction below 0.5 may not be considered a good one.
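The 0.49 workaround comes from the strict greater-than comparison: assuming the same convention as Keras's `binary_accuracy`, a prediction of exactly 0.5 is not counted as positive at threshold 0.5. A small illustration:

```python
import numpy as np

preds = np.array([0.3, 0.5, 0.7])

# With a strict > comparison, a prediction of exactly 0.5
# is NOT counted as positive at threshold 0.5.
at_050 = (preds > 0.5).astype(int)   # -> [0, 0, 1]

# Lowering the threshold to 0.49 pushes the 0.5 prediction
# over the line.
at_049 = (preds > 0.49).astype(int)  # -> [0, 1, 1]
```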

I have tested this with scikit-learn as well: https://colab.research.google.com/drive/1qSq0SsYkPqjdKUgM1RM4kKM67X75ocFj

The goal here is to make it compatible with both multi-class and multi-label classification.
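For the multi-label case, one way thresholding can be combined with a per-label score is to binarize each column and compute F1 column-wise. This is a sketch under the same assumed strict-`>` convention; the function name and example data are made up, not taken from the PR:

```python
import numpy as np

def per_label_f1(y_true, y_prob, threshold=0.5):
    """Per-label F1 for multi-label data: threshold each column,
    then score each label independently."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) > threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1), axis=0)
    fp = np.sum((y_pred == 1) & (y_true == 0), axis=0)
    fn = np.sum((y_pred == 0) & (y_true == 1), axis=0)
    denom = 2 * tp + fp + fn
    # F1 = 2*TP / (2*TP + FP + FN); define 0/0 as 0.
    return np.where(denom > 0, 2 * tp / np.maximum(denom, 1), 0.0)

# Hypothetical multi-label ground truth and probability outputs.
y_true = [[1, 0, 1],
          [0, 1, 0],
          [1, 1, 0]]
y_prob = [[0.9, 0.2, 0.6],
          [0.1, 0.8, 0.3],
          [0.7, 0.4, 0.2]]

scores = per_label_f1(y_true, y_prob)  # one F1 value per label column
```

Here the middle label misses one positive (0.4 falls below the threshold), so its F1 drops to 2/3 while the other two labels score 1.0.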

@Squadrick
Member

I'm closing this PR. I've created a new PR, #502, that adds threshold as well as some refactoring of f_scores and f_tests.

@Squadrick closed this Sep 11, 2019

Successfully merging this pull request may close this issue: Bug in F1 and FBeta implementation and test
