Describe the bug
The tests for FBeta and F1 both use a softmax function with just one output.
addons/tensorflow_addons/metrics/f1_test.py, line 116 (commit 7624954):
model.add(layers.Dense(1, activation='softmax'))
The effect is that the output (prediction) is always 1. The right activation function for a binary classification with just one output is sigmoid. But this is not the only problem: this bug in the test hides an even worse one.
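A small numpy sketch (a stand-in for the Keras layer, not taken from the test itself) shows why a single-unit softmax is degenerate:

```python
import numpy as np

def softmax(x):
    # Standard softmax over the last axis.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# With a single output unit, softmax normalizes over one value,
# so the "probability" is exactly 1 regardless of the logit.
for logit in [-5.0, 0.0, 3.2]:
    print(softmax(np.array([logit])))  # always [1.]

# Sigmoid actually varies with the logit, as a binary classifier needs.
print(sigmoid(-5.0), sigmoid(0.0), sigmoid(3.2))
```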
The other bug is that F1 and FBeta both cannot handle predicted values other than exactly 1 (1.0) and 0 (0.0). But the predictions of a binary classifier (with sigmoid), and of a multi-class classifier with softmax, almost always lie strictly between 0 and 1 (one-hot encoded for multi-class). The result of this bug is that this implementation of F1 and FBeta always returns 0.0 whenever the predicted values are not exactly 0.0 or 1.0, which is not realistic.
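A minimal numpy sketch of this failure mode (the exact-equality thresholding here is my assumption about the buggy behavior, not code copied from the library):

```python
import numpy as np

def f1_exact_match(y_true, y_pred):
    # Hypothetical sketch: a positive prediction is counted only when
    # the predicted value is exactly 1.0, mirroring the reported bug.
    tp = np.sum((y_pred == 1.0) & (y_true == 1.0))
    fp = np.sum((y_pred == 1.0) & (y_true == 0.0))
    fn = np.sum((y_pred != 1.0) & (y_true == 1.0))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = np.array([1.0, 0.0, 1.0, 1.0])

# Hard 0/1 predictions, as used in the tests: F1 looks fine.
print(f1_exact_match(y_true, np.array([1.0, 0.0, 1.0, 0.0])))  # 0.8

# Realistic sigmoid outputs: nothing equals exactly 1.0, so F1 collapses to 0.
print(f1_exact_match(y_true, np.array([0.92, 0.11, 0.85, 0.40])))  # 0.0
```

This is why every existing test with 0/1 predictions passes while real model outputs score 0.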
Also, all other tests of F1 and FBeta use only 0 or 1 as the predicted values, which does not reflect reality:
preds = tf.constant([[0, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=tf.int32)
Code to reproduce the issue
Just change softmax to sigmoid in the tests, set verbose=1, and observe the results.