Problem with using Tensorflow addons' metrics correctly in functional API #818

@JoBerkner

Description

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 / Google Colab
  • TensorFlow version and how it was installed (source or binary): 2.1.0-rc1 (pip install)
  • TensorFlow-Addons version and how it was installed (source or binary): 0.6.0 (pip install)
  • Python version: 3.6.9
  • Is GPU used? (yes/no):

Describe the bug

I have an LSTM model that performs binary classification of human activities from multivariate smartphone sensor data. The two classes are imbalanced (1:50), so I would like to use the F1-score as a metric, which is why I came across TensorFlow Addons.

I now have a problem applying this metric in my functional API model.
If I use another value for the metric's average argument (e.g., average=None or average="macro"), I get an error message when fitting the model:

ValueError: Dimension 0 in both shapes must be equal, but are 2 and 1. Shapes are [2] and [1]. for 'AssignAddVariableOp' (op: 'AssignAddVariableOp') with input shapes: [], [1].

And if I use average="micro", I do not get the error, but the F1-score stays at 0 throughout training while the loss decreases.

I believe I am still doing something wrong here. Can anybody provide an explanation?
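A likely source of the dimension mismatch (my reading, not confirmed in this report): F1Score(num_classes=2) accumulates per-class counts of shape (2,), while the single sigmoid output and the (batch, 1) labels supply only one column. Keeping num_classes=2 would presumably require a two-unit softmax head and one-hot labels; a minimal numpy sketch of the label conversion that would be needed:

```python
import numpy as np

# Binary labels shaped (batch, 1), as produced for a single sigmoid unit.
y = np.array([[0], [1], [0], [1]])

# One-hot encode to (batch, 2) so both columns of a
# num_classes=2 metric receive per-class counts.
y_onehot = np.eye(2)[y.ravel()]

print(y_onehot.shape)  # (4, 2)
```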

Code to reproduce the issue

import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

def create_model(n_neurons=150, learning_rate=0.01, activation="relu", loss="binary_crossentropy"):

   #create input layer and assign to current output layer
   input_ = keras.layers.Input(shape=(X_train.shape[1],X_train.shape[2])) 

   #add LSTM layer
   lstm = keras.layers.LSTM(n_neurons, activation=activation)(input_)

   #Output Layer
   output = keras.layers.Dense(1, activation="sigmoid")(lstm)

   #Create Model
   model = keras.models.Model(inputs=[input_], outputs=[output])

   #Add optimizer
   optimizer=keras.optimizers.SGD(lr=learning_rate, clipvalue=0.5)

   #Compile model
   model.compile(loss=loss, optimizer=optimizer, metrics=[tfa.metrics.F1Score(num_classes=2, average="micro")])

   print(model.summary())

   return model

#Create the model
model = create_model()

#fit the model
history = model.fit(X_train,y_train, 
                epochs=300, 
                validation_data=(X_val, y_val))
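An alternative that keeps the single sigmoid output (an assumption on my part, not something this report confirms) is to pass num_classes=1 together with an explicit threshold to tfa.metrics.F1Score, so predictions are binarized at the threshold instead of argmax-ed across columns. A self-contained numpy sketch of the F1 value such a metric should reproduce on one sigmoid column:

```python
import numpy as np

def binary_f1(y_true, y_prob, threshold=0.5):
    """F1 for a single sigmoid output column, thresholded at `threshold`."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy imbalanced labels and sigmoid scores, shaped (batch, 1) like the model output.
y_true = np.array([[1], [0], [0], [0], [1], [0]])
y_prob = np.array([[0.9], [0.2], [0.6], [0.1], [0.4], [0.3]])
print(binary_f1(y_true, y_prob))  # 0.5 (tp=1, fp=1, fn=1)
```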

Other info / logs
