WeightNormalization with RNNs: shape issue #698

@lntsmn

Description

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab
  • TensorFlow version and how it was installed (source or binary): 2.0.0 binary
  • TensorFlow-Addons version and how it was installed (source or binary): 0.6.0 binary
  • Python version: 3.6.8
  • Is GPU used? (yes/no): yes

Describe the bug

The WeightNormalization layer wrapper cannot be used with RNNs when the input sequence length is left undetermined (shape dimension None). See the code below for the errors.

Code to reproduce the issue

import tensorflow as tf
import tensorflow_addons as tfa

n_features = 3
seq_length = None  # undetermined sequence length
rnn_units = 4

input_layer = tf.keras.layers.Input(shape=(seq_length, n_features))
rnn_layer = tf.keras.layers.SimpleRNN(rnn_units)
dense_layer = tf.keras.layers.Dense(1)
wn_rnn_layer = tfa.layers.WeightNormalization(rnn_layer)  # wrapping the RNN is what triggers the failure below
wn_model = tf.keras.models.Sequential(layers=(input_layer, wn_rnn_layer, dense_layer))

yields

ValueError: as_list() is not defined on an unknown TensorShape.

Note that:

  1. The same code runs without the WeightNormalization wrapper (see the sketch after this list).
  2. Interestingly, specifying a fixed batch size via batch_shape (and setting return_sequences=True), as in

batch_size = 1
input_layer = tf.keras.layers.Input(batch_shape=(batch_size, seq_length, n_features))
rnn_layer = tf.keras.layers.SimpleRNN(rnn_units, return_sequences=True)
dense_layer = tf.keras.layers.Dense(1)
wn_rnn_layer = tfa.layers.WeightNormalization(rnn_layer)
wn_model = tf.keras.models.Sequential(layers=(input_layer, wn_rnn_layer, dense_layer))

gives

IndexError: list assignment index out of range

instead.
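
For reference, point 1 above can be checked with the same architecture minus the wrapper. This is a minimal sketch (the reproduction code from above with WeightNormalization removed, plus a summary() call to confirm the model builds), which constructs without error:

import tensorflow as tf

n_features = 3
seq_length = None  # still an undetermined sequence length
rnn_units = 4

# Identical model, but the SimpleRNN is not wrapped in WeightNormalization.
input_layer = tf.keras.layers.Input(shape=(seq_length, n_features))
rnn_layer = tf.keras.layers.SimpleRNN(rnn_units)
dense_layer = tf.keras.layers.Dense(1)
model = tf.keras.models.Sequential(layers=(input_layer, rnn_layer, dense_layer))
model.summary()  # prints the layer stack, confirming the model was built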

Other info / logs
