Description
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab (Linux)
- TensorFlow version and how it was installed (source or binary): 2.4.1
- TensorFlow-Addons version and how it was installed (source or binary): 0.12.1
- Python version: 3.7
- Is GPU used? (yes/no): yes
Describe the bug
When I apply mixed precision together with RectifiedAdam, the total_steps variable is created with dtype int32, which is not compatible with the float32 value in the checkpoint. Maybe accepting a float dtype for this argument would solve it.
Or do you have a better way to handle this? Thank you.
Code to reproduce the issue
# Optimizer setup
from tensorflow import keras
import tensorflow_addons as tfa

learning_rate_fn = keras.experimental.CosineDecay(
    initial_learning_rate=2e-3,
    decay_steps=1000
)
radam = tfa.optimizers.RectifiedAdam(
    learning_rate=learning_rate_fn,
    total_steps=1000,
    warmup_proportion=0.02,
    min_lr=5e-5)
ranger = tfa.optimizers.Lookahead(radam, sync_period=6, slow_step_size=0.5)
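For context, a minimal sketch of the surrounding setup in which the mismatch shows up: mixed precision enabled, a short training run, then a checkpoint save/restore of the Lookahead(RectifiedAdam) optimizer. The tiny Dense model, the /tmp path, and the use of tf.train.Checkpoint here are illustrative assumptions, not taken from the original report; it continues from the ranger optimizer defined above.

# A hypothetical continuation of the setup above (model, data, and paths are
# placeholders). Under the "mixed_float16" policy, compile() wraps the
# optimizer in a LossScaleOptimizer, which matches the "optimizer/loss_scale"
# entries in the checkpoint dtype map shown in the logs below.
import numpy as np
import tensorflow as tf

keras.mixed_precision.set_global_policy("mixed_float16")

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, dtype="float32"),  # keep the output in float32
])
model.compile(optimizer=ranger, loss="mse")

# One short fit so the optimizer creates its slot and hyper variables
# (including the int32 total_steps variable).
x = np.random.rand(32, 8).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)

# Save and restore the optimizer state; the restore step is where the
# int32 total_steps variable reportedly clashes with the float32 value
# expected from the checkpoint.
ckpt = tf.train.Checkpoint(model=model, optimizer=model.optimizer)
path = ckpt.save("/tmp/radam_mixed_precision/ckpt")
ckpt.restore(path)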
Other info / logs
dtype_from_key
......
'optimizer/iter/.ATTRIBUTES/VARIABLE_VALUE': tf.int64,
'optimizer/lh_base_optimizer/beta_1/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/lh_base_optimizer/beta_2/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/lh_base_optimizer/decay/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/lh_base_optimizer/min_lr/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/lh_base_optimizer/sma_threshold/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/lh_base_optimizer/total_steps/.ATTRIBUTES/VARIABLE_VALUE': tf.int32,
'optimizer/lh_base_optimizer/warmup_proportion/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/lh_base_optimizer/weight_decay/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/loss_scale/current_loss_scale/.ATTRIBUTES/VARIABLE_VALUE': tf.float32,
'optimizer/loss_scale/good_steps/.ATTRIBUTES/VARIABLE_VALUE': tf.int64}