Float Object is not Callable when calling scheduler.step() #13675

@lsaeuro

Description

🐛 Bug

I have initialized an optimizer and a scheduler like this:

def configure_optimizers(self):
    opt = torch.optim.Adam(self.model.parameters(), lr=cfg.learning_rate)
    sch = torch.optim.lr_scheduler.MultiplicativeLR(opt, lr_lambda=0.95)  # decrease of 5% every epoch
    return [opt], [sch]

Since I just want to update the scheduler after each epoch, I did not modify the update behaviour in the training phase, but this is the error I get after the first epoch:

self.update_lr_schedulers("epoch", update_plateau_schedulers=False)
File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 448, in update_lr_schedulers
opt_indices=[opt_idx for opt_idx, _ in active_optimizers],
File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 509, in _update_learning_rates
lr_scheduler["scheduler"].step()
File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 152, in step
values = self.get_lr()
File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 329, in get_lr
for lmbda, group in zip(self.lr_lambdas, self.optimizer.param_groups)]
File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 329, in <listcomp>
for lmbda, group in zip(self.lr_lambdas, self.optimizer.param_groups)]
TypeError: 'float' object is not callable
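The traceback points at `get_lr()`, where each entry of `self.lr_lambdas` is invoked as a function; since a float was passed for `lr_lambda`, calling it raises the TypeError. A minimal sketch of the distinction, not tied to Lightning:

```python
# What get_lr() effectively does when a bare float was passed:
factor = 0.95
try:
    factor(1)  # floats are not callable
except TypeError as e:
    print(e)  # 'float' object is not callable

# The form MultiplicativeLR expects: a callable mapping epoch -> factor.
lr_lambda = lambda epoch: 0.95
print(lr_lambda(1))
```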

Expected behavior

This error should not occur; in fact, with another scheduler (in particular
sch = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=10)) I do not get the error and training proceeds smoothly.

To Reproduce

import torch
from torchvision.models import resnet18
net = resnet18()
optimizer = torch.optim.Adam(net.parameters(), lr=0.01) 
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 0.95) # BUG SCHEDULER 
#scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 3, gamma=0.1) # WORKING ONE
for i in range(10):
    print(i, scheduler.get_lr())
    scheduler.step()
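For comparison, PyTorch documents MultiplicativeLR's lr_lambda as a function that computes a multiplicative factor given the epoch (or a list of such functions), so passing the bare factor 0.95 is what triggers the TypeError. A sketch of the corrected reproduction, using a small stand-in model so it runs without torchvision:

```python
import torch

net = torch.nn.Linear(4, 2)  # stand-in for resnet18, to keep the example light
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
# lr_lambda must be a callable returning the per-epoch factor,
# not the factor itself.
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    optimizer, lr_lambda=lambda epoch: 0.95
)
for i in range(10):
    print(i, scheduler.get_last_lr())  # lr shrinks by 5% each step
    scheduler.step()
```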

Environment

  • CUDA:
    - GPU:
    - NVIDIA RTX A6000
    - available: True
    - version: 11.3
  • Packages:
    - numpy: 1.19.2
    - pyTorch_debug: False
    - pyTorch_version: 1.10.2
    - pytorch-lightning: 1.5.0
    - tqdm: 4.64.0
  • System:
    - OS: Linux
    - architecture:
    - 64bit
    -
    - processor: x86_64
    - python: 3.6.13
    - version: #44~20.04.1-Ubuntu SMP Fri Jun 24 13:27:29 UTC 2022

cc @rohitgr7 @akihironitta

    Labels

    3rd party (Related to a 3rd-party) · lr scheduler · pl (Generic label for PyTorch Lightning package) · question (Further information is requested)
