Continuing training when using learning rate schedulers #5486

@KirillShmilovich

Description

❓ Questions and Help

When restarting training on a model that uses a learning rate scheduler, it seems that the original learning rate is used rather than the scheduler-updated learning rate.

Code

For example, a model with the following configure_optimizers:

def configure_optimizers(self):
    optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
    scheduler = optim.lr_scheduler.ExponentialLR(
        optimizer, gamma=self.hparams.learning_gamma
    )
    return [optimizer], [scheduler]
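
For reference, the decay this scheduler is expected to produce can be checked with plain PyTorch alone. A minimal sketch (the concrete lr and gamma values below are arbitrary, not taken from the model above):

import torch
from torch import optim

# ExponentialLR multiplies the learning rate by gamma on every scheduler step,
# so after n epochs the effective rate should be lr * gamma ** n.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = optim.Adam(params, lr=1e-3)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(5):
    optimizer.step()   # stand-in for one epoch of training
    scheduler.step()

print(optimizer.param_groups[0]["lr"])  # 1e-3 * 0.9 ** 5 ≈ 5.9e-4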

With learning_gamma != 1.0, when restarting training, e.g.:

model = myModel.load_from_checkpoint(ckpt_fname)
lr_monitor = LearningRateMonitor(logging_interval="epoch")
trainer = Trainer(resume_from_checkpoint=ckpt_fname, callbacks=[lr_monitor])
trainer.fit(model)

The logged learning rate is equal to the initial learning rate rather than the scheduler-updated one.
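
One way to narrow this down is to inspect the checkpoint file directly and check whether the decayed learning rate was saved at all. A sketch, assuming the checkpoint uses the usual Lightning entries "optimizer_states" and "lr_schedulers" (key names may differ between versions):

import torch

ckpt = torch.load(ckpt_fname, map_location="cpu")
print(ckpt.get("epoch"))

# Learning rates stored with the optimizer state
for opt_state in ckpt.get("optimizer_states", []):
    print([group["lr"] for group in opt_state["param_groups"]])

# Scheduler state; last_epoch should be advanced past 0
for sched_state in ckpt.get("lr_schedulers", []):
    print(sched_state)

If the saved optimizer state already shows the decayed rate, the problem would be in restoring that state on resume rather than in saving it.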

What's your environment?

  • OS: Linux
  • Packaging: conda
  • Version: 1.1.0
