Conversation

@HansBambel
Contributor

Because the learning rates were updated only after run_training_teardown() had run, a learning rate scheduler could not monitor a metric that exists only at the end of the validation loop.

Following @SkafteNicki's suggestion, I swapped the order of self.update_learning_rates() and self.run_training_teardown() in training_loop.py.
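To illustrate why the call order matters, here is a minimal, self-contained sketch. It is not the real training_loop.py; the method names mirror the PR description, while the metric dictionary and the `fixed_order` flag are hypothetical stand-ins for Lightning's internals.

```python
class TrainingLoop:
    """Toy stand-in for the training loop; only the call order is modeled."""

    def __init__(self, fixed_order: bool):
        self.fixed_order = fixed_order
        # Hypothetical: the validation loop stores its monitored metric here.
        self.callback_metrics = {"val_loss": 0.42}

    def update_learning_rates(self):
        # A ReduceLROnPlateau-style scheduler must look up its monitored metric.
        if "val_loss" not in self.callback_metrics:
            raise KeyError("scheduler metric 'val_loss' not found")

    def run_training_teardown(self):
        # Teardown discards state, including the validation metrics.
        self.callback_metrics.clear()

    def end_of_training(self):
        if self.fixed_order:
            # After this PR: update LRs while the metrics still exist.
            self.update_learning_rates()
            self.run_training_teardown()
        else:
            # Before this PR: teardown ran first, so the metric lookup failed.
            self.run_training_teardown()
            self.update_learning_rates()
```

With `fixed_order=True` the scheduler update succeeds; with `fixed_order=False` the metric lookup raises, which is the failure mode behind the linked issue.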

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

What does this PR do?

Fixes #1889

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@mergify mergify bot requested a review from a team May 19, 2020 15:09
@williamFalcon williamFalcon merged commit 3459a54 into Lightning-AI:master May 19, 2020
@HansBambel HansBambel deleted the bugfix/1889_fix_scale_batch_size branch May 19, 2020 17:26


Development

Successfully merging this pull request may close these issues.

trainer.scale_batch_size() throws exception due to LRScheduler
