
Conversation

Contributor

@SeanNaren SeanNaren commented Feb 25, 2021

What does this PR do?

Fixes #6194

We recently modified the behaviour of the early stopping callback in the accelerator refactor, which led to the bug mentioned above: the callback defaulted the shared stop flag to False, overwriting a True value that other callbacks could already have set.
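For context, a minimal, hypothetical sketch of the failure mode and the style of fix; the class and function names below are illustrative and this is not the actual Lightning source:

```python
# Hypothetical sketch of the bug and fix; not the actual Lightning internals.

class TrainerStub:
    """Stands in for the Trainer, which holds a single shared stop flag."""
    should_stop = False


def buggy_early_stopping_check(trainer: TrainerStub, should_stop: bool) -> None:
    # Bug: the callback writes its own decision unconditionally, so a False
    # from this callback erases a True set earlier by another callback.
    trainer.should_stop = should_stop


def fixed_early_stopping_check(trainer: TrainerStub, should_stop: bool) -> None:
    # Fix: never downgrade an existing True; only OR in the new decision.
    trainer.should_stop = trainer.should_stop or should_stop


trainer = TrainerStub()

# Callback A requests a stop, callback B does not:
buggy_early_stopping_check(trainer, True)
buggy_early_stopping_check(trainer, False)
assert trainer.should_stop is False  # stop request lost -- the bug

trainer.should_stop = False
fixed_early_stopping_check(trainer, True)
fixed_early_stopping_check(trainer, False)
assert trainer.should_stop is True  # stop request preserved
```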

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

Make sure you had fun coding 🙃

@SeanNaren SeanNaren added the bug (Something isn't working), priority: 0 (High priority task), and callback labels Feb 25, 2021
@SeanNaren SeanNaren added this to the 1.2.x milestone Feb 25, 2021
@SeanNaren SeanNaren self-assigned this Feb 25, 2021

codecov bot commented Feb 25, 2021

Codecov Report

Merging #6197 (b8e063b) into master (3ed8ef8) will decrease coverage by 0%.
The diff coverage is 100%.

@@          Coverage Diff           @@
##           master   #6197   +/-   ##
======================================
- Coverage      93%     93%   -0%     
======================================
  Files         159     159           
  Lines       11378   11375    -3     
======================================
- Hits        10623   10591   -32     
- Misses        755     784   +29     

Contributor

@tchaton tchaton left a comment

Great fix!

Comment on lines +362 to +363
def on_train_end(self) -> None:
assert self.trainer.current_epoch == self.expected_end_epoch, 'Early Stopping Failed'
Collaborator

I would drop this and rather check in the test that the trainer's epoch is as expected, so there is no check hidden inside the model.

Contributor Author

That's what I originally did, but because of how DDP Spawn works, the local trainer's current epoch isn't kept in sync (it's only updated inside the spawned processes during training), which is fair. That's why I had to move the check to on_train_end, since that hook runs within the spawned process!
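To illustrate the point with a minimal sketch (not the PR's actual test; `EarlyStoppingBoringModel` and `expected_end_epoch` are illustrative names): with `ddp_spawn`, the `Trainer` the test holds lives in the parent process while training mutates copies inside the spawned workers, so asserting on `trainer.current_epoch` after `fit()` would read stale parent-process state. Putting the assertion in `on_train_end` runs it inside each spawned worker, where the counter is up to date:

```python
import pytorch_lightning as pl


class EarlyStoppingBoringModel(pl.LightningModule):
    # Illustrative attribute the test would set to the epoch at which
    # early stopping is expected to have ended training.
    expected_end_epoch = 2

    def on_train_end(self) -> None:
        # Runs inside the spawned process, where the trainer's epoch
        # counter reflects the training that actually happened.
        assert self.trainer.current_epoch == self.expected_end_epoch, 'Early Stopping Failed'


# In the test process (sketch only):
#   trainer = pl.Trainer(accelerator="ddp_spawn", ...)
#   trainer.fit(EarlyStoppingBoringModel())
# Asserting on trainer.current_epoch here instead would read the
# parent-process copy, which is not kept in sync with the workers.
```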

Collaborator

so can we have both?

Contributor Author

We could, but I'd need to separate out the tests. I don't think it's really worth it because it would mean a lot of duplication.

@SeanNaren SeanNaren merged commit dd2f5a0 into master Feb 25, 2021
@SeanNaren SeanNaren deleted the fix/multi_early_stopping branch February 25, 2021 15:44
@carmocca carmocca mentioned this pull request Feb 25, 2021
kaushikb11 pushed a commit to kaushikb11/pytorch-lightning that referenced this pull request Mar 2, 2021
* Fix for multiple callbacks

* Add CHANGELOG.md

* Remove old params

* Skip tests on windows using ddp

* Change name of the variable to not clash with should stop, which is separate

* Apply suggestions from code review

* Fix params

Co-authored-by: Jirka Borovec <[email protected]>
lexierule pushed a commit that referenced this pull request Mar 5, 2021
* Fix for multiple callbacks

* Add CHANGELOG.md

* Remove old params

* Skip tests on windows using ddp

* Change name of the variable to not clash with should stop, which is separate

* Apply suggestions from code review

* Fix params

Co-authored-by: Jirka Borovec <[email protected]>

Labels

bug (Something isn't working), callback, priority: 0 (High priority task)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Latest Lightning does not support multiple callbacks that stop

5 participants