Conversation

Contributor

@Palzer Palzer commented Nov 15, 2020

What does this PR do?

current_step should be set to global_step instead of global_step + 1. This fixes the issue that caused the tuner not to tune the learning rate when batch_size was also being tuned. The comment on that line also implies this change should have been made already.

Fixes #4616
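
A minimal sketch of the shape of the fix (the class, method, and attribute names below are illustrative stand-ins, not the actual Lightning source; only the use of trainer.global_step, the beta smoothing factor, and the '+1' in the exponent come from this PR):

```python
from types import SimpleNamespace


class LRFinderCallbackSketch:
    """Illustrative stand-in for the LR finder callback touched by this PR."""

    def __init__(self, beta: float = 0.98):
        self.beta = beta      # smoothing factor for the running loss
        self.avg_loss = 0.0   # exponential moving average of the loss
        self.losses = []      # smoothed loss recorded per optimizer step

    def on_train_batch_end(self, trainer, current_loss: float) -> None:
        # The fix: use the 0-based global step directly.
        # Before: current_step = trainer.global_step + 1, which (per the PR
        # description) caused the tuner to skip tuning the learning rate when
        # batch_size had also been tuned.
        current_step = trainer.global_step

        # Bias-corrected exponential smoothing; the '+1' in the exponent keeps
        # the divisor non-zero at step 0 and reproduces the pre-fix values,
        # where the step counter started at 1.
        self.avg_loss = self.beta * self.avg_loss + (1 - self.beta) * current_loss
        smoothed_loss = self.avg_loss / (1 - self.beta ** (current_step + 1))
        self.losses.append(smoothed_loss)


# Tiny usage example with a stand-in trainer object:
cb = LRFinderCallbackSketch()
for step, loss in enumerate([2.5, 2.4, 2.6]):
    cb.on_train_batch_end(SimpleNamespace(global_step=step), loss)
print(cb.losses)
```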

Before submitting

  • Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified; bug fixes should be included in bug-fix release milestones (m.f.X) and features should be included in (m.X.b) releases.

Did you have fun?

Make sure you had fun coding 🙃

added a '+1' to computing the smoothed loss. This maintains the behavior for the smoothed loss as before the bug fix

pep8speaks commented Nov 15, 2020

Hello @Palzer! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-03-09 00:00:55 UTC

@Palzer Palzer marked this pull request as draft November 15, 2020 17:08

codecov bot commented Nov 15, 2020

Codecov Report

Merging #4688 (ebc17f1) into master (9eded7f) will decrease coverage by 2%.
The diff coverage is 100%.

@@           Coverage Diff           @@
##           master   #4688    +/-   ##
=======================================
- Coverage      94%     92%    -2%     
=======================================
  Files         161     161            
  Lines       11497   11497            
=======================================
- Hits        10753   10529   -224     
- Misses        744     968   +224     

@Palzer Palzer marked this pull request as ready for review November 15, 2020 17:38
Contributor Author

Palzer commented Nov 15, 2020

Not sure about the failing timing test.

@SkafteNicki
Collaborator

@Palzer the failing test seems to be unrelated to the change. Could you add some sort of test that shows the issue is solved?


stale bot commented Nov 30, 2020

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help see our docs: https://pytorch-lightning.readthedocs.io/en/latest/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Slack. Thank you for your contributions.

@stale stale bot added the won't fix label Nov 30, 2020
Collaborator

@Borda Borda left a comment

LGTM, mind adding a test for this case? :]

@stale stale bot removed the won't fix label Nov 30, 2020
@Borda Borda added the bug and tuner labels Nov 30, 2020
@Borda Borda added this to the 1.1 milestone Nov 30, 2020
Contributor

@tchaton tchaton left a comment

Great catch! Mind adding a test?

@tchaton tchaton added the waiting on author label Dec 3, 2020
Contributor Author

Palzer commented Dec 4, 2020

My laptop died a week or so ago. New one should be arriving Monday and then I can take care of tests. Sorry for the delay!

@edenlightning edenlightning removed this from the 1.1 milestone Dec 8, 2020
@edenlightning edenlightning added this to the 1.1.x milestone Dec 8, 2020
@mergify mergify bot requested a review from a team December 12, 2020 14:59
@Borda Borda changed the base branch from master to release/1.2-dev February 8, 2021 12:09
@Borda Borda requested a review from SkafteNicki as a code owner February 8, 2021 12:09
Collaborator

Borda commented Feb 8, 2021

@Palzer there won't be any more releases in 1.1.x, so I have changed your destination branch to the 1.2 feature branch (release/1.2-dev).
Please note there is no need to close this PR; just rebase on the target branch, or, perhaps easier, cherry-pick the commit (@kaushikb11 will help you if needed).

@Borda Borda modified the milestones: 1.1.x, 1.2 Feb 8, 2021
@edenlightning edenlightning removed this from the 1.2 milestone Feb 9, 2021
Base automatically changed from release/1.2-dev to master February 11, 2021 14:31

stale bot commented Feb 27, 2021

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help see our docs: https://pytorch-lightning.readthedocs.io/en/latest/generated/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Slack. Thank you for your contributions.

@stale stale bot added the won't fix label Feb 27, 2021
@stale stale bot removed the won't fix label Mar 1, 2021
Contributor

@carmocca carmocca left a comment

LGTM

self.beta**(current_step + 1) avoids division by 0

IMO we don't really need a test for this
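
For context, a self-contained sketch of the bias correction this refers to (variable names are assumptions; only the 0-based step counter and the beta ** (current_step + 1) divisor come from this thread):

```python
# Why the '+1' is needed once current_step is 0-based: beta ** 0 == 1, so a
# divisor of 1 - beta ** current_step would be zero at the very first step.
beta = 0.98
avg_loss = 0.0

for current_step, current_loss in enumerate([2.5, 2.4, 2.6]):  # 0-based, like global_step
    avg_loss = beta * avg_loss + (1 - beta) * current_loss
    # divisor at step 0 is 1 - 0.98 ** 1 = 0.02, so no ZeroDivisionError
    smoothed_loss = avg_loss / (1 - beta ** (current_step + 1))
    print(current_step, smoothed_loss)
```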

@s-rog s-rog merged commit 523c59b into Lightning-AI:master Mar 9, 2021
@s-rog s-rog mentioned this pull request Mar 9, 2021
@carmocca carmocca mentioned this pull request Mar 15, 2021
SeanNaren pushed a commit that referenced this pull request Mar 16, 2021

* fixed bug where tuner would not tune lr if also tuning batch_size

* added a '+1' to computing the smoothed loss. This maintains the behavior for the smoothed loss as before the bug fix

* pep8 fix

* add changelog

Co-authored-by: chaton <[email protected]>
Co-authored-by: Carlos Mocholi <[email protected]>
Co-authored-by: Adrian Wälchli <[email protected]>
(cherry picked from commit 523c59b)
lexierule pushed a commit that referenced this pull request Mar 16, 2021

Labels

bug, tuner, waiting on author

Projects

None yet

Development

Successfully merging this pull request may close these issues.

lr_tuner fails if called after tuning batch_size

9 participants