
Conversation

@Borda Borda (Collaborator) commented Jan 20, 2021

What does this PR do?

Moving attributes to the main class, as they won't be deprecated.

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Check that target branch and milestone match!

Did you have fun?

Make sure you had fun coding 🙃

@Borda Borda added the refactor label Jan 20, 2021
@Borda Borda added this to the 1.2 milestone Jan 20, 2021
@Borda Borda requested a review from tchaton January 20, 2021 13:53
@tchaton tchaton (Contributor) left a comment

LGTM! Can you add it for tuning and evaluating?

@Borda Borda (Collaborator, Author) commented Jan 20, 2021

LGTM! Can you add it for tuning and evaluating?

Done. In fact, as the new API stays, I moved it to Trainer.

@Borda Borda marked this pull request as ready for review January 20, 2021 14:02
@Borda Borda added the design Includes a design discussion label Jan 20, 2021
@Borda Borda marked this pull request as draft January 20, 2021 16:38
@Borda Borda self-assigned this Jan 20, 2021
@Borda Borda (Collaborator, Author) commented Jan 20, 2021

Well, I'm not sure why these three tests are failing; any suggestions? @awaelchli

FAILED tests/checkpointing/test_model_checkpoint.py::test_checkpoint_repeated_strategy_extended[False]
FAILED tests/checkpointing/test_model_checkpoint.py::test_checkpoint_repeated_strategy_extended[True]
FAILED tests/loggers/test_csv.py::test_file_logger_log_hyperparams - yaml.con...

@Borda Borda marked this pull request as ready for review January 20, 2021 22:42
codecov bot commented Jan 20, 2021

Codecov Report

Merging #5589 (30f6795) into release/1.2-dev (671887f) will decrease coverage by 0%.
The diff coverage is 58%.

@@               Coverage Diff               @@
##           release/1.2-dev   #5589   +/-   ##
===============================================
- Coverage               93%     92%   -0%     
===============================================
  Files                  153     153           
  Lines                10815   10834   +19     
===============================================
- Hits                 10006   10005    -1     
- Misses                 809     829   +20     

@carmocca (Contributor)

I don't like the set_* functions. Users might think this changes the Trainer state (as in, to switch from training to evaluation) when it only changes a property.

@Borda Borda (Collaborator, Author) commented Jan 21, 2021

I don't like the set_* functions. Users might think this changes the Trainer state (as in, to switch from training to evaluation) when it only changes a property.

Ok, so what else do you suggest?

@carmocca (Contributor)

@is_testing.setter
def is_testing(self, value: bool):
    if value:
        self._running_stage = RunningStage.TESTING

@Borda Borda (Collaborator, Author) commented Jan 21, 2021

@is_testing.setter
def is_testing(self, value: bool):
    if value:
        self._running_stage = RunningStage.TESTING

Yes, that is a standard setter, but with this one it is unclear what it does if you pass something other than True/1.
We want something which does only one thing...
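
For illustration, a minimal, self-contained sketch of the ambiguity being discussed. The RunningStage values and the _running_stage attribute are assumed from the snippet above, not taken from the actual Lightning source:

from enum import Enum

class RunningStage(Enum):
    TRAINING = "train"
    TESTING = "test"

class Trainer:
    def __init__(self):
        self._running_stage = RunningStage.TRAINING

    @property
    def is_testing(self) -> bool:
        return self._running_stage == RunningStage.TESTING

    @is_testing.setter
    def is_testing(self, value: bool):
        # Only a truthy value has an effect; assigning False is silently
        # a no-op, which is exactly the ambiguity raised above.
        if value:
            self._running_stage = RunningStage.TESTING

trainer = Trainer()
trainer.is_testing = True    # switches the stage to TESTING
trainer.is_testing = False   # does nothing; the stage remains TESTING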

@tchaton tchaton (Contributor) left a comment

LGTM!

@Borda Borda added the ready PRs ready to be merged label Jan 21, 2021
@Borda Borda enabled auto-merge (squash) January 21, 2021 08:56
@awaelchli (Contributor)

You did not explain in the PR description what the motivation behind this renaming is, and it is not clear to me why it is necessary. When no issue is linked, it would be very helpful to do that, as some reviewers were not part of offline discussions.
Also, PyTorch has a different format, using setters, like so: model.training = True or model.train(True).
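
(For reference, a short sketch of the PyTorch pattern being referenced: nn.Module exposes a plain training attribute plus a train() method that sets it.)

import torch.nn as nn

model = nn.Linear(4, 2)
print(model.training)    # plain bool attribute; True by default
model.train(False)       # method form: sets model.training = False (same as model.eval())
model.training = True    # direct attribute assignment also works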

@Borda Borda added discussion In a discussion stage and removed ready PRs ready to be merged labels Jan 21, 2021
@Borda Borda (Collaborator, Author) commented Jan 21, 2021

OK, so @awaelchli suggests keeping them as attributes, correct?
My motivation was to clearly distinguish that it is a state attribute, but you have a good point about mirroring the PyTorch attributes.

@awaelchli (Contributor)

I guess my main concern was just that you are introducing two names, is_testing and set_testing, while in Python we do not often see this kind of getter and setter naming; the convention is (typically) to use the same name for the getter and the setter. Please correct me if I'm wrong. It's not a strong opinion, but I see some benefit in having naming closer to PyTorch :)
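
(To make the naming concern concrete, a self-contained sketch of both styles; the trainer classes here are hypothetical, not the Lightning API:)

# Two-name style (the concern): distinct getter and setter names.
class TrainerTwoNames:
    def __init__(self) -> None:
        self._testing = False

    @property
    def is_testing(self) -> bool:
        return self._testing

    def set_testing(self, value: bool) -> None:
        self._testing = value

# One-name style (the usual Python convention): one @property name for both.
class TrainerOneName:
    def __init__(self) -> None:
        self._testing = False

    @property
    def is_testing(self) -> bool:
        return self._testing

    @is_testing.setter
    def is_testing(self, value: bool) -> None:
        self._testing = value

a = TrainerTwoNames()
a.set_testing(True)     # read via .is_testing, write via set_testing(...)
b = TrainerOneName()
b.is_testing = True     # read and write through the same name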

@Borda Borda (Collaborator, Author) commented Jan 21, 2021

@awaelchli I agree that introducing new attributes is not a good way :]
One point/question: you referred to nn.Module having training/testing, but wouldn't it be rather confusing to have the same also for the trainer?

@tchaton @PyTorchLightning/core-contributors what is your point of view?

@awaelchli (Contributor)

So far I have not experienced or seen complaints about confusion between model and trainer attributes. But this is mainly because nn.Module only has a .training attribute (no testing), and on the other hand our Trainer has only a .testing attribute and no training. So it could be seen as confusing from different perspectives 😆

@Borda Borda (Collaborator, Author) commented Jan 21, 2021

So far I have not experienced or seen complaints about confusion between model and trainer attributes. But this is mainly because nn.Module only has a .training attribute (no testing), and on the other hand our Trainer has only a .testing attribute and no training. So it could be seen as confusing from different perspectives 😆

OK, I have reverted the rename and just moved the properties to the Trainer, as they won't be deprecated.
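
(A rough sketch of the resolved shape under that decision: the pre-existing property name stays and simply lives on Trainer itself, delegating to the same RunningStage/_running_stage machinery assumed in the sketch further up. Illustrative only, not the actual diff:)

class Trainer:
    @property
    def testing(self) -> bool:
        # same name as before the PR; no is_*/set_* pair is introduced
        return self._running_stage == RunningStage.TESTING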

@Borda Borda added the ready PRs ready to be merged label Jan 23, 2021
@tchaton tchaton (Contributor) left a comment

LGTM!

@Borda Borda merged commit 6386f45 into release/1.2-dev Jan 24, 2021
@Borda Borda deleted the refactor/trainer-is_ branch January 24, 2021 10:45