deprecate passing ModelCheckpoint instance to Trainer(checkpoint_callback=...) #4336
```diff
@@ -85,7 +85,7 @@ class Trainer(
     def __init__(
         self,
         logger: Union[LightningLoggerBase, Iterable[LightningLoggerBase], bool] = True,
-        checkpoint_callback: Union[ModelCheckpoint, bool] = True,
+        checkpoint_callback: bool = True,
```
Contributor
What about keeping the
Contributor (Author)
That doesn't solve the problem I'm trying to solve here, which is to eliminate ambiguity when restoring the state of the trainer; see the answer to the 2nd FAQ question.

With this PR's proposal, the value will be ignored if you pass in a custom one. False is only needed when you want to disable checkpointing completely. I believe I have this covered in a test.
Contributor

I agree with @carmocca, this is super confusing when adding my own checkpoint callback. Given how loose the default checkpoint callback is, and with the coming customizations, I'd rather drop the

I also think that's a nice message for users: "See how extensible this framework is" vs. "look at all the magic this trainer configures for you which you can't change."

Even if that's not in this PR, it feels inevitable that
Contributor (Author)

Yes, fine with me; I don't have a strong preference here. It looks like a lot of API change, but it is really more of a bugfix.
```diff
         callbacks: Optional[List[Callback]] = None,
         default_root_dir: Optional[str] = None,
         gradient_clip_val: float = 0,
```
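For readers skimming the thread, the practical effect of this signature change looks roughly like the sketch below. The `monitor` value and variable names are illustrative; only the bool-vs-ModelCheckpoint distinction comes from this diff.

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Deprecated by this PR: passing a ModelCheckpoint instance directly.
# trainer = Trainer(checkpoint_callback=ModelCheckpoint(monitor="val_loss"))

# Recommended instead: register the ModelCheckpoint via `callbacks`;
# `checkpoint_callback` stays a plain bool that toggles checkpointing.
checkpoint = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpoint], checkpoint_callback=True)

# `False` is only needed to disable checkpointing completely.
trainer_without_checkpoints = Trainer(checkpoint_callback=False)
```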
```diff
@@ -169,7 +169,12 @@ def __init__(
 
             callbacks: Add a list of callbacks.
 
-            checkpoint_callback: Callback for checkpointing.
+            checkpoint_callback: If ``True``, enable checkpointing.
+                It will configure a default ModelCheckpoint callback if there is no user-defined ModelCheckpoint in
+                :paramref:`~pytorch_lightning.trainer.trainer.Trainer.callbacks`. Default: ``True``.
+
+                .. warning:: Passing a ModelCheckpoint instance to this argument is deprecated since
+                   v1.1.0 and will be unsupported from v1.4.0.
 
             check_val_every_n_epoch: Check val every n train epochs.
```
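The updated docstring promises that a default ModelCheckpoint is configured only when the user has not already put one in `callbacks`. A minimal sketch of that selection logic follows; the helper name and structure are assumptions for illustration, not the connector's real API.

```python
from typing import List

from pytorch_lightning.callbacks import Callback, ModelCheckpoint


def configure_checkpoint_callbacks(callbacks: List[Callback], checkpoint_callback: bool) -> List[Callback]:
    """Sketch of the documented behaviour: add a default ModelCheckpoint only if needed."""
    has_user_checkpoint = any(isinstance(cb, ModelCheckpoint) for cb in callbacks)
    if checkpoint_callback and not has_user_checkpoint:
        # No user-defined ModelCheckpoint in `callbacks`: configure the default one.
        callbacks = callbacks + [ModelCheckpoint()]
    return callbacks
```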
```diff
@@ -297,7 +302,6 @@ def __init__(
 
         # init callbacks
         # Declare attributes to be set in callback_connector on_trainer_init
-        self.checkpoint_callback: Union[ModelCheckpoint, bool] = checkpoint_callback
         self.callback_connector.on_trainer_init(
             callbacks,
             checkpoint_callback,
```
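With the instance now living in `trainer.callbacks` rather than being passed through `checkpoint_callback`, resolving the active checkpoint callback amounts to scanning the callbacks list. A hypothetical helper illustrating that lookup (not code from this PR):

```python
from typing import Optional

from pytorch_lightning.callbacks import ModelCheckpoint


def find_checkpoint_callback(trainer) -> Optional[ModelCheckpoint]:
    """Hypothetical helper: return the first ModelCheckpoint registered on the trainer, if any."""
    for callback in trainer.callbacks:
        if isinstance(callback, ModelCheckpoint):
            return callback
    return None
```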
```diff
@@ -144,7 +144,6 @@ def __scale_batch_reset_params(trainer, model, steps_per_trial):
     trainer.weights_summary = None  # not needed before full run
     trainer.logger = DummyLogger()
     trainer.callbacks = []  # not needed before full run
-    trainer.checkpoint_callback = False  # required for saving
```
Contributor (Author)

I removed these from the Tuner because ModelCheckpoint now entirely lives in the callbacks list, which is already properly backed up by the Tuner.
Contributor (Author)

cc @SkafteNicki
```diff
     trainer.limit_train_batches = 1.0
     trainer.optimizers, trainer.schedulers = [], []  # required for saving
     trainer.model = model  # required for saving
```
```diff
@@ -157,7 +156,6 @@ def __scale_batch_restore_params(trainer):
     trainer.weights_summary = trainer.__dumped_params['weights_summary']
     trainer.logger = trainer.__dumped_params['logger']
     trainer.callbacks = trainer.__dumped_params['callbacks']
-    trainer.checkpoint_callback = trainer.__dumped_params['checkpoint_callback']
     trainer.auto_scale_batch_size = trainer.__dumped_params['auto_scale_batch_size']
     trainer.limit_train_batches = trainer.__dumped_params['limit_train_batches']
     trainer.model = trainer.__dumped_params['model']
```
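The comment above argues these lines are redundant because the checkpoint callback is now just another entry in `trainer.callbacks`, which the tuner already dumps and restores. A simplified sketch of that backup/restore pattern, with function names and the set of saved attributes chosen for illustration rather than copied from the tuner:

```python
def dump_params(trainer) -> dict:
    # Back up everything the batch-size finder temporarily overrides. `callbacks`
    # now also carries any ModelCheckpoint, so no separate checkpoint entry is kept.
    return {
        "max_steps": trainer.max_steps,
        "logger": trainer.logger,
        "callbacks": trainer.callbacks,
        "limit_train_batches": trainer.limit_train_batches,
    }


def restore_params(trainer, dumped: dict) -> None:
    # Restoring `callbacks` brings back the user's ModelCheckpoint as well.
    trainer.max_steps = dumped["max_steps"]
    trainer.logger = dumped["logger"]
    trainer.callbacks = dumped["callbacks"]
    trainer.limit_train_batches = dumped["limit_train_batches"]
```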