CHANGELOG.md

The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Enabled automatic parameters tying for TPUs ([#9525](https://github.com/PyTorchLightning/pytorch-lightning/pull/9525))
- Raise a `MisconfigurationException` when trainer functions are called with `ckpt_path="best"` but `checkpoint_callback` isn't configured ([#9841](https://github.com/PyTorchLightning/pytorch-lightning/pull/9841))
- Added support for `torch.autograd.set_detect_anomaly` through `Trainer` constructor argument `detect_anomaly` ([#9848](https://github.com/PyTorchLightning/pytorch-lightning/pull/9848))
- Added `enable_model_summary` flag to Trainer ([#9699](https://github.com/PyTorchLightning/pytorch-lightning/pull/9699))
- Added `strategy` argument to Trainer ([#8597](https://github.com/PyTorchLightning/pytorch-lightning/pull/8597))
- Changed `HorovodPlugin.all_gather` to return a `torch.Tensor` instead of a list ([#9696](https://github.com/PyTorchLightning/pytorch-lightning/pull/9696))
- Changed Trainer connectors to be protected attributes:
- Restore `current_epoch` and `global_step` irrespective of trainer task ([#9413](https://github.com/PyTorchLightning/pytorch-lightning/pull/9413))
- Update the logic to check for accumulation steps with deepspeed ([#9826](https://github.com/PyTorchLightning/pytorch-lightning/pull/9826))
- Updated error message for interactive incompatible plugins ([#9896](https://github.com/PyTorchLightning/pytorch-lightning/pull/9896))
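
Several of the entries above add new `Trainer` constructor arguments (`strategy`, `detect_anomaly`, `enable_model_summary`). A minimal sketch of how they can be combined; the values shown are illustrative assumptions, not defaults stated in the changelog:

```python
from pytorch_lightning import Trainer

trainer = Trainer(
    strategy="ddp",             # new `strategy` argument (#8597); "ddp" is just an example value
    detect_anomaly=True,        # enables torch.autograd.set_detect_anomaly during training (#9848)
    enable_model_summary=True,  # new flag controlling the model summary (#9699)
)
```
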
### Deprecated
- Deprecated trainer argument `terminate_on_nan` in favour of `detect_anomaly` ([#9175](https://github.com/PyTorchLightning/pytorch-lightning/pull/9175))
- Deprecated `Trainer.terminate_on_nan` public attribute access ([#9849](https://github.com/PyTorchLightning/pytorch-lightning/pull/9849))
- Deprecated `LightningModule.summarize()` in favor of `pytorch_lightning.utilities.model_summary.summarize()`
- Deprecated Accelerator collective API `barrier`, `broadcast`, and `all_gather`, call `TrainingTypePlugin` collective API directly ([#9677](https://github.com/PyTorchLightning/pytorch-lightning/pull/9677))
- Deprecated `checkpoint_callback` from the `Trainer` constructor in favour of `enable_checkpointing` ([#9754](https://github.com/PyTorchLightning/pytorch-lightning/pull/9754))
- Deprecated the `LightningModule.on_post_move_to_device` method ([#9525](https://github.com/PyTorchLightning/pytorch-lightning/pull/9525))
- Deprecated `pytorch_lightning.core.decorators.parameter_validation` in favor of `pytorch_lightning.utilities.parameter_tying.set_shared_parameters` ([#9525](https://github.com/PyTorchLightning/pytorch-lightning/pull/9525))
- Deprecated passing `weights_summary` to the `Trainer` constructor in favor of adding the `ModelSummary` callback with `max_depth` directly to the list of callbacks ([#9699](https://github.com/PyTorchLightning/pytorch-lightning/pull/9699))
- Removed a redundant warning with `ModelCheckpoint(monitor=None)` callback ([#9875](https://github.com/PyTorchLightning/pytorch-lightning/pull/9875))
- Remove `epoch` from `trainer.logged_metrics` ([#9904](https://github.com/PyTorchLightning/pytorch-lightning/pull/9904))
- Removed `should_rank_save_checkpoint` property from Trainer ([#9433](https://github.com/PyTorchLightning/pytorch-lightning/pull/9433))
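
Taken together, the deprecation entries above imply a migration of several `Trainer` constructor arguments. A rough sketch, assuming the replacements named in the entries (the old and new options are not semantically identical in every detail; see the linked PRs):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelSummary

# Previously: Trainer(terminate_on_nan=True, checkpoint_callback=True, weights_summary="top")
trainer = Trainer(
    detect_anomaly=True,                    # replaces `terminate_on_nan`
    enable_checkpointing=True,              # replaces `checkpoint_callback`
    callbacks=[ModelSummary(max_depth=1)],  # replaces `weights_summary`
)
```
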
### Fixed
- Fixed `BasePredictionWriter` not returning the batch_indices in a non-distributed setting ([#9432](https://github.com/PyTorchLightning/pytorch-lightning/pull/9432))
- Fixed an error when running in XLA environments with no TPU attached ([#9572](https://github.com/PyTorchLightning/pytorch-lightning/pull/9572))
- Fixed the check on logged torchmetrics whose `compute()` output is a multi-element tensor ([#9582](https://github.com/PyTorchLightning/pytorch-lightning/pull/9582))
- Fixed missing arguments when saving hyperparameters from the parent class but not from the child class ([#9800](https://github.com/PyTorchLightning/pytorch-lightning/pull/9800))
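
The last entry concerns the pattern of calling `save_hyperparameters()` in a parent `LightningModule` that is then subclassed with extra constructor arguments. A minimal sketch of that pattern; the class names and arguments are hypothetical:

```python
import pytorch_lightning as pl


class BaseModel(pl.LightningModule):
    def __init__(self, lr: float = 1e-3):
        super().__init__()
        # Hyperparameters are collected here, in the parent class only.
        self.save_hyperparameters()


class ChildModel(BaseModel):
    # The child adds `hidden_dim` but does not call save_hyperparameters() itself;
    # the fix above addresses arguments going missing in this situation.
    def __init__(self, lr: float = 1e-3, hidden_dim: int = 64):
        super().__init__(lr=lr)
        self.hidden_dim = hidden_dim


model = ChildModel(hidden_dim=128)
print(model.hparams)
```
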
default_root_dir
^^^^^^^^^^^^^^^^

Default path for logs and weights when no logger or
:class:`pytorch_lightning.callbacks.ModelCheckpoint` callback is passed. On
certain clusters you might want to separate where logs and checkpoints are
stored. If you don't need to separate them, use this argument for convenience.
Paths can be local paths or remote paths such as ``s3://bucket/path`` or
``hdfs://path/``. Credentials will need to be set up to use remote filepaths.

.. testcode::

    # default used by the Trainer
    trainer = Trainer(default_root_dir=os.getcwd())
distributed_backend
^^^^^^^^^^^^^^^^^^^

Deprecated: This has been renamed ``accelerator``.
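
A minimal sketch of the rename; ``"ddp"`` is only an illustrative value:

.. testcode::

    # previously: Trainer(distributed_backend="ddp")
    trainer = Trainer(accelerator="ddp")
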
fast_dev_run
^^^^^^^^^^^^
weights_summary
^^^^^^^^^^^^^^^
.. warning:: `weights_summary` is deprecated in v1.5 and will be removed in v1.7. Please pass
    :class:`~pytorch_lightning.callbacks.model_summary.ModelSummary` directly to the Trainer's
    ``callbacks`` argument instead. To disable the model summary, pass ``enable_model_summary=False``
    to the Trainer.
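
A minimal sketch of the replacement suggested above; treating ``ModelSummary(max_depth=1)`` as the rough equivalent of the old ``weights_summary="top"`` is an assumption here, not a statement from the warning:

.. testcode::

    from pytorch_lightning.callbacks import ModelSummary

    # roughly the old `weights_summary="top"` behaviour
    trainer = Trainer(callbacks=[ModelSummary(max_depth=1)])

    # disable the model summary entirely
    trainer = Trainer(enable_model_summary=False)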