# Changelog

All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
## [1.4.0] - 2021-MM-DD
### Added
- Added support to `LightningModule.to_torchscript` for saving to custom filesystems with fsspec ([#7617](https://github.com/PyTorchLightning/pytorch-lightning/pull/7617))
- Added `KubeflowEnvironment` for use with the `PyTorchJob` operator in Kubeflow
- Added LightningCLI support for config files on object stores ([#7521](https://github.com/PyTorchLightning/pytorch-lightning/pull/7521))
- Added `ModelPruning(prune_on_train_epoch_end=True|False)` to choose when to apply pruning ([#7704](https://github.com/PyTorchLightning/pytorch-lightning/pull/7704))
- Added support for checkpointing based on a provided time interval during training ([#7515](https://github.com/PyTorchLightning/pytorch-lightning/pull/7515))
- Added `clip_grad_by_value` support for TPUs ([#7025](https://github.com/PyTorchLightning/pytorch-lightning/pull/7025))
- Added `sub_dir` parameter to `TensorBoardLogger` ([#6195](https://github.com/PyTorchLightning/pytorch-lightning/pull/6195))
- Added correct `dataloader_idx` to batch transfer hooks ([#6241](https://github.com/PyTorchLightning/pytorch-lightning/pull/6241))
- Added `ddp_fully_sharded` support ([#7487](https://github.com/PyTorchLightning/pytorch-lightning/pull/7487))
- Added `__len__` to `IndexBatchSamplerWrapper` ([#7681](https://github.com/PyTorchLightning/pytorch-lightning/pull/7681))
- Added `should_rank_save_checkpoint` property to Training Plugins ([#7684](https://github.com/PyTorchLightning/pytorch-lightning/pull/7684))
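As one concrete illustration of the entries above, here is a minimal sketch of why adding `__len__` to a batch-sampler wrapper matters. The classes below are simplified stand-ins, not the actual Lightning implementation:

```python
# Simplified stand-in classes, NOT the actual Lightning implementation;
# they only illustrate why delegating __len__ matters.

class SimpleBatchSampler:
    """Yields index batches of size `batch_size` over `n_items` items."""

    def __init__(self, n_items, batch_size):
        self.n_items = n_items
        self.batch_size = batch_size

    def __iter__(self):
        batch = []
        for i in range(self.n_items):
            batch.append(i)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if batch:  # final partial batch
            yield batch

    def __len__(self):
        return -(-self.n_items // self.batch_size)  # ceil division


class IndexBatchSamplerWrapper:
    """Wraps a batch sampler and records the index batches it yields."""

    def __init__(self, sampler):
        self._sampler = sampler
        self.batch_indices = []

    def __iter__(self):
        for batch in self._sampler:
            self.batch_indices.append(batch)
            yield batch

    def __len__(self):
        # Without this delegation, len(wrapper) raises TypeError and
        # anything sizing an epoch (e.g. a progress bar) breaks.
        return len(self._sampler)


wrapper = IndexBatchSamplerWrapper(SimpleBatchSampler(n_items=10, batch_size=3))
print(len(wrapper))  # 4: three full batches plus one partial batch
```

Consuming the wrapper with `list(wrapper)` yields `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]` while recording the same batches in `batch_indices`.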
### Changed

### Fixed

- Fixed training loop total batch counter when accumulate grad batches was enabled ([#7692](https://github.com/PyTorchLightning/pytorch-lightning/pull/7692))
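The batch-counter fix can be pictured with a toy loop. This is a hypothetical sketch, not Lightning's actual training loop: the point is that the counter must be incremented before the early exit that skips optimizer steps while gradients are still being accumulated.

```python
def run_epoch(n_batches, accumulate_grad_batches):
    """Toy loop illustrating counter ordering (hypothetical sketch,
    not Lightning's real training loop)."""
    total_batch_idx = 0
    optimizer_steps = 0
    for batch_idx in range(n_batches):
        # Count the batch BEFORE the accumulation early exit; counting
        # after it would undercount whenever the step below is skipped.
        total_batch_idx += 1
        if (batch_idx + 1) % accumulate_grad_batches != 0:
            continue  # still accumulating gradients: skip optimizer step
        optimizer_steps += 1
    return total_batch_idx, optimizer_steps


print(run_epoch(n_batches=10, accumulate_grad_batches=2))  # (10, 5)
```

Every batch is counted, while only every second batch triggers an optimizer step.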
## [1.3.4] - 2021-06-01
### Changed
- Updated pre-commit and added new hooks ([#7781](https://github.com/PyTorchLightning/pytorch-lightning/pull/7781))
### Fixed
## [1.3.3] - 2021-05-27
### Changed
- Moved parameter validation specific to TPU Training plugins ([#7415](https://github.com/PyTorchLightning/pytorch-lightning/pull/7415))
- Overrode `broadcast_object_list` for `torch<1.8` ([#7592](https://github.com/PyTorchLightning/pytorch-lightning/pull/7592))
- Cleared `predict_progress_bar` in `__getstate__` of `ProgressBar` ([#7608](https://github.com/PyTorchLightning/pytorch-lightning/pull/7608))
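The `__getstate__` entry follows a common Python pattern: drop unpicklable attributes when an object's state is captured. A minimal sketch with a hypothetical stand-in class, not Lightning's actual `ProgressBar`:

```python
import pickle


class ProgressBarSketch:
    """Hypothetical stand-in for a progress-bar object (NOT Lightning's
    ProgressBar); it only demonstrates the __getstate__ pattern."""

    def __init__(self):
        self.refresh_rate = 1
        # Stand-in for a live console/tqdm handle: lambdas are not
        # picklable, just as a real terminal handle is not.
        self.predict_progress_bar = lambda: None

    def __getstate__(self):
        state = self.__dict__.copy()
        state["predict_progress_bar"] = None  # cleared before pickling
        return state


bar = ProgressBarSketch()
restored = pickle.loads(pickle.dumps(bar))  # succeeds thanks to __getstate__
print(restored.predict_progress_bar)  # None
```

Without the `__getstate__` override, `pickle.dumps(bar)` would raise because of the unpicklable attribute.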
### Fixed
- Incremented the total batch idx before the accumulation early exit ([#7692](https://github.com/PyTorchLightning/pytorch-lightning/pull/7692))
- Fixed global step update when the epoch is skipped ([#7677](https://github.com/PyTorchLightning/pytorch-lightning/pull/7677))
- Fixed progress bar print error when called before training ([#7674](https://github.com/PyTorchLightning/pytorch-lightning/pull/7674))
- Fixed dataloaders not being reset when tuning the model ([#7566](https://github.com/PyTorchLightning/pytorch-lightning/pull/7566))