Commit 08f13d9

Merge 511945b into 07f24d2

File tree

2 files changed (+13 −28 lines)


CHANGELOG.md

Lines changed: 12 additions & 7 deletions
@@ -9,25 +9,25 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Added
 
-- Add support for summarized model total params size in megabytes ([#5590](https://github.com/PyTorchLightning/pytorch-lightning/pull/5590))
+- Added support for summarized model total params size in megabytes ([#5590](https://github.com/PyTorchLightning/pytorch-lightning/pull/5590))
 
 
-- Add Support for multiple train loaders ([#1959](https://github.com/PyTorchLightning/pytorch-lightning/pull/1959))
+- Added support for multiple train loaders ([#1959](https://github.com/PyTorchLightning/pytorch-lightning/pull/1959))
 
 
-- `Accuracy` metric now generalizes to Top-k accuracy for (multi-dimensional) multi-class inputs using the `top_k` parameter ([#4838](https://github.com/PyTorchLightning/pytorch-lightning/pull/4838))
+- Added `Accuracy` metric now generalizes to Top-k accuracy for (multi-dimensional) multi-class inputs using the `top_k` parameter ([#4838](https://github.com/PyTorchLightning/pytorch-lightning/pull/4838))
 
 
-- `Accuracy` metric now enables the computation of subset accuracy for multi-label or multi-dimensional multi-class inputs with the `subset_accuracy` parameter ([#4838](https://github.com/PyTorchLightning/pytorch-lightning/pull/4838))
+- Added `Accuracy` metric now enables the computation of subset accuracy for multi-label or multi-dimensional multi-class inputs with the `subset_accuracy` parameter ([#4838](https://github.com/PyTorchLightning/pytorch-lightning/pull/4838))
 
 
-- `HammingDistance` metric to compute the hamming distance (loss) ([#4838](https://github.com/PyTorchLightning/pytorch-lightning/pull/4838))
+- Added `HammingDistance` metric to compute the hamming distance (loss) ([#4838](https://github.com/PyTorchLightning/pytorch-lightning/pull/4838))
 
 
 - Added `max_fpr` parameter to `auroc` metric for computing partial auroc metric ([#3790](https://github.com/PyTorchLightning/pytorch-lightning/pull/3790))
 
 
-- `StatScores` metric to compute the number of true positives, false positives, true negatives and false negatives ([#4839](https://github.com/PyTorchLightning/pytorch-lightning/pull/4839))
+- Added `StatScores` metric to compute the number of true positives, false positives, true negatives and false negatives ([#4839](https://github.com/PyTorchLightning/pytorch-lightning/pull/4839))
 
 
 - Added `R2Score` metric ([#5241](https://github.com/PyTorchLightning/pytorch-lightning/pull/5241))
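The `top_k` entry above refers to Top-k accuracy: a prediction counts as correct when the target class appears among the k highest-scoring classes for that sample. A plain-Python sketch of the concept for intuition only; this is not the pytorch-lightning `Accuracy` API, and the function name is hypothetical:

```python
from typing import Sequence


def top_k_accuracy(scores: Sequence[Sequence[float]], targets: Sequence[int], k: int) -> float:
    """Fraction of samples whose target class is among the k highest scores.

    Illustrative only -- not the pytorch-lightning implementation.
    """
    hits = 0
    for row, target in zip(scores, targets):
        # Indices of the k largest scores for this sample.
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += target in top_k
    return hits / len(targets)


scores = [
    [0.1, 0.7, 0.2],  # best guess: class 1
    [0.5, 0.2, 0.3],  # best guess: class 0
    [0.2, 0.3, 0.5],  # best guess: class 2
]
targets = [1, 2, 2]
print(top_k_accuracy(scores, targets, k=1))  # 2/3: the middle sample's top guess is wrong
print(top_k_accuracy(scores, targets, k=2))  # 1.0: every target is within the top two
```

With `k=1` this reduces to ordinary accuracy; larger `k` relaxes the criterion, which is what the `top_k` parameter controls.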
@@ -109,7 +109,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Deprecated
 
-- `stat_scores_multiple_classes` is deprecated in favor of `stat_scores` ([#4839](https://github.com/PyTorchLightning/pytorch-lightning/pull/4839))
+- Function `stat_scores_multiple_classes` is deprecated in favor of `stat_scores` ([#4839](https://github.com/PyTorchLightning/pytorch-lightning/pull/4839))
 
 
 - Moved accelerators and plugins to its `legacy` pkg ([#5645](https://github.com/PyTorchLightning/pytorch-lightning/pull/5645))
@@ -129,6 +129,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Removed deprecated `EvalResult` ([#5633](https://github.com/PyTorchLightning/pytorch-lightning/pull/5633))
 
 
+- Removed `LoggerStages` ([#5673](https://github.com/PyTorchLightning/pytorch-lightning/pull/5673))
+
+
 ### Fixed
 
 - Fixed distributed setting and `ddp_cpu` only with `num_processes>1` ([#5297](https://github.com/PyTorchLightning/pytorch-lightning/pull/5297))
@@ -145,6 +148,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Fixed loading yaml ([#5619](https://github.com/PyTorchLightning/pytorch-lightning/pull/5619))
 
+
+
 ## [1.1.4] - YYYY-MM-DD
 
 ### Added

pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py

Lines changed: 1 addition & 21 deletions
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 from collections import defaultdict
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Dict, List, Optional
 
 import torch
 
@@ -21,26 +21,6 @@
 from pytorch_lightning.utilities import DistributedType, LightningEnum
 
 
-class LoggerStages(LightningEnum):
-    """ Train/validation/test phase in each training step.
-    >>> # you can math the type with string
-    >>> LoggerStages.TRAIN == 'train'
-    True
-    """
-    TRAIN = "train"
-    VAL = "validation"
-    TEST = "test"
-
-    @staticmethod
-    def determine_stage(stage_or_testing: Union[str, bool]) -> 'LoggerStages':
-        if isinstance(stage_or_testing, str) and stage_or_testing in list(LoggerStages):
-            return LoggerStages(stage_or_testing)
-        if isinstance(stage_or_testing, (bool, int)):
-            # stage_or_testing is trainer.testing
-            return LoggerStages.TEST if bool(stage_or_testing) else LoggerStages.VAL
-        raise RuntimeError(f"Invalid stage {stage_or_testing} of type {type(stage_or_testing)} given")
-
-
 class ResultStoreType(LightningEnum):
     INSIDE_BATCH_TRAIN_LOOP = "inside_batch_train_loop"
     OUTSIDE_BATCH_TRAIN_LOOP = "outside_batch_train_loop"
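The removed `LoggerStages` relies on members comparing equal to plain strings, which is why its doctest `LoggerStages.TRAIN == 'train'` holds. A minimal self-contained sketch of that pattern, assuming `LightningEnum` behaves as a `str`/`Enum` mixin; the `StrEnum` stand-in below is illustrative, not the library class:

```python
from enum import Enum
from typing import Union


class StrEnum(str, Enum):
    """Illustrative stand-in for LightningEnum: because the enum also
    subclasses ``str``, members compare equal to their string values."""


class LoggerStages(StrEnum):
    # Mirrors the removed enum: train/validation/test phase of a step.
    TRAIN = "train"
    VAL = "validation"
    TEST = "test"

    @staticmethod
    def determine_stage(stage_or_testing: Union[str, bool]) -> "LoggerStages":
        # A string naming a stage maps straight to its member; the
        # ``in list(...)`` check works via the str-mixin equality above.
        if isinstance(stage_or_testing, str) and stage_or_testing in list(LoggerStages):
            return LoggerStages(stage_or_testing)
        # A bool (trainer.testing) selects TEST or VAL.
        if isinstance(stage_or_testing, (bool, int)):
            return LoggerStages.TEST if bool(stage_or_testing) else LoggerStages.VAL
        raise RuntimeError(f"Invalid stage {stage_or_testing} of type {type(stage_or_testing)}")


print(LoggerStages.TRAIN == "train")             # True
print(LoggerStages.determine_stage(True).value)  # test
```

The str mixin is what makes both the string comparison and the `LoggerStages(stage_or_testing)` value lookup work without any custom `__eq__`.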
