5 changes: 4 additions & 1 deletion CHANGELOG.md
@@ -147,7 +147,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Deprecated `DataModule` properties: `train_transforms`, `val_transforms`, `test_transforms`, `size`, `dims` ([#8851](https://github.com/PyTorchLightning/pytorch-lightning/pull/8851))


- Deprecated `prepare_data_per_node` flag on Trainer and set it as a property of `DataHooks`, accessible in the `LightningModule` and `LightningDataModule` [#8958](https://github.com/PyTorchLightning/pytorch-lightning/pull/8958)
- Deprecated `prepare_data_per_node` flag on Trainer and set it as a property of `DataHooks`, accessible in the `LightningModule` and `LightningDataModule` ([#8958](https://github.com/PyTorchLightning/pytorch-lightning/pull/8958))


- Deprecated `log_gpu_memory` flag on the Trainer in favor of passing the `GPUStatsMonitor` callback to the Trainer ([#9124](https://github.com/PyTorchLightning/pytorch-lightning/pull/9124/))


### Removed
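The migration this entry describes amounts to replacing the Trainer flag with a callback. A minimal before/after sketch, assuming a single-GPU run (`GPUStatsMonitor` needs a GPU to monitor and is importable from `pytorch_lightning.callbacks`):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GPUStatsMonitor

# Deprecated as of v1.5 (removal planned for v1.7):
# trainer = Trainer(log_gpu_memory="min_max", gpus=1)

# Preferred: attach the stats callback explicitly.
trainer = Trainer(callbacks=[GPUStatsMonitor()], gpus=1)
```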
pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py
@@ -24,11 +24,17 @@
from pytorch_lightning.utilities.apply_func import apply_to_collection, move_data_to_device
from pytorch_lightning.utilities.metrics import metrics_to_scalars
from pytorch_lightning.utilities.types import _EVALUATE_OUTPUT
from pytorch_lightning.utilities.warnings import rank_zero_deprecation


class LoggerConnector:
    def __init__(self, trainer: "pl.Trainer", log_gpu_memory: Optional[str] = None) -> None:
        self.trainer = trainer
        if log_gpu_memory is not None:
            rank_zero_deprecation(
                "Setting `log_gpu_memory` with the trainer flag is deprecated and will be removed in v1.7.0! "
                "Please monitor GPU stats with the `GPUStatsMonitor` callback directly instead."
            )
        self.log_gpu_memory = log_gpu_memory
        self.eval_loop_results = []
        self._val_log_step: int = 0

Review comment (Contributor): Do you plan on adding `log_gpu_memory` to the callback in another PR?

Review comment (Contributor): If yes, it should be done the other way around.
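The guard in this hunk warns only when `log_gpu_memory` is explicitly supplied, so users who rely on the default never see the message. A standalone sketch of that warn-on-use pattern, substituting the stdlib `warnings` module for Lightning's rank-zero-aware helper (`ConnectorSketch` is a hypothetical name, not part of the codebase):

```python
import warnings
from typing import Optional


class ConnectorSketch:
    """Hypothetical class illustrating the warn-only-when-set pattern."""

    def __init__(self, log_gpu_memory: Optional[str] = None) -> None:
        if log_gpu_memory is not None:
            # The default (None) stays silent; only callers who pass the
            # deprecated flag are warned.
            warnings.warn(
                "`log_gpu_memory` is deprecated; use the `GPUStatsMonitor` callback instead.",
                DeprecationWarning,
            )
        self.log_gpu_memory = log_gpu_memory
```

In the real connector, `rank_zero_deprecation` additionally restricts the warning to rank zero, so multi-process runs do not emit it once per worker.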
4 changes: 4 additions & 0 deletions pytorch_lightning/trainer/trainer.py
@@ -236,6 +236,10 @@ def __init__(

log_gpu_memory: None, 'min_max', 'all'. Might slow performance.

    .. deprecated:: v1.5
        Deprecated in v1.5.0 and will be removed in v1.7.0.
        Please use the ``GPUStatsMonitor`` callback directly instead.

log_every_n_steps: How often to log within steps (defaults to every 50 steps).

prepare_data_per_node: If True, each LOCAL_RANK=0 will call prepare data.
7 changes: 7 additions & 0 deletions tests/deprecated_api/test_remove_1-7.py
@@ -87,3 +87,10 @@ def test_v1_7_0_trainer_prepare_data_per_node(tmpdir):
match="Setting `prepare_data_per_node` with the trainer flag is deprecated and will be removed in v1.7.0!"
):
_ = Trainer(prepare_data_per_node=False)


def test_v1_7_0_trainer_log_gpu_memory(tmpdir):
    with pytest.deprecated_call(
        match="Setting `log_gpu_memory` with the trainer flag is deprecated and will be removed in v1.7.0!"
    ):
        _ = Trainer(log_gpu_memory="min_max")
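As a usage note, `pytest.deprecated_call(match=...)` treats `match` as a regular expression searched against the warning message, and it accepts Lightning's deprecation warning because that class derives from `DeprecationWarning`. The same warning can be captured without pytest; a rough sketch, assuming this revision of pytorch-lightning is installed and running in a single process (so the rank-zero check passes):

```python
import warnings

from pytorch_lightning import Trainer

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    Trainer(log_gpu_memory="min_max")

# Expect a captured warning that mentions the deprecated flag.
assert any("log_gpu_memory" in str(w.message) for w in caught)
```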