Commit 593ae70

ananthsub authored and SeanNaren committed
Remove hardcoding of rank_zero_only.rank in accelerator connector (#6878)
(cherry picked from commit 968ac09)
1 parent 9799cfd commit 593ae70
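
The change drops the accelerator connector's hardcoded seeding of `rank_zero_only.rank` from `LOCAL_RANK` in favour of the SLURM/torchelastic-aware defaults introduced in #5715. A minimal sketch of that general pattern, assuming a hypothetical `_infer_global_rank` helper and lookup order (this is not Lightning's actual implementation):

```python
# Sketch only: derive the default global rank from scheduler-provided
# environment variables instead of hardcoding it from LOCAL_RANK in the
# accelerator connector. Helper name and lookup order are assumptions.
import os

def _infer_global_rank() -> int:
    # torchelastic exports RANK; SLURM exports SLURM_PROCID; LOCAL_RANK is
    # only a per-node index and is used here as a last resort.
    for var in ("RANK", "SLURM_PROCID", "LOCAL_RANK"):
        if var in os.environ:
            return int(os.environ[var])
    return 0  # single-process fallback

rank = _infer_global_rank()
```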

File tree: 2 files changed (+3, −8 lines)

CHANGELOG.md

Lines changed: 3 additions & 1 deletion
@@ -170,7 +170,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Fixed
 
-- Set better defaults for `rank_zero_only.rank` when training is launched with SLURM and torchelastic ([#6802](https://github.com/PyTorchLightning/pytorch-lightning/pull/6802/))
+- Set better defaults for `rank_zero_only.rank` when training is launched with SLURM and torchelastic:
+    * Support SLURM and torchelastic global rank environment variables ([#5715](https://github.com/PyTorchLightning/pytorch-lightning/pull/5715))
+    * Remove hardcoding of local rank in accelerator connector ([#6878](https://github.com/PyTorchLightning/pytorch-lightning/pull/6878))
 
 
 - Sanitize `None` params during pruning ([#6836](https://github.com/PyTorchLightning/pytorch-lightning/pull/6836))
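
For context, `rank_zero_only.rank` is what the `rank_zero_only` decorator consults when deciding whether to run a function at all, which is why a correct global-rank default matters under SLURM and torchelastic. A small usage sketch (the job setup is assumed, not part of this commit):

```python
# Illustrative only: rank-zero-gated logging. The decorator suppresses the
# call on every process whose global rank is not 0.
from pytorch_lightning.utilities import rank_zero_only

@rank_zero_only
def log_once(message: str) -> None:
    # Runs only on the process with rank_zero_only.rank == 0.
    print(message)

log_once("This should appear exactly once per job, not once per process.")
```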

pytorch_lightning/trainer/connectors/accelerator_connector.py

Lines changed: 0 additions & 7 deletions
@@ -53,7 +53,6 @@
     device_parser,
     DeviceType,
     DistributedType,
-    rank_zero_only,
 )
 from pytorch_lightning.utilities.distributed import rank_zero_info, rank_zero_warn
 from pytorch_lightning.utilities.exceptions import MisconfigurationException
@@ -106,12 +105,6 @@ def __init__(
         self._training_type_plugin: Optional[TrainingTypePlugin] = None
         self._cluster_environment: Optional[ClusterEnvironment] = None
 
-        # init the default rank if exists
-        # we need to call this here or NVIDIA flags and other messaging in init will show on all ranks
-        # this way we only show it on rank 0
-        if "LOCAL_RANK" in os.environ:
-            rank_zero_only.rank = int(os.environ["LOCAL_RANK"])
-
         # for gpus allow int, string and gpu list
         if auto_select_gpus and isinstance(gpus, int):
            self.gpus = pick_multiple_gpus(gpus)
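
The deleted block seeded `rank_zero_only.rank` from `LOCAL_RANK`, which is a per-node index: in a multi-node job every node has a process with `LOCAL_RANK == 0`, and under SLURM the variable may not be set at all. A minimal sketch of why gating on the local rank misfires, using hypothetical job dimensions chosen only for illustration:

```python
# Hypothetical 2-node x 4-GPU job (8 processes); numbers are illustrative,
# not taken from the commit.
world_size = 8
gpus_per_node = 4

for global_rank in range(world_size):
    local_rank = global_rank % gpus_per_node   # what LOCAL_RANK reports
    node = global_rank // gpus_per_node
    if local_rank == 0:
        # Gating on LOCAL_RANK: one process per node passes this check, so
        # "rank zero only" output would be duplicated once per node.
        print(f"node {node}: local_rank 0 but global_rank {global_rank}")
```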
