
Commit 09a8001

Trainer: fix support for non-distributed PyTorch (#14971)
* Trainer: fix non-distributed use
* Update CHANGELOG
1 parent 3028fd2 · commit 09a8001

File tree

2 files changed: +4 −1 lines changed


src/pytorch_lightning/CHANGELOG.md

Lines changed: 3 additions & 0 deletions
@@ -307,6 +307,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed an issue with terminating the trainer profiler when a `StopIteration` exception is raised while using an `IterableDataset` ([#14940](https://github.com/Lightning-AI/lightning/pull/14945))
 
 
+- Fixed `Trainer` support for PyTorch built without distributed support ([#14971](https://github.com/Lightning-AI/lightning/pull/14971))
+
+
 
 ## [1.7.7] - 2022-09-22
 

src/pytorch_lightning/trainer/trainer.py

Lines changed: 1 addition & 1 deletion
@@ -2233,7 +2233,7 @@ def _evaluation_context(accelerator: Accelerator) -> Generator:
     # and HPU & TPU accelerators.
     context_manager_class = (
         torch.inference_mode
-        if not (dist.is_initialized() and dist.get_backend() == "gloo")
+        if not (dist.is_available() and dist.is_initialized() and dist.get_backend() == "gloo")
         and not isinstance(accelerator, HPUAccelerator)
         and not isinstance(accelerator, TPUAccelerator)
         else torch.no_grad
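
Why the one-line change matters: on a PyTorch build compiled without distributed support, `torch.distributed.is_available()` returns `False`, and other functions in the module, such as `torch.distributed.is_initialized()`, may not be defined at all, so calling them raises an `AttributeError`. Checking `is_available()` first lets Python's short-circuiting `and` skip the unsafe calls. Below is a minimal sketch of the pattern, assuming a standalone helper; the name `_gloo_backend_active` is illustrative, not part of the Lightning codebase:

import torch
import torch.distributed as dist

def _gloo_backend_active() -> bool:
    # is_available() exists on every build and returns False when PyTorch
    # was compiled without distributed support; only when it is True is it
    # safe to call is_initialized() and get_backend(). The `and` chain
    # short-circuits before reaching them otherwise.
    return dist.is_available() and dist.is_initialized() and dist.get_backend() == "gloo"

# Mirroring the diff above (HPU/TPU checks omitted for brevity): prefer
# torch.inference_mode for evaluation, falling back to torch.no_grad when
# a gloo-backed process group is active.
context_manager_class = torch.inference_mode if not _gloo_backend_active() else torch.no_grad

with context_manager_class():
    pass  # the evaluation loop would run here, with autograd tracking disabled

Per the truncated comment visible in the hunk ("# and HPU & TPU accelerators."), `torch.inference_mode` is avoided for gloo-backed runs and for HPU/TPU accelerators; `torch.no_grad` still disables gradient tracking without inference mode's stricter tensor semantics.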
