Proposed refactor
After #11022 (Move accelerator to strategies), the teardown logic can be simplified and cleaned up.
Motivation
Reduce redundant code and make the code cleaner.
Pitch
In the DDP, DDP spawn and Horovod plugins, teardown() contains device-specific logic, for example:
https://github.com/four4fish/pytorch-lightning/blob/move-accelerator/pytorch_lightning/plugins/training_type/ddp.py#L507-L511
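For reference, the device-specific portion of that teardown() reads roughly as follows (a paraphrase of the linked DDP code, not a verbatim copy; details may differ at other commits):

    # inside the current DDP plugin (paraphrased)
    def teardown(self) -> None:
        if self.on_gpu:
            # device-specific logic currently embedded in the strategy
            self.lightning_module.cpu()
            torch.cuda.empty_cache()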
With the accelerator moved into the strategy, device-related teardown should move to the corresponding accelerator, e.g.:
    import torch

    class GPUAccelerator(Accelerator):
        def teardown(self) -> None:
            # self.lightning_module.cpu() is intentionally not called here:
            # as @ananthsub suggested, module movement should be handled by the strategies
            torch.cuda.empty_cache()
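For illustration, a strategy's teardown() could then keep the module movement and delegate device cleanup to its accelerator. This is only a sketch; the class name and the accelerator attribute are assumptions about the post-#11022 API, not the final design:

    class DDPStrategy:  # hypothetical name for the post-refactor DDP plugin
        def teardown(self) -> None:
            # module movement stays in the strategy, as suggested above
            self.lightning_module.cpu()
            # device-specific cleanup (e.g. emptying the CUDA cache)
            # is delegated to the accelerator
            self.accelerator.teardown()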
Some teardown() logic is shared between different strategies on the same device type; for example, single_tpu and tpu_spawn have the same teardown() logic:
https://github.com/four4fish/pytorch-lightning/blob/move-accelerator/pytorch_lightning/plugins/training_type/single_tpu.py#L90-L92
We can move it to TPUAccelerator:

    import os

    class TPUAccelerator(Accelerator):
        def teardown(self) -> None:
            os.environ.pop("PT_XLA_DEBUG", None)
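With that in place, the duplicated teardown() overrides in single_tpu and tpu_spawn could disappear entirely. A minimal sketch, assuming the base strategy holds an accelerator reference (names are illustrative, not the merged API):

    class Strategy:  # illustrative base class
        def __init__(self, accelerator: TPUAccelerator) -> None:
            self.accelerator = accelerator

        def teardown(self) -> None:
            # shared hook: device-specific cleanup lives in the accelerator
            self.accelerator.teardown()

    # SingleTPUStrategy and TPUSpawnStrategy would no longer override teardown();
    # both inherit the delegation above, and PT_XLA_DEBUG is popped exactly once.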
Additional context
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta @kaushikb11 @ananthsub