Proposed refactor
Internally, our checks against the Accelerator type, the Precision type, and the strategy type are not robust to custom instances passed in by the user.
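For illustration, here is a minimal, self-contained sketch of the problem (the enum and class names below are toy stand-ins, not Lightning's actual internals): a tag/enum comparison only matches if some connector code has explicitly classified the user's custom class, whereas an `isinstance` check covers subclasses automatically.

```python
from enum import Enum


class DeviceType(Enum):
    CPU = "cpu"
    GPU = "gpu"


class CPUAccelerator:
    """Stand-in for a built-in CPU accelerator."""


class MyCPUAccelerator(CPUAccelerator):
    """A custom accelerator the user passes in."""


accelerator = MyCPUAccelerator()

# Tag-based check: the connector has to map MyCPUAccelerator to DeviceType.CPU;
# if it only knows the built-in classes, the tag stays unset.
device_type = DeviceType.CPU if type(accelerator) is CPUAccelerator else None
print(device_type == DeviceType.CPU)            # False -> CPU-only path is skipped

# Type-based check: any subclass of CPUAccelerator is recognized.
print(isinstance(accelerator, CPUAccelerator))  # True
```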
Motivation
Internally, some operations in optimization, logging, etc. need a different code path depending on 1) the accelerator type (CPU, GPU), 2) the precision type (apex, native), or 3) the strategy type (ddp, ddp-spawn, ...). Currently we have this pattern:
```python
if trainer._device_type == DeviceType.CPU:
    # do something only for cpu

if trainer._amp_backend == AMPType.Apex:
    # do something differently for apex
```

Pitch
Change these to:

```python
if isinstance(trainer.accelerator, CPUAccelerator):
    # do something only for cpu

if isinstance(trainer.precision_plugin, ApexPrecisionPlugin):
    # do something differently for apex
```

This has the benefits:
- Works when the user passes in custom plugins (subclasses of our plugins); see the sketch after this list
- Encapsulation: protected members like _device_type and _strategy_type are no longer accessed publicly; they remain an implementation detail of the AcceleratorConnector
- Minimally simplifies the AcceleratorConnector logic
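As a sketch of the first benefit (toy classes only; the plugin names and the gradient-clipping hook below are illustrative, not Lightning's real API): an internal code path guarded by `isinstance` keeps working when the user hands in a subclass of a built-in plugin.

```python
class PrecisionPlugin:
    """Stand-in base class for precision plugins."""


class ApexPrecisionPlugin(PrecisionPlugin):
    """Stand-in for the built-in Apex precision plugin."""


class MyTunedApexPlugin(ApexPrecisionPlugin):
    """User-provided customization of the Apex plugin."""


def clip_gradients(precision_plugin: PrecisionPlugin) -> str:
    # Internal branch: the Apex-specific path is taken for the built-in
    # plugin and for any user subclass of it.
    if isinstance(precision_plugin, ApexPrecisionPlugin):
        return "apex-specific clipping"
    return "default clipping"


print(clip_gradients(ApexPrecisionPlugin()))  # apex-specific clipping
print(clip_gradients(MyTunedApexPlugin()))    # apex-specific clipping (subclass covered)
```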
Additional context
Discussion started in #10596
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @Borda @justusschock @awaelchli @rohitgr7 @kaushikb11 @akihironitta @ananthsub