Proposed refactoring or deprecation
Current names:
pytorch_lightning/plugins/precision/
├── apex_amp.py
├── deepspeed_precision.py
├── double.py
├── fully_sharded_native_amp.py
├── ipu_precision.py
├── mixed.py
├── native_amp.py
├── precision_plugin.py
├── sharded_native_amp.py
├── tpu.py
└── tpu_bfloat.py

Motivation
I had to choose between these inconsistent naming conventions while working on #10020.
Pitch
Drop the redundant suffixes, since the files already live under the precision/ directory:
ipu_precision.py -> ipu.py
deepspeed_precision.py -> deepspeed.py