Merged
5 changes: 0 additions & 5 deletions docs/source-pytorch/api_references.rst
@@ -176,13 +176,11 @@ precision
     ColossalAIPrecisionPlugin
     DeepSpeedPrecisionPlugin
     DoublePrecisionPlugin
-    FullyShardedNativeMixedPrecisionPlugin
     FullyShardedNativeNativeMixedPrecisionPlugin
     HPUPrecisionPlugin
     IPUPrecisionPlugin
     MixedPrecisionPlugin
     PrecisionPlugin
-    ShardedNativeMixedPrecisionPlugin
     TPUBf16PrecisionPlugin
     TPUPrecisionPlugin

@@ -276,9 +274,6 @@ strategies
     BaguaStrategy
     ColossalAIStrategy
     DDPFullyShardedNativeStrategy
-    DDPFullyShardedStrategy
-    DDPShardedStrategy
-    DDPSpawnShardedStrategy
     DDPSpawnStrategy
     DDPStrategy
     DataParallelStrategy
1 change: 0 additions & 1 deletion docs/source-pytorch/conf.py
@@ -294,7 +294,6 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
     "numpy": ("https://numpy.org/doc/stable/", None),
     "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
     "torchmetrics": ("https://torchmetrics.readthedocs.io/en/stable/", None),
-    "fairscale": ("https://fairscale.readthedocs.io/en/latest/", None),
     "graphcore": ("https://docs.graphcore.ai/en/latest/", None),
 }

2 changes: 0 additions & 2 deletions docs/source-pytorch/extensions/plugins.rst
@@ -55,13 +55,11 @@ The full list of built-in precision plugins is listed below.
     ColossalAIPrecisionPlugin
     DeepSpeedPrecisionPlugin
     DoublePrecisionPlugin
-    FullyShardedNativeMixedPrecisionPlugin
     FullyShardedNativeNativeMixedPrecisionPlugin
     HPUPrecisionPlugin
     IPUPrecisionPlugin
     MixedPrecisionPlugin
     PrecisionPlugin
-    ShardedNativeMixedPrecisionPlugin
     TPUBf16PrecisionPlugin
     TPUPrecisionPlugin

2 changes: 1 addition & 1 deletion docs/source-pytorch/guides/speed.rst
@@ -28,7 +28,7 @@ GPU Training
 Lightning supports a variety of plugins to speed up distributed GPU training. Most notably:

 * :class:`~pytorch_lightning.strategies.DDPStrategy`
-* :class:`~pytorch_lightning.strategies.DDPShardedStrategy`
+* :class:`~pytorch_lightning.strategies.DDPFullyShardedNativeStrategy`
 * :class:`~pytorch_lightning.strategies.DeepSpeedStrategy`

 .. code-block:: python
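The snippet under the `.. code-block:: python` directive is collapsed in this view. As a rough sketch only (not the snippet from the docs), selecting one of the strategies listed above might look like the following, with the model and datamodule assumed to be defined elsewhere:

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy

    # Any strategy from the list above can be passed to the Trainer; native FSDP is shown here.
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=4,
        strategy=DDPFullyShardedNativeStrategy(),
    )
    # trainer.fit(model, datamodule=dm)  # `model` and `dm` are placeholders defined elsewhere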
1 change: 0 additions & 1 deletion requirements/pytorch/check-avail-strategies.py
@@ -1,4 +1,3 @@
 if __name__ == "__main__":
     import bagua  # noqa: F401
     import deepspeed  # noqa: F401
-    import fairscale  # noqa: F401
1 change: 0 additions & 1 deletion requirements/pytorch/strategies.txt
@@ -2,5 +2,4 @@
 # in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment

 # colossalai>=0.1.10 # TODO: uncomment when there's a stable version released
-fairscale>=0.4.5, <0.4.13
 deepspeed>=0.6.0, <=0.7.0
1 change: 0 additions & 1 deletion src/lightning_app/components/multi_node/trainer.py
@@ -40,7 +40,6 @@ def run(
         try:
             pkg = importlib.import_module(pkg_name)
             trainers.append(pkg.Trainer)
-            strategies.append(pkg.strategies.DDPSpawnShardedStrategy)
             strategies.append(pkg.strategies.DDPSpawnStrategy)
             mps_accelerators.append(pkg.accelerators.MPSAccelerator)
         except (ImportError, ModuleNotFoundError):
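For readers unfamiliar with the pattern in this hunk, here is a simplified, self-contained sketch of the optional-import loop; the candidate package names and the surrounding loop are assumptions, not code from this PR:

    import importlib

    trainers, strategies = [], []
    for pkg_name in ("lightning.pytorch", "pytorch_lightning"):  # candidate package names (assumed)
        try:
            pkg = importlib.import_module(pkg_name)
            trainers.append(pkg.Trainer)
            strategies.append(pkg.strategies.DDPSpawnStrategy)
        except (ImportError, ModuleNotFoundError):
            continue  # package not installed; skip it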
8 changes: 8 additions & 0 deletions src/pytorch_lightning/CHANGELOG.md
@@ -44,6 +44,14 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

 - Removed `Trainer(strategy='horovod')` support ([#16150](https://github.com/Lightning-AI/lightning/pull/16150))

+- `FairScale` removal (in favor of PyTorch's FSDP implementation) ([#16400](https://github.com/PyTorchLightning/pytorch-lightning/pull/16400))
+  * Removed the `pytorch_lightning.overrides.fairscale.LightningShardedDataParallel` class
+  * Removed the `pytorch_lightning.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin` class
+  * Removed the `pytorch_lightning.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin` class
+  * Removed the `pytorch_lightning.strategies.fully_sharded.DDPFullyShardedStrategy` (fsdp) class
+  * Removed the `pytorch_lightning.strategies.sharded.DDPShardedStrategy` (ddp_sharded) class
+  * Removed the `pytorch_lightning.strategies.sharded_spawn.DDPSpawnShardedStrategy` (ddp_sharded_spawn) class
+
 - Removed legacy device arguments in Trainer ([#16171](https://github.com/Lightning-AI/lightning/pull/16171))
   * Removed the `Trainer(gpus=...)` argument
   * Removed the `Trainer(tpu_cores=...)` argument
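A possible migration path implied by the `FairScale` removal entry above, sketched under the assumption of typical user code (the `fsdp_native` alias is an assumption from the 1.x release line, not stated in this PR):

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy

    # Before (removed by this PR): pl.Trainer(strategy="ddp_sharded") or pl.Trainer(strategy="fsdp")
    # After: the PyTorch-native FSDP strategy, passed by class ...
    trainer = pl.Trainer(accelerator="gpu", devices=2, strategy=DDPFullyShardedNativeStrategy())
    # ... or by its registered alias (alias name assumed, see above):
    # trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="fsdp_native")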
4 changes: 2 additions & 2 deletions src/pytorch_lightning/callbacks/stochastic_weight_avg.py
@@ -25,7 +25,7 @@
 import pytorch_lightning as pl
 from lightning_fabric.utilities.types import LRScheduler
 from pytorch_lightning.callbacks.callback import Callback
-from pytorch_lightning.strategies import DDPFullyShardedStrategy, DeepSpeedStrategy
+from pytorch_lightning.strategies import DeepSpeedStrategy
 from pytorch_lightning.strategies.fully_sharded_native import DDPFullyShardedNativeStrategy
 from pytorch_lightning.utilities.exceptions import MisconfigurationException
 from pytorch_lightning.utilities.rank_zero import rank_zero_info, rank_zero_warn
@@ -146,7 +146,7 @@ def pl_module_contains_batch_norm(pl_module: "pl.LightningModule") -> bool:
         return any(isinstance(module, nn.modules.batchnorm._BatchNorm) for module in pl_module.modules())

     def setup(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", stage: str) -> None:
-        if isinstance(trainer.strategy, (DDPFullyShardedStrategy, DDPFullyShardedNativeStrategy, DeepSpeedStrategy)):
+        if isinstance(trainer.strategy, (DDPFullyShardedNativeStrategy, DeepSpeedStrategy)):
             raise MisconfigurationException("SWA does not currently support sharded models.")

         # copy the model before moving it to accelerator device.
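A hedged illustration of the configuration this `setup()` guard rejects (the strategy and callback arguments are assumptions, not taken from the PR): combining SWA with a sharded strategy raises the exception shown above once `fit()` reaches setup.

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import StochasticWeightAveraging
    from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy

    trainer = Trainer(
        accelerator="gpu",
        devices=2,
        strategy=DDPFullyShardedNativeStrategy(),
        callbacks=[StochasticWeightAveraging(swa_lrs=1e-2)],
    )
    # trainer.fit(model) would raise MisconfigurationException:
    # "SWA does not currently support sharded models."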
42 changes: 0 additions & 42 deletions src/pytorch_lightning/overrides/fairscale.py

This file was deleted.

4 changes: 0 additions & 4 deletions src/pytorch_lightning/plugins/__init__.py
@@ -8,12 +8,10 @@
 from pytorch_lightning.plugins.precision.deepspeed import DeepSpeedPrecisionPlugin
 from pytorch_lightning.plugins.precision.double import DoublePrecisionPlugin
 from pytorch_lightning.plugins.precision.fsdp_native_native_amp import FullyShardedNativeNativeMixedPrecisionPlugin
-from pytorch_lightning.plugins.precision.fully_sharded_native_amp import FullyShardedNativeMixedPrecisionPlugin
 from pytorch_lightning.plugins.precision.hpu import HPUPrecisionPlugin
 from pytorch_lightning.plugins.precision.ipu import IPUPrecisionPlugin
 from pytorch_lightning.plugins.precision.native_amp import MixedPrecisionPlugin
 from pytorch_lightning.plugins.precision.precision_plugin import PrecisionPlugin
-from pytorch_lightning.plugins.precision.sharded_native_amp import ShardedNativeMixedPrecisionPlugin
 from pytorch_lightning.plugins.precision.tpu import TPUPrecisionPlugin
 from pytorch_lightning.plugins.precision.tpu_bf16 import TPUBf16PrecisionPlugin

@@ -33,8 +31,6 @@
     "HPUPrecisionPlugin",
     "MixedPrecisionPlugin",
     "PrecisionPlugin",
-    "ShardedNativeMixedPrecisionPlugin",
-    "FullyShardedNativeMixedPrecisionPlugin",
     "FullyShardedNativeNativeMixedPrecisionPlugin",
     "TPUPrecisionPlugin",
     "TPUBf16PrecisionPlugin",
4 changes: 0 additions & 4 deletions src/pytorch_lightning/plugins/precision/__init__.py
@@ -15,12 +15,10 @@
 from pytorch_lightning.plugins.precision.deepspeed import DeepSpeedPrecisionPlugin
 from pytorch_lightning.plugins.precision.double import DoublePrecisionPlugin
 from pytorch_lightning.plugins.precision.fsdp_native_native_amp import FullyShardedNativeNativeMixedPrecisionPlugin
-from pytorch_lightning.plugins.precision.fully_sharded_native_amp import FullyShardedNativeMixedPrecisionPlugin
 from pytorch_lightning.plugins.precision.hpu import HPUPrecisionPlugin
 from pytorch_lightning.plugins.precision.ipu import IPUPrecisionPlugin
 from pytorch_lightning.plugins.precision.native_amp import MixedPrecisionPlugin
 from pytorch_lightning.plugins.precision.precision_plugin import PrecisionPlugin
-from pytorch_lightning.plugins.precision.sharded_native_amp import ShardedNativeMixedPrecisionPlugin
 from pytorch_lightning.plugins.precision.tpu import TPUPrecisionPlugin
 from pytorch_lightning.plugins.precision.tpu_bf16 import TPUBf16PrecisionPlugin

@@ -29,12 +27,10 @@
     "DeepSpeedPrecisionPlugin",
     "DoublePrecisionPlugin",
     "FullyShardedNativeNativeMixedPrecisionPlugin",
-    "FullyShardedNativeMixedPrecisionPlugin",
     "HPUPrecisionPlugin",
     "IPUPrecisionPlugin",
     "MixedPrecisionPlugin",
     "PrecisionPlugin",
-    "ShardedNativeMixedPrecisionPlugin",
     "TPUPrecisionPlugin",
     "TPUBf16PrecisionPlugin",
 ]

src/pytorch_lightning/plugins/precision/fully_sharded_native_amp.py

This file was deleted.

53 changes: 0 additions & 53 deletions src/pytorch_lightning/plugins/precision/sharded_native_amp.py

This file was deleted.

3 changes: 1 addition & 2 deletions src/pytorch_lightning/serve/servable_module_validator.py
@@ -11,15 +11,14 @@
 import pytorch_lightning as pl
 from pytorch_lightning.callbacks import Callback
 from pytorch_lightning.serve.servable_module import ServableModule
-from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy, DDPFullyShardedStrategy, DeepSpeedStrategy
+from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy, DeepSpeedStrategy
 from pytorch_lightning.utilities.exceptions import MisconfigurationException
 from pytorch_lightning.utilities.model_helpers import is_overridden
 from pytorch_lightning.utilities.rank_zero import rank_zero_only

 _NOT_SUPPORTED_STRATEGIES = (
     DeepSpeedStrategy,
     DDPFullyShardedNativeStrategy,
-    DDPFullyShardedStrategy,
 )

 _logger = logging.getLogger(__name__)
3 changes: 0 additions & 3 deletions src/pytorch_lightning/strategies/__init__.py
@@ -18,13 +18,10 @@
 from pytorch_lightning.strategies.ddp_spawn import DDPSpawnStrategy # noqa: F401
 from pytorch_lightning.strategies.deepspeed import DeepSpeedStrategy # noqa: F401
 from pytorch_lightning.strategies.dp import DataParallelStrategy # noqa: F401
-from pytorch_lightning.strategies.fully_sharded import DDPFullyShardedStrategy # noqa: F401
 from pytorch_lightning.strategies.fully_sharded_native import DDPFullyShardedNativeStrategy # noqa: F401
 from pytorch_lightning.strategies.hpu_parallel import HPUParallelStrategy # noqa: F401
 from pytorch_lightning.strategies.ipu import IPUStrategy # noqa: F401
 from pytorch_lightning.strategies.parallel import ParallelStrategy # noqa: F401
-from pytorch_lightning.strategies.sharded import DDPShardedStrategy # noqa: F401
-from pytorch_lightning.strategies.sharded_spawn import DDPSpawnShardedStrategy # noqa: F401
 from pytorch_lightning.strategies.single_device import SingleDeviceStrategy # noqa: F401
 from pytorch_lightning.strategies.single_hpu import SingleHPUStrategy # noqa: F401
 from pytorch_lightning.strategies.single_tpu import SingleTPUStrategy # noqa: F401
6 changes: 0 additions & 6 deletions src/pytorch_lightning/strategies/ddp.py
@@ -39,7 +39,6 @@
 from pytorch_lightning.core.optimizer import LightningOptimizer
 from pytorch_lightning.overrides.base import _LightningModuleWrapperBase, _LightningPrecisionModuleWrapperBase
 from pytorch_lightning.overrides.distributed import prepare_for_backward
-from pytorch_lightning.overrides.fairscale import _FAIRSCALE_AVAILABLE
 from pytorch_lightning.plugins.precision import PrecisionPlugin
 from pytorch_lightning.strategies.launchers.subprocess_script import _SubprocessScriptLauncher
 from pytorch_lightning.strategies.parallel import ParallelStrategy
@@ -49,10 +48,6 @@
 from pytorch_lightning.utilities.rank_zero import rank_zero_info, rank_zero_only
 from pytorch_lightning.utilities.types import PredictStep, STEP_OUTPUT, TestStep, ValidationStep

-if _FAIRSCALE_AVAILABLE:
-    from fairscale.optim import OSS
-else:
-    OSS = object
 if torch.distributed.is_available():
     from torch.distributed.algorithms.model_averaging.averagers import ModelAverager

@@ -230,7 +225,6 @@ def _enable_model_averaging(self) -> None:
             if (
                 is_distributed_optimizer
                 or isinstance(optimizer, ZeroRedundancyOptimizer)
-                or (_FAIRSCALE_AVAILABLE and isinstance(optimizer, OSS))
                 or isinstance(optimizer, PostLocalSGDOptimizer)
             ):
                 raise ValueError(
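The `if _FAIRSCALE_AVAILABLE:` block removed earlier in this file is an instance of a common optional-dependency guard. A generic sketch of that pattern, not the exact code this PR deletes:

    try:
        from fairscale.optim import OSS  # import the optional package if it is installed

        _FAIRSCALE_AVAILABLE = True
    except (ImportError, ModuleNotFoundError):
        OSS = object  # sentinel keeps the name defined for isinstance() checks
        _FAIRSCALE_AVAILABLE = False

    # Downstream code checks the flag first, so the overly broad sentinel is never consulted:
    # if _FAIRSCALE_AVAILABLE and isinstance(optimizer, OSS): ...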