Commit f6095d6

fix docs references
1 parent 9bb9d2e commit f6095d6

1 file changed (+4, -4)

docs/source/advanced/multi_gpu.rst

Lines changed: 4 additions & 4 deletions
@@ -580,9 +580,9 @@ Below are the possible configurations we support.
 
 Implement Your Own Distributed (DDP) training
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.init_ddp_connection`.
+If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.training_type.ddp.DDPPlugin.init_ddp_connection`.
 
-If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.configure_ddp`.
+If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.training_type.ddp.DDPPlugin.configure_ddp`.
 
 
 ----------
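For context on the renamed hooks, here is a minimal sketch of an override at the new import path. It is not part of this commit: the ``init_ddp_connection`` signature is assumed from the 1.2-era ``DDPPlugin`` API, and the ``gloo`` backend choice is purely illustrative.

.. code-block:: python

    import torch.distributed

    from pytorch_lightning import Trainer
    from pytorch_lightning.plugins.training_type.ddp import DDPPlugin


    class MyDDPPlugin(DDPPlugin):
        def init_ddp_connection(self, global_rank: int, world_size: int) -> None:
            # Custom process-group setup. The (global_rank, world_size)
            # signature is assumed from the 1.2-era API -- check your version.
            # Lightning's cluster environment normally exports MASTER_ADDR and
            # MASTER_PORT before this hook runs.
            torch.distributed.init_process_group(
                backend="gloo", rank=global_rank, world_size=world_size
            )


    # configure_ddp can be overridden the same way to swap in a custom
    # DistributedDataParallel wrapper.
    trainer = Trainer(accelerator="ddp", gpus=2, plugins=[MyDDPPlugin()])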
@@ -692,7 +692,7 @@ This should be kept within the ``sequential_module`` variable within your ``Ligh
 
 .. code-block:: python
 
-    from pytorch_lightning.plugins.legacy.ddp_sequential_plugin import DDPSequentialPlugin
+    from pytorch_lightning.plugins.training_type.rpc_sequential import RPCSequentialPlugin
     from pytorch_lightning import LightningModule
 
     class MyModel(LightningModule):
@@ -702,7 +702,7 @@ This should be kept within the ``sequential_module`` variable within your ``Ligh
 
     # Split my module across 4 gpus, one layer each
     model = MyModel()
-    plugin = DDPSequentialPlugin(balance=[1, 1, 1, 1])
+    plugin = RPCSequentialPlugin(balance=[1, 1, 1, 1])
     trainer = Trainer(accelerator='ddp', gpus=4, plugins=[plugin])
     trainer.fit(model)
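Putting the two hunks together, the updated example would read roughly as below. The body of ``MyModel`` is not shown in this diff, so the ``sequential_module`` layers here are an illustrative assumption rather than the docs' exact model.

.. code-block:: python

    from torch import nn

    from pytorch_lightning import LightningModule, Trainer
    from pytorch_lightning.plugins.training_type.rpc_sequential import RPCSequentialPlugin


    class MyModel(LightningModule):
        def __init__(self):
            super().__init__()
            # The plugin partitions this nn.Sequential across GPUs; the four
            # layers match balance=[1, 1, 1, 1] but are otherwise illustrative.
            self.sequential_module = nn.Sequential(
                nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 32), nn.Linear(32, 2)
            )


    # Split my module across 4 gpus, one layer each
    model = MyModel()
    plugin = RPCSequentialPlugin(balance=[1, 1, 1, 1])
    trainer = Trainer(accelerator='ddp', gpus=4, plugins=[plugin])
    trainer.fit(model)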
