
Commit d0ef6f8

Merge branch 'master' into deprecate_metric_functions
2 parents dc2a8aa + 7e8673d

File tree

2 files changed: +3 -3 lines

docs/source/multi_gpu.rst

Lines changed: 2 additions & 2 deletions

@@ -593,9 +593,9 @@ Below are the possible configurations we support.
 
 Implement Your Own Distributed (DDP) training
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.core.LightningModule.`.
+If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.init_ddp_connection`.
 
-If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.core.LightningModule.configure_ddp`.
+If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.configure_ddp`.
 
 
 ----------
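
The corrected references in this hunk live on the plugin, not the LightningModule. As a rough sketch of how such an override might look (``MyDDPPlugin`` is a hypothetical name, and the hook signatures have shifted between releases, so the ``*args, **kwargs`` pass-through for ``init_ddp_connection`` is an assumption rather than the exact API):

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.plugins.ddp_plugin import DDPPlugin


    class MyDDPPlugin(DDPPlugin):  # hypothetical subclass, for illustration only
        def init_ddp_connection(self, *args, **kwargs):
            # Custom process-group setup (e.g. a different backend or
            # rendezvous scheme) would go here; this sketch defers to the default.
            return super().init_ddp_connection(*args, **kwargs)

        def configure_ddp(self, model, device_ids):
            # Return your own DDP wrapper around the LightningModule here;
            # by default Lightning wraps it in its DistributedDataParallel subclass.
            return super().configure_ddp(model, device_ids)


    # Custom plugins are handed to the Trainer via the ``plugins`` argument.
    trainer = Trainer(gpus=2, accelerator="ddp", plugins=[MyDDPPlugin()])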

docs/source/weights_loading.rst

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ You can customize the checkpointing behavior to monitor any quantity of your training
 1. Calculate any metric or other quantity you wish to monitor, such as validation loss.
 2. Log the quantity using the :func:`~pytorch_lightning.core.lightning.LightningModule.log` method, with a key such as `val_loss`.
 3. Initialize the :class:`~pytorch_lightning.callbacks.ModelCheckpoint` callback, and set `monitor` to the key of your quantity.
-4. Pass the callback to `checkpoint_callback` :class:`~pytorch_lightning.trainer.Trainer` flag.
+4. Pass the callback to the `callbacks` :class:`~pytorch_lightning.trainer.Trainer` flag.
 
 .. code-block:: python
 
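
With the corrected step 4, the four steps wire together as below — a minimal sketch, assuming a standard classification ``LightningModule`` (the ``val_loss`` key and the module internals are illustrative, not taken from this commit):

.. code-block:: python

    import torch.nn.functional as F

    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import ModelCheckpoint


    class LitModel(pl.LightningModule):
        def validation_step(self, batch, batch_idx):
            x, y = batch
            loss = F.cross_entropy(self(x), y)
            # Step 2: log the monitored quantity under the key ``val_loss``.
            self.log("val_loss", loss)


    # Step 3: initialize ModelCheckpoint and point ``monitor`` at that key.
    checkpoint_callback = ModelCheckpoint(monitor="val_loss")

    # Step 4: pass the callback via the ``callbacks`` Trainer flag,
    # not the ``checkpoint_callback`` flag this hunk replaces.
    trainer = pl.Trainer(callbacks=[checkpoint_callback])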
