Commit e77fff3

minor doc improv
1 parent 4fd16e7 commit e77fff3

File tree

1 file changed: +6 / -5 lines

doc/frameworks/pytorch/using_pytorch.rst

Lines changed: 6 additions & 5 deletions
@@ -202,18 +202,19 @@ Distributed PyTorch Training
 
 SageMaker supports the `PyTorch DistributedDataParallel (DDP)
 <https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html>`_
-package. You simply need to check the variables in your distributed training script,
-such as the world size and the rank of the current host,
-to be matching with the specs of the ML instance type you use.
-And then launch the training job using the SageMaker PyTorch estimator
+package. You simply need to check the variables in your training script,
+such as the world size and the rank of the current host, when initializing
+process groups for distributed training.
+And then, launch the training job using the
+:class:`sagemaker.pytorch.estimator.PyTorch` estimator class
 with the ``pytorchddp`` option as the distribution strategy.
 
 .. note::
 
   This PyTorch DDP support is available
   in the SageMaker PyTorch Deep Learning Containers v1.12 and later.
 
-Adapt your Training Script
+Adapt Your Training Script
 --------------------------
 
 To initialize distributed training in your script, call
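For context, the revised passage describes launching a training job through the :class:`sagemaker.pytorch.estimator.PyTorch` estimator with the ``pytorchddp`` distribution strategy. A minimal sketch of that launch is shown below; the entry point, IAM role, bucket, instance type, and framework version are placeholder values for illustration, not taken from this commit:

    # Illustrative sketch only; placeholder names and values throughout.
    from sagemaker.pytorch import PyTorch

    estimator = PyTorch(
        entry_point="train.py",                  # hypothetical DDP training script
        role="<your-sagemaker-execution-role>",  # IAM role ARN
        framework_version="1.12.0",              # DDP support needs DLC v1.12 or later
        py_version="py38",
        instance_count=2,
        instance_type="ml.p4d.24xlarge",         # example multi-GPU instance type
        distribution={"pytorchddp": {"enabled": True}},
    )
    estimator.fit("s3://<your-bucket>/<your-training-data>")

Inside the training script itself, the rank of the current host and the world size are typically read from the environment set up by the launcher before the process group is initialized with ``torch.distributed.init_process_group``, which is what the sentence ending this hunk goes on to explain.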
