2 files changed, +10 -3 lines changed

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
22"""
3- Model Parallel Best Practices
3+ Single-Machine Model Parallel Best Practices
44================================
55**Author**: `Shen Li <https://mrshenli.github.io/>`_
66
2727of model parallel. It is up to the readers to apply the ideas to real-world
2828applications.
2929
30+ .. note::
31+
32+ For distributed model parallel training where a model spans multiple
33+ servers, please refer to
34+ `Getting Started With Distributed RPC Framework <rpc_tutorial.html>__
35+ for examples and details.
36+
3037Basic Usage
3138-----------
3239"""
@@ -12,10 +12,10 @@ This tutorial uses two simple examples to demonstrate how to build distributed
 training with the `torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__
 package which is first introduced as an experimental feature in PyTorch v1.4.
 Source code of the two examples can be found in
-`PyTorch examples <https://github.com/pytorch/examples>`__
+`PyTorch examples <https://github.com/pytorch/examples>`__.

 Previous tutorials,
-`Getting Started With Distributed Data Parallel <https://pytorch.org/tutorials/intermediate/ddp_tutorial.html>`__
+`Getting Started With Distributed Data Parallel <ddp_tutorial.html>`__
 and `Writing Distributed Applications With PyTorch <https://pytorch.org/tutorials/intermediate/dist_tuto.html>`__,
 described `DistributedDataParallel <https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html>`__
 which supports a specific training paradigm where the model is replicated across
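The hunk above describes the paradigm `DistributedDataParallel` supports: each process holds a full replica of the model, and gradient synchronization during the backward pass keeps the replicas consistent. Here is a rough sketch of that setup; the two-process `gloo` group, toy linear model, and port number are assumptions for illustration, not code from the tutorials:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP

def run_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    # One full model replica per process; gloo keeps the example CPU-only.
    dist.init_process_group('gloo', rank=rank, world_size=world_size)

    ddp_model = DDP(nn.Linear(10, 5))
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    # backward() all-reduces gradients, so every replica takes the same step.
    loss = ddp_model(torch.randn(20, 10)).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    mp.spawn(run_worker, args=(2,), nprocs=2)
```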
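By contrast, the `torch.distributed.rpc` package that this tutorial introduces lets one process invoke functions on another, which is the building block for models that span machines. A minimal sketch of a two-worker RPC round trip, with the worker names, port, and toy computation assumed for illustration:

```python
import os
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def run_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29501'
    # Each process joins the RPC group under a unique name.
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    if rank == 0:
        # Run torch.add remotely on worker1 and fetch the result.
        result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 1))
        print(result)  # tensor([2., 2.])
    # Block until every worker is done before tearing down RPC.
    rpc.shutdown()

if __name__ == '__main__':
    mp.spawn(run_worker, args=(2,), nprocs=2)
```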