@@ -90,7 +90,7 @@ The parameter server just initializes the RPC framework and waits for RPCs from
 the trainers and master.


-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN run_worker
    :end-before: END run_worker
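For orientation, here is a minimal sketch of what that parameter-server entry point amounts to, assuming an illustrative worker name, rank and rendezvous port rather than the exact values used in ``main.py``:

.. code:: python

    import os
    import torch.distributed.rpc as rpc

    def run_parameter_server(rank, world_size):
        # The parameter server runs no training loop of its own; it only
        # joins the RPC group so trainers can create RemoteModules on it.
        os.environ.setdefault("MASTER_ADDR", "localhost")
        os.environ.setdefault("MASTER_PORT", "29500")
        rpc.init_rpc("ps", rank=rank, world_size=world_size)
        # shutdown() blocks until the master and trainers have finished all
        # outstanding RPCs, then tears down the RPC framework.
        rpc.shutdown()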
@@ -107,7 +107,7 @@ embedding lookup on the parameter server using RemoteModule's ``forward``
 and passes its output onto the FC layer.


-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN hybrid_model
    :end-before: END hybrid_model
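In rough outline, the hybrid model combines a ``RemoteModule`` handle with a DDP-wrapped local layer; the layer sizes and device handling below are assumptions for illustration, not the exact code included above:

.. code:: python

    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    class HybridModel(nn.Module):
        def __init__(self, remote_emb_module, device):
            super().__init__()
            # RemoteModule handle to an EmbeddingBag held on the parameter server.
            self.remote_emb_module = remote_emb_module
            # Local FC layer, replicated and synchronized across trainers via DDP.
            self.fc = DDP(nn.Linear(16, 8).cuda(device), device_ids=[device])
            self.device = device

        def forward(self, indices, offsets):
            # The sparse embedding lookup runs remotely on the parameter server,
            # and its output is then fed to the DDP-wrapped FC layer locally.
            emb_lookup = self.remote_emb_module.forward(indices, offsets)
            return self.fc(emb_lookup.cuda(self.device))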
@@ -134,7 +134,7 @@ which is not supported by ``RemoteModule``.
 Finally, we create our DistributedOptimizer using all the RRefs and define a
 CrossEntropyLoss function.

-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN setup_trainer
    :end-before: END setup_trainer
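The RRef bookkeeping behind that optimizer looks roughly as follows; ``model`` refers to the hybrid model sketched earlier, and the SGD learning rate is illustrative:

.. code:: python

    import torch
    import torch.optim as optim
    from torch.distributed.optim import DistributedOptimizer
    from torch.distributed.rpc import RRef

    # RRefs to the EmbeddingBag parameters living on the parameter server.
    model_parameter_rrefs = model.remote_emb_module.remote_parameters()
    # Local DDP parameters are wrapped in RRefs as well, so the
    # DistributedOptimizer can address every parameter uniformly.
    for param in model.fc.parameters():
        model_parameter_rrefs.append(RRef(param))

    opt = DistributedOptimizer(optim.SGD, model_parameter_rrefs, lr=0.05)
    criterion = torch.nn.CrossEntropyLoss()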
@@ -151,11 +151,10 @@ batch:
 4) Use Distributed Autograd to execute a distributed backward pass using the loss.
 5) Finally, run a Distributed Optimizer step to optimize all the parameters.

-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN run_trainer
    :end-before: END run_trainer
 .. code:: python

 Source code for the entire example can be found `here <https://github.com/pytorch/examples/tree/master/distributed/rpc/ddp_rpc>`__.
-
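Condensed into code, one iteration of that per-batch flow can be sketched as below, reusing ``model``, ``criterion`` and ``opt`` from the earlier sketches; ``get_next_batch()`` is a hypothetical stand-in for the tutorial's batch generation:

.. code:: python

    import torch.distributed.autograd as dist_autograd

    for indices, offsets, target in get_next_batch():
        # The forward pass, backward pass and optimizer step for one batch
        # all happen inside a single Distributed Autograd context, which
        # records the RPC dependencies needed for the backward pass.
        with dist_autograd.context() as context_id:
            output = model(indices, offsets)
            loss = criterion(output, target)
            # Distributed backward pass across trainers and the parameter server.
            dist_autograd.backward(context_id, [loss])
            # DistributedOptimizer updates both remote and local parameters.
            opt.step(context_id)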