@@ -18,7 +18,7 @@
 # regarding data preprocessing, model theory and definition, and model
 # training.
 #
-# What is the TorchScript?
+# What is TorchScript?
 # ----------------------------
 #
 # During the research and development phase of a deep learning-based

@@ -53,19 +53,22 @@
 # will be recorded. In other words, the control flow itself is not
 # captured. To convert modules and functions containing data-dependent
 # control flow, a **scripting** mechanism is provided. The
-# ``torch.jit.script`` function takes module or function and does not
-# requires example inputs. Scripting then explicitly converts the module
-# or function code to TorchScript, including all possible control flow
-# routes. The one caveat with using scripting is that it only supports
-# a subset of Python, so you might need to rewrite the code to make it
-# compatible with TorchScript syntax.
+# ``torch.jit.script`` function/decorator takes a module or function and
+# does not require example inputs. Scripting then explicitly converts
+# the module or function code to TorchScript, including all control flows.
+# One caveat with using scripting is that it only supports a subset of
+# Python, so you might need to rewrite the code to make it compatible
+# with the TorchScript syntax.
 #
 # For all details relating to the supported features, see the TorchScript
 # `language reference <https://pytorch.org/docs/master/jit.html>`__. To
 # provide the maximum flexibility, you can also mix tracing and scripting
 # modes together to represent your whole program, and these techniques can
 # be applied incrementally.
 #
+# .. figure:: /_static/img/chatbot/pytorch_workflow.png
+#    :align: center
+#    :alt: workflow
 #
 
 

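The scripting mechanism described in this hunk can be sketched as follows. This is an illustrative example, not code from the tutorial: the function name `choose` and the input tensors are made up, but the pattern (a `torch.jit.script`-decorated function with data-dependent control flow, no example inputs needed) is the one the text describes.

```python
import torch

# Data-dependent control flow: tracing would record only the branch taken
# for a given example input, while scripting preserves both branches.
@torch.jit.script
def choose(x):
    if x.sum() > 0:
        return x * 2
    else:
        return -x

print(choose(torch.tensor([1.0, 2.0])))    # tensor([2., 4.])
print(choose(torch.tensor([-1.0, -2.0])))  # tensor([1., 2.])
```

Both branches survive the conversion, so the scripted function behaves correctly on inputs that take either path.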
@@ -385,7 +388,7 @@ def forward(self, hidden, encoder_outputs):
 # TorchScript Notes:
 # ~~~~~~~~~~~~~~~~~~~~~~
 #
-# Similarly to the ``EncoderRNN```, this module does not contain any
+# Similarly to the ``EncoderRNN``, this module does not contain any
 # data-dependent control flow. Therefore, we can once again use
 # **tracing** to convert this model to TorchScript after it
 # is initialized and its parameters are loaded.
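The tracing workflow this note refers to can be sketched with a minimal stand-in module. The `Encoder` class below is hypothetical (it is not the tutorial's `EncoderRNN`); it only illustrates the pattern: a module whose `forward` has no data-dependent control flow, traced after initialization with one example input.

```python
import torch
import torch.nn as nn

# Hypothetical minimal stand-in for a traced encoder: no data-dependent
# control flow in forward, so one example input captures the full graph.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(10, 8)
        self.gru = nn.GRU(8, 8)

    def forward(self, x):
        return self.gru(self.embedding(x))

enc = Encoder()
enc.eval()  # parameters would normally be loaded before this point

# Trace after initialization; the example input only fixes the op sequence.
example = torch.tensor([[1], [2], [3]])  # (seq_len=3, batch=1)
traced_enc = torch.jit.trace(enc, example)

out, hidden = traced_enc(torch.tensor([[4], [5], [6], [7]]))
print(out.shape)  # torch.Size([4, 1, 8])
```

Because the recorded ops handle dynamic shapes, the traced module still accepts sequences of other lengths, as the final call shows.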
@@ -692,7 +695,7 @@ def evaluateExample(sentence, searcher, voc):
 # for some part of your models, you must call .to(device) to set the device
 # options of the models and .eval() to set the dropout layers to test mode
 # **before** tracing the models. `TracedModule` objects do not inherit the
-# ``to``` or ``eval``` methods. Since in this tutorial we are only using
+# ``to`` or ``eval`` methods. Since in this tutorial we are only using
 # scripting instead of tracing, we only need to do this before we do
 # evaluation (which is the same as we normally do in eager mode).
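The order of operations described in this note can be sketched as below. The model here is a hypothetical `nn.Sequential` with a dropout layer, not one of the tutorial's models, and the CPU device is an assumption; the point is only that `.to(device)` and `.eval()` run before `torch.jit.trace`, so the traced module has dropout baked into test mode.

```python
import torch
import torch.nn as nn

# Hypothetical module with dropout, to show why .to()/.eval() must come
# before tracing.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

device = torch.device("cpu")  # assumption: CPU-only sketch
model.to(device)
model.eval()  # dropout fixed to test mode BEFORE tracing

traced = torch.jit.trace(model, torch.randn(1, 4, device=device))

x = torch.randn(1, 4)
# With dropout traced in eval mode, repeated calls are deterministic.
assert torch.equal(traced(x), traced(x))
```

Had the model still been in training mode when traced, the dropout op would have been recorded with randomness enabled, and the final assertion would not be guaranteed to hold.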