@@ -33,9 +33,9 @@ Defining schema and backend implementations
 The general principle behind the dispatcher is that it divides the
 implementation of an operator into multiple kernels, each of which
 implements functionality for a specific *dispatch key*; for example,
-` CPU `, ` CUDA ` or ` Autograd `. The end effect is that when you call
-an operator, we first execute the ` Autograd ` kernel, and then we
-redispatch to the ` CPU ` or ` CUDA ` kernel depending on the device
+CPU, CUDA or Autograd. The end effect is that when you call
+an operator, we first execute the Autograd kernel, and then we
+redispatch to the CPU or CUDA kernel depending on the device
 types of the passed in tensors.

 Let's take a look at the various parts involved in making this
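
For reference, a minimal sketch of the first part, the schema definition: the operator is declared once in a ``TORCH_LIBRARY`` block, with no kernel attached yet. The ``myops`` namespace and the exact schema string below are illustrative assumptions, not lines taken from the tutorial's source files.

.. code-block:: cpp

   #include <torch/library.h>

   // Declare the operator's schema once for the namespace.  No implementation
   // is attached here; kernels are registered separately, one per dispatch key.
   TORCH_LIBRARY(myops, m) {
     m.def("myadd(Tensor self, Tensor other) -> Tensor");
   }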
@@ -69,7 +69,7 @@ To do this, we can use the ``TORCH_LIBRARY_IMPL`` macro:
   :end-before: END TORCH_LIBRARY_IMPL CPU

 The ``TORCH_LIBRARY_IMPL`` lets us register implementations for operators on
-a specific dispatch key (in this case, `` CPU ``). Each call to ``impl``
+a specific dispatch key (in this case, CPU). Each call to ``impl``
 associates a CPU kernel with the corresponding operator (which we previously
 defined in the ``TORCH_LIBRARY`` block). You can have as many
 ``TORCH_LIBRARY_IMPL`` blocks for a namespace as you like; so for example,
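
To make the registration pattern concrete, here is a sketch of separate CPU and CUDA registrations for a hypothetical ``myops::myadd`` operator; the kernel bodies are placeholders rather than the tutorial's ``op.cpp`` code.

.. code-block:: cpp

   #include <ATen/ATen.h>
   #include <torch/library.h>

   at::Tensor myadd_cpu(const at::Tensor& self, const at::Tensor& other) {
     // Inputs are guaranteed to be CPU tensors by the time this kernel runs.
     return self + other;
   }

   at::Tensor myadd_cuda(const at::Tensor& self, const at::Tensor& other) {
     // A real kernel would launch CUDA code; this placeholder just reuses ATen.
     return self + other;
   }

   // One TORCH_LIBRARY_IMPL block per dispatch key; a namespace can have as
   // many of these blocks as it needs.
   TORCH_LIBRARY_IMPL(myops, CPU, m) {
     m.impl("myadd", myadd_cpu);
   }

   TORCH_LIBRARY_IMPL(myops, CUDA, m) {
     m.impl("myadd", myadd_cuda);
   }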
@@ -147,7 +147,7 @@ The autograd function is written as normal using ``torch::autograd::Function``,
 except that instead of directly writing the implementation in ``forward()``,
 we:

-1. Turn off autograd handling with the `at::AutoNonVariableTypeMode`` RAII
+1. Turn off autograd handling with the ``at::AutoNonVariableTypeMode`` RAII
    guard, and then
 2. Call the dispatch function ``myadd`` to call back into the dispatcher.

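A condensed sketch of those two steps, assuming ``myadd`` is the dispatch function defined earlier in the tutorial (so calling it re-enters the dispatcher) and assuming, purely for illustration, that the gradient flows through unchanged to both inputs:

.. code-block:: cpp

   #include <ATen/ATen.h>
   #include <torch/autograd.h>
   #include <torch/library.h>

   // Assumed dispatch function; calling it goes back through the dispatcher.
   at::Tensor myadd(const at::Tensor& self, const at::Tensor& other);

   class MyAddFunction : public torch::autograd::Function<MyAddFunction> {
    public:
     static at::Tensor forward(
         torch::autograd::AutogradContext* ctx,
         const at::Tensor& self,
         const at::Tensor& other) {
       // Step 1: turn off autograd handling for the nested call.
       at::AutoNonVariableTypeMode g;
       // Step 2: redispatch; with Autograd excluded, this lands on CPU/CUDA.
       return myadd(self, other);
     }

     static torch::autograd::variable_list backward(
         torch::autograd::AutogradContext* ctx,
         torch::autograd::variable_list grad_outputs) {
       // For an add-like op, the gradient passes through to both inputs.
       return {grad_outputs[0], grad_outputs[0]};
     }
   };

   at::Tensor myadd_autograd(const at::Tensor& self, const at::Tensor& other) {
     return MyAddFunction::apply(self, other);
   }

   TORCH_LIBRARY_IMPL(myops, Autograd, m) {
     m.impl("myadd", myadd_autograd);
   }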
@@ -249,24 +249,6 @@ general rules:
 * Any operation that does a convolution or gemm under the hood should
   probably be float16

-..
-
-   NB: This doesn't work because torch.ops doesn't support names.
-
-   Named
-   ^^^^^
-
-   `Named tensors <https://pytorch.org/docs/stable/named_tensor.html>`_ allow
-   users to associate explicit names with tensor dimensions, and then have those
-   dimensions be propagated when you run operations on those tensors. If you
-   define a new operator, you have to also define rules for how names should
-   be checked and propagated. The Named kernel handles implementing these rules.
-
-   .. literalinclude:: ../advanced_source/dispatcher/op.cpp
-      :language: cpp
-      :start-after: BEGIN TORCH_LIBRARY_IMPL Named
-      :end-before: END TORCH_LIBRARY_IMPL Named
-
 Batched
 ^^^^^^^

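As a concrete illustration of the autocast rules listed above, here is a sketch of an autocast wrapper for a hypothetical gemm-backed ``myops::mymatmul`` operator; both the operator and the choice of ``at::kHalf`` are assumptions that follow the float16 rule.

.. code-block:: cpp

   #include <ATen/autocast_mode.h>
   #include <c10/core/impl/LocalDispatchKeySet.h>
   #include <torch/library.h>

   // Assumed dispatch function for a gemm-backed operator.
   at::Tensor mymatmul(const at::Tensor& self, const at::Tensor& other);

   at::Tensor mymatmul_autocast(const at::Tensor& self, const at::Tensor& other) {
     // Exclude Autocast so the redispatch below does not come back here.
     c10::impl::ExcludeDispatchKeyGuard no_autocast(c10::autocast_dispatch_keyset);
     // gemm-like op: run it in float16.  cached_cast only converts eligible
     // float32 inputs and reuses casts already performed in this autocast region.
     return mymatmul(at::autocast::cached_cast(at::kHalf, self),
                     at::autocast::cached_cast(at::kHalf, other));
   }

   TORCH_LIBRARY_IMPL(myops, Autocast, m) {
     m.impl("mymatmul", mymatmul_autocast);
   }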
@@ -282,5 +264,5 @@ Tracer
 The Tracer dispatch key implements support for recording invocations of operators
 into a trace when you run ``torch.jit.trace``. We intend to provide a
 boxed fallback that will implement tracing for arbitrary operations,
-see `issue #41478 <https://github.com/pytorch/pytorch/issues/41478> ` to track
+see `issue #41478 <https://github.com/pytorch/pytorch/issues/41478>`_ to track
 progress.