intermediate_source/dynamic_quantization_bert_tutorial.rst
+4 −4 (4 additions & 4 deletions)
@@ -1,10 +1,10 @@
-(experimental) Dynamic Quantization on BERT
+(beta) Dynamic Quantization on BERT
 ===========================================

 .. tip::
-   To get the most of this tutorial, we suggest using this
+   To get the most of this tutorial, we suggest using this
 `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb>`_. This will allow you to experiment with the information presented below.
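For context on the renamed tutorial: it applies dynamic quantization to a BERT model by swapping the weights of ``torch.nn.Linear`` modules for int8 versions. A minimal sketch of that call, assuming a HuggingFace ``transformers`` checkpoint (the model name below is a placeholder, not necessarily the tutorial's exact configuration):

import torch
from transformers import BertForSequenceClassification

# Placeholder checkpoint; the tutorial fine-tunes its own BERT model.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# Replace every nn.Linear with a dynamically quantized (int8) version;
# weights are stored as int8, activations are quantized on the fly.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)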
 Pytorch supports memory formats (and provides back compatibility with existing models including eager, JIT, and TorchScript) by utilizing existing strides structure.
@@ -34,7 +34,7 @@
 # Memory Format API
 # -----------------------
 #
-# Here is how to convert tensors between contiguous and channels
+# Here is how to convert tensors between contiguous and channels
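The comment above refers to converting tensors between the contiguous and channels last memory formats, which goes through the ``memory_format`` argument of ``Tensor.contiguous``. A minimal sketch of the round trip (the tensor shape is illustrative):

import torch

# NCHW tensor in the default (contiguous) memory format.
x = torch.rand(10, 3, 32, 32)
print(x.stride())  # (3072, 1024, 32, 1)

# Convert to channels last: same shape, strides reordered so C is innermost.
x = x.contiguous(memory_format=torch.channels_last)
print(x.stride())  # (3072, 1, 96, 3)

# And back to the contiguous (NCHW) format.
x = x.contiguous(memory_format=torch.contiguous_format)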
-# Channels Last support not limited by existing models, as any model can be converted to Channels Last and propagate format through the graph as soon as input formatted correctly.
+# Channels Last support not limited by existing models, as any model can be converted to Channels Last and propagate format through the graph as soon as input formatted correctly.
 #

 # Need to be done once, after model initialization (or load)
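As the changed comment says, any model can be converted to Channels Last once, after initialization or load, and the format then propagates through the graph provided the input is formatted correctly. A minimal sketch, using a stand-in torchvision model rather than the tutorial's:

import torch
import torchvision.models as models

model = models.resnet50()  # stand-in model, not the tutorial's
model.eval()

# Need to be done once, after model initialization (or load).
model = model.to(memory_format=torch.channels_last)

# The input must be converted as well for the format to propagate.
x = torch.rand(1, 3, 224, 224).contiguous(memory_format=torch.channels_last)
out = model(x)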
 # We've explained what dynamic quantization is, what benefits it brings,
 # and you have used the ``torch.quantization.quantize_dynamic()`` function
 # to quickly quantize a simple LSTM model.
-#
+#
 # This was a fast and high level treatment of this material; for more
-# detail please continue learning with `(experimental) Dynamic Quantization on an LSTM Word Language Model Tutorial <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_.
-#
-#
+# detail please continue learning with `(beta) Dynamic Quantization on an LSTM Word Language Model Tutorial <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_.
+#
+#
 # Additional Resources
 # =========
 # Documentation
 # ~~~~~~~~~~~~~~
-#
+#
 # `Quantization API Documentaion <https://pytorch.org/docs/stable/quantization.html>`_
-#
+#
 # Tutorials
 # ~~~~~~~~~~~~~~
-#
-# `(experimental) Dynamic Quantization on BERT <https://pytorch.org/tutorials/intermediate/dynamic\_quantization\_bert\_tutorial.html>`_
-#
-# `(experimental) Dynamic Quantization on an LSTM Word Language Model <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_
-#
+#
+# `(beta) Dynamic Quantization on BERT <https://pytorch.org/tutorials/intermediate/dynamic\_quantization\_bert\_tutorial.html>`_
+#
+# `(beta) Dynamic Quantization on an LSTM Word Language Model <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_
+#
 # Blogs
 # ~~~~~~~~~~~~~~
 # ` Introduction to Quantization on PyTorch <https://pytorch.org/blog/introduction-to-quantization-on-pytorch/>`_
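The conclusion in the hunk above refers to quantizing a simple LSTM model with ``torch.quantization.quantize_dynamic()``. A self-contained sketch of that call, with illustrative layer sizes rather than the recipe's exact model:

import torch

# Illustrative sizes, not the recipe's exact model.
model = torch.nn.LSTM(input_size=20, hidden_size=20, num_layers=2)

# Dynamically quantize the LSTM's weights to int8; activations stay in
# floating point and are quantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.LSTM}, dtype=torch.qint8
)
print(quantized_model)  # the LSTM is replaced by its dynamic quantized counterpart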