torchvision/models/quantization/inception.py (4 additions, 3 deletions)

@@ -214,9 +214,10 @@ def inception_v3(
     **Important**: In contrast to the other models the inception_v3 expects tensors with a size of
     N x 3 x 299 x 299, so ensure your images are sized accordingly.
 
-    Note that quantize = True returns a quantized model with 8 bit
-    weights. Quantized models only support inference and run on CPUs.
-    GPU inference is not yet supported
+    .. note::
+        Note that ``quantize = True`` returns a quantized model with 8 bit
+        weights. Quantized models only support inference and run on CPUs.
+        GPU inference is not yet supported.
 
     Args:
         weights (:class:`~torchvision.models.quantization.Inception_V3_QuantizedWeights` or :class:`~torchvision.models.Inception_V3_Weights`, optional): The pretrained
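To illustrate the behaviour this docstring describes, here is a minimal usage sketch, assuming torchvision's multi-weight API (v0.13+). ``Inception_V3_QuantizedWeights`` and the ``quantize`` flag come from the docstring above; the ``DEFAULT`` alias and the rest of the snippet are illustrative:

```python
import torch
from torchvision.models.quantization import inception_v3, Inception_V3_QuantizedWeights

# quantize=True returns a model with 8-bit weights; inference only, CPU only.
model = inception_v3(weights=Inception_V3_QuantizedWeights.DEFAULT, quantize=True)
model.eval()

# Unlike the other models, inception_v3 expects N x 3 x 299 x 299 inputs.
x = torch.rand(1, 3, 299, 299)
with torch.inference_mode():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```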
torchvision/models/quantization/mobilenetv3.py (1 addition, 1 deletion)

@@ -200,7 +200,7 @@ def mobilenet_v3_large(
     .. note::
         Note that ``quantize = True`` returns a quantized model with 8 bit
         weights. Quantized models only support inference and run on CPUs.
-        GPU inference is not yet supported
+        GPU inference is not yet supported.
 
     Args:
         weights (:class:`~torchvision.models.quantization.MobileNet_V3_Large_QuantizedWeights` or :class:`~torchvision.models.MobileNet_V3_Large_Weights`, optional): The
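Along the same lines, a sketch for the quantized MobileNetV3-Large builder. The ``weights.transforms()`` and ``weights.meta`` accessors are assumed from the standard torchvision weight-enum API, and the random tensor stands in for a real image:

```python
import torch
from torchvision.models.quantization import mobilenet_v3_large, MobileNet_V3_Large_QuantizedWeights

weights = MobileNet_V3_Large_QuantizedWeights.DEFAULT
model = mobilenet_v3_large(weights=weights, quantize=True)
model.eval()

# The weight enum ships its own preprocessing (resize, crop, normalize).
preprocess = weights.transforms()
batch = preprocess(torch.rand(3, 256, 256)).unsqueeze(0)

with torch.inference_mode():
    probs = model(batch).softmax(dim=1)
print(weights.meta["categories"][int(probs.argmax())])
```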
torchvision/models/quantization/resnet.py (20 additions, 0 deletions)

@@ -270,6 +270,11 @@ def resnet18(
     """ResNet-18 model from
     `Deep Residual Learning for Image Recognition <https://arxiv.org/abs/1512.03385.pdf>`_
 
+    .. note::
+        Note that ``quantize = True`` returns a quantized model with 8 bit
+        weights. Quantized models only support inference and run on CPUs.
+        GPU inference is not yet supported.
+
     Args:
         weights (:class:`~torchvision.models.quantization.ResNet18_QuantizedWeights` or :class:`~torchvision.models.ResNet18_Weights`, optional): The
             pretrained weights for the model. See

@@ -314,6 +319,11 @@ def resnet50(
     """ResNet-50 model from
     `Deep Residual Learning for Image Recognition <https://arxiv.org/abs/1512.03385.pdf>`_
 
+    .. note::
+        Note that ``quantize = True`` returns a quantized model with 8 bit
+        weights. Quantized models only support inference and run on CPUs.
+        GPU inference is not yet supported.
+
     Args:
         weights (:class:`~torchvision.models.quantization.ResNet50_QuantizedWeights` or :class:`~torchvision.models.ResNet50_Weights`, optional): The
             pretrained weights for the model. See

@@ -358,6 +368,11 @@ def resnext101_32x8d(
     """ResNeXt-101 32x8d model from
     `Aggregated Residual Transformation for Deep Neural Networks <https://arxiv.org/abs/1611.05431.pdf>`_
 
+    .. note::
+        Note that ``quantize = True`` returns a quantized model with 8 bit
+        weights. Quantized models only support inference and run on CPUs.
+        GPU inference is not yet supported.
+
     Args:
         weights (:class:`~torchvision.models.quantization.ResNet101_32X8D_QuantizedWeights` or :class:`~torchvision.models.ResNet101_32X8D_Weights`, optional): The
             pretrained weights for the model. See

@@ -396,6 +411,11 @@ def resnext101_64x4d(
     """ResNeXt-101 64x4d model from
     `Aggregated Residual Transformation for Deep Neural Networks <https://arxiv.org/abs/1611.05431.pdf>`_
 
+    .. note::
+        Note that ``quantize = True`` returns a quantized model with 8 bit
+        weights. Quantized models only support inference and run on CPUs.
+        GPU inference is not yet supported.
+
     Args:
         weights (:class:`~torchvision.models.quantization.ResNet101_64X4D_QuantizedWeights` or :class:`~torchvision.models.ResNet101_64X4D_Weights`, optional): The
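Finally, a sketch showing that the same ``quantize`` flag on these builders toggles between the INT8 and the float model, using ResNet-18 as the example. ``ResNet18_QuantizedWeights`` and ``ResNet18_Weights`` appear in the docstring above; the rest of the snippet is illustrative:

```python
import torch
from torchvision.models import ResNet18_Weights
from torchvision.models.quantization import resnet18, ResNet18_QuantizedWeights

# quantize=True -> 8-bit weights, inference on CPU only.
qmodel = resnet18(weights=ResNet18_QuantizedWeights.DEFAULT, quantize=True).eval()

# quantize=False -> the regular float model from the same builder.
fmodel = resnet18(weights=ResNet18_Weights.DEFAULT, quantize=False).eval()

x = torch.rand(1, 3, 224, 224)
with torch.inference_mode():
    q_out, f_out = qmodel(x), fmodel(x)
print(q_out.shape, f_out.shape)  # both torch.Size([1, 1000])
```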