Commit fc42bf0

Fixing commands
1 parent 93d654d commit fc42bf0

3 files changed: +13, -72 lines


docs/source/models.rst

Lines changed: 1 addition & 53 deletions
@@ -98,58 +98,6 @@ You can construct a model with random weights by calling its constructor:
     convnext_large = models.convnext_large()
 
 We provide pre-trained models, using the PyTorch :mod:`torch.utils.model_zoo`.
-These can be constructed by passing ``pretrained=True``:
-
-.. code:: python
-
-    import torchvision.models as models
-    resnet18 = models.resnet18(pretrained=True)
-    alexnet = models.alexnet(pretrained=True)
-    squeezenet = models.squeezenet1_0(pretrained=True)
-    vgg16 = models.vgg16(pretrained=True)
-    densenet = models.densenet161(pretrained=True)
-    inception = models.inception_v3(pretrained=True)
-    googlenet = models.googlenet(pretrained=True)
-    shufflenet = models.shufflenet_v2_x1_0(pretrained=True)
-    mobilenet_v2 = models.mobilenet_v2(pretrained=True)
-    mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
-    mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
-    resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
-    wide_resnet50_2 = models.wide_resnet50_2(pretrained=True)
-    mnasnet = models.mnasnet1_0(pretrained=True)
-    efficientnet_b0 = models.efficientnet_b0(pretrained=True)
-    efficientnet_b1 = models.efficientnet_b1(pretrained=True)
-    efficientnet_b2 = models.efficientnet_b2(pretrained=True)
-    efficientnet_b3 = models.efficientnet_b3(pretrained=True)
-    efficientnet_b4 = models.efficientnet_b4(pretrained=True)
-    efficientnet_b5 = models.efficientnet_b5(pretrained=True)
-    efficientnet_b6 = models.efficientnet_b6(pretrained=True)
-    efficientnet_b7 = models.efficientnet_b7(pretrained=True)
-    efficientnet_v2_s = models.efficientnet_v2_s(pretrained=True)
-    efficientnet_v2_m = models.efficientnet_v2_m(pretrained=True)
-    efficientnet_v2_l = models.efficientnet_v2_l(pretrained=True)
-    regnet_y_400mf = models.regnet_y_400mf(pretrained=True)
-    regnet_y_800mf = models.regnet_y_800mf(pretrained=True)
-    regnet_y_1_6gf = models.regnet_y_1_6gf(pretrained=True)
-    regnet_y_3_2gf = models.regnet_y_3_2gf(pretrained=True)
-    regnet_y_8gf = models.regnet_y_8gf(pretrained=True)
-    regnet_y_16gf = models.regnet_y_16gf(pretrained=True)
-    regnet_y_32gf = models.regnet_y_32gf(pretrained=True)
-    regnet_x_400mf = models.regnet_x_400mf(pretrained=True)
-    regnet_x_800mf = models.regnet_x_800mf(pretrained=True)
-    regnet_x_1_6gf = models.regnet_x_1_6gf(pretrained=True)
-    regnet_x_3_2gf = models.regnet_x_3_2gf(pretrained=True)
-    regnet_x_8gf = models.regnet_x_8gf(pretrained=True)
-    regnet_x_16gf = models.regnet_x_16gf(pretrained=True)
-    regnet_x_32gf = models.regnet_x_32gf(pretrained=True)
-    vit_b_16 = models.vit_b_16(pretrained=True)
-    vit_b_32 = models.vit_b_32(pretrained=True)
-    vit_l_16 = models.vit_l_16(pretrained=True)
-    vit_l_32 = models.vit_l_32(pretrained=True)
-    convnext_tiny = models.convnext_tiny(pretrained=True)
-    convnext_small = models.convnext_small(pretrained=True)
-    convnext_base = models.convnext_base(pretrained=True)
-    convnext_large = models.convnext_large(pretrained=True)
 
 Instancing a pre-trained model will download its weights to a cache directory.
 This directory can be set using the `TORCH_HOME` environment variable. See
@@ -525,7 +473,7 @@ Obtaining a pre-trained quantized model can be done with a few lines of code:
 .. code:: python
 
     import torchvision.models as models
-    model = models.quantization.mobilenet_v2(pretrained=True, quantize=True)
+    model = models.quantization.mobilenet_v2(weights=MobileNet_V2_QuantizedWeights.IMAGENET1K_QNNPACK_V1, quantize=True)
     model.eval()
     # run the model with quantized inputs and weights
     out = model(torch.rand(1, 3, 224, 224))
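For reference, the `pretrained=True` constructors deleted above map onto the same weights-enum API used in the updated quantization example. A minimal sketch, assuming torchvision 0.13+ where the per-model weight enums (such as `ResNet18_Weights`) are available:

```python
import torchvision.models as models
from torchvision.models import ResNet18_Weights

# Old, removed style: models.resnet18(pretrained=True)
# New style: name the weights explicitly, or ask for the best available set.
resnet18 = models.resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
resnet18_default = models.resnet18(weights="DEFAULT")
```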

references/classification/README.md

Lines changed: 10 additions & 17 deletions
@@ -43,7 +43,8 @@ Since it expects tensors with a size of N x 3 x 299 x 299, to validate the model
 
 ```
 torchrun --nproc_per_node=8 train.py --model inception_v3\
-  --val-resize-size 342 --val-crop-size 299 --train-crop-size 299 --test-only --pretrained
+  --val-resize-size 342 --val-crop-size 299 --train-crop-size 299\
+  --test-only --weights Inception_V3_Weights.IMAGENET1K_V1
 ```
 
 ### ResNet
@@ -96,22 +97,14 @@ The weights of the B5-B7 variants are ported from Luke Melas' [EfficientNet-PyTo
 
 All models were trained using Bicubic interpolation and each have custom crop and resize sizes. To validate the models use the following commands:
 ```
-torchrun --nproc_per_node=8 train.py --model efficientnet_b0 --interpolation bicubic\
-  --val-resize-size 256 --val-crop-size 224 --train-crop-size 224 --test-only --pretrained
-torchrun --nproc_per_node=8 train.py --model efficientnet_b1 --interpolation bicubic\
-  --val-resize-size 256 --val-crop-size 240 --train-crop-size 240 --test-only --pretrained
-torchrun --nproc_per_node=8 train.py --model efficientnet_b2 --interpolation bicubic\
-  --val-resize-size 288 --val-crop-size 288 --train-crop-size 288 --test-only --pretrained
-torchrun --nproc_per_node=8 train.py --model efficientnet_b3 --interpolation bicubic\
-  --val-resize-size 320 --val-crop-size 300 --train-crop-size 300 --test-only --pretrained
-torchrun --nproc_per_node=8 train.py --model efficientnet_b4 --interpolation bicubic\
-  --val-resize-size 384 --val-crop-size 380 --train-crop-size 380 --test-only --pretrained
-torchrun --nproc_per_node=8 train.py --model efficientnet_b5 --interpolation bicubic\
-  --val-resize-size 456 --val-crop-size 456 --train-crop-size 456 --test-only --pretrained
-torchrun --nproc_per_node=8 train.py --model efficientnet_b6 --interpolation bicubic\
-  --val-resize-size 528 --val-crop-size 528 --train-crop-size 528 --test-only --pretrained
-torchrun --nproc_per_node=8 train.py --model efficientnet_b7 --interpolation bicubic\
-  --val-resize-size 600 --val-crop-size 600 --train-crop-size 600 --test-only --pretrained
+torchrun --nproc_per_node=8 train.py --model efficientnet_b0 --test-only --weights EfficientNet_B0_Weights.IMAGENET1K_V1
+torchrun --nproc_per_node=8 train.py --model efficientnet_b1 --test-only --weights EfficientNet_B1_Weights.IMAGENET1K_V1
+torchrun --nproc_per_node=8 train.py --model efficientnet_b2 --test-only --weights EfficientNet_B2_Weights.IMAGENET1K_V1
+torchrun --nproc_per_node=8 train.py --model efficientnet_b3 --test-only --weights EfficientNet_B3_Weights.IMAGENET1K_V1
+torchrun --nproc_per_node=8 train.py --model efficientnet_b4 --test-only --weights EfficientNet_B4_Weights.IMAGENET1K_V1
+torchrun --nproc_per_node=8 train.py --model efficientnet_b5 --test-only --weights EfficientNet_B5_Weights.IMAGENET1K_V1
+torchrun --nproc_per_node=8 train.py --model efficientnet_b6 --test-only --weights EfficientNet_B6_Weights.IMAGENET1K_V1
+torchrun --nproc_per_node=8 train.py --model efficientnet_b7 --test-only --weights EfficientNet_B7_Weights.IMAGENET1K_V1
 ```
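The explicit `--interpolation`, `--val-resize-size`, and `--val-crop-size` flags can be dropped here presumably because each weights enum ships with its own evaluation preset. A minimal sketch of inspecting that preset, assuming torchvision 0.13+ (the values it reports are taken from the enum itself, not from this commit):

```python
from torchvision.models import EfficientNet_B0_Weights

# Each weights enum carries its evaluation preprocessing (resize/crop sizes,
# interpolation, normalization) plus metadata, so explicit flags are not
# strictly needed when the bundled preset is used.
weights = EfficientNet_B0_Weights.IMAGENET1K_V1
preprocess = weights.transforms()
print(preprocess)                      # ImageClassification preset for this checkpoint
print(weights.meta["categories"][:5])  # metadata travels with the enum as well
```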

references/optical_flow/README.md

Lines changed: 2 additions & 2 deletions
@@ -51,7 +51,7 @@ torchrun --nproc_per_node 8 --nnodes 1 train.py \
 ### Evaluation
 
 ```
-torchrun --nproc_per_node 1 --nnodes 1 train.py --val-dataset sintel --batch-size 1 --dataset-root $dataset_root --model raft_large --pretrained
+torchrun --nproc_per_node 1 --nnodes 1 train.py --val-dataset sintel --batch-size 1 --dataset-root $dataset_root --model raft_large --weights Raft_Large_Weights.C_T_SKHT_V2
 ```
 
 This should give an epe of about 1.3822 on the clean pass and 2.7161 on the
@@ -67,6 +67,6 @@ Sintel val final epe: 2.7161 1px: 0.8528 3px: 0.9204 5px: 0.9392 per_image_epe:
 You can also evaluate on Kitti train:
 
 ```
-torchrun --nproc_per_node 1 --nnodes 1 train.py --val-dataset kitti --batch-size 1 --dataset-root $dataset_root --model raft_large --pretrained
+torchrun --nproc_per_node 1 --nnodes 1 train.py --val-dataset kitti --batch-size 1 --dataset-root $dataset_root --model raft_large --weights Raft_Large_Weights.C_T_SKHT_V2
 Kitti val epe: 4.7968 1px: 0.6388 3px: 0.8197 5px: 0.8661 per_image_epe: 4.5118 f1: 16.0679
 ```
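Outside the reference script, the same checkpoint can be loaded directly through the model builder. A minimal sketch, assuming torchvision 0.13+ and random tensors standing in for real image pairs:

```python
import torch
from torchvision.models.optical_flow import Raft_Large_Weights, raft_large

weights = Raft_Large_Weights.C_T_SKHT_V2
model = raft_large(weights=weights).eval()

# The enum also bundles the expected preprocessing for image pairs.
transforms = weights.transforms()
img1 = torch.rand(1, 3, 360, 640)  # dummy frames; H and W must be divisible by 8
img2 = torch.rand(1, 3, 360, 640)
img1, img2 = transforms(img1, img2)

with torch.no_grad():
    flow_preds = model(img1, img2)  # list of flow estimates; the last is the final prediction
```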
