Commit 1d5752f

Merge branch 'main' into malfet/build-on-m1
2 parents: 7a20b80 + 6aaa2b0

29 files changed: +330 / -129 lines

android/gradle.properties

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 ABI_FILTERS=armeabi-v7a,arm64-v8a,x86,x86_64
 
-VERSION_NAME=0.13.0-SNAPSHOT
+VERSION_NAME=0.14.0-SNAPSHOT
 GROUP=org.pytorch
 MAVEN_GROUP=org.pytorch
 SONATYPE_STAGING_PROFILE=orgpytorch

docs/source/models.rst

Lines changed: 1 addition & 1 deletion
@@ -471,7 +471,7 @@ Here is an example of how to use the pre-trained video classification models:
 from torchvision.io.video import read_video
 from torchvision.models.video import r3d_18, R3D_18_Weights
 
-vid, _, _ = read_video("test/assets/videos/v_SoccerJuggling_g23_c01.avi")
+vid, _, _ = read_video("test/assets/videos/v_SoccerJuggling_g23_c01.avi", output_format="TCHW")
 vid = vid[:32]  # optionally shorten duration
 
 # Step 1: Initialize model with the best available weights

gallery/plot_optical_flow.py

Lines changed: 1 addition & 2 deletions
@@ -72,8 +72,7 @@ def plot(imgs, **imshow_kwargs):
 # single model input.
 
 from torchvision.io import read_video
-frames, _, _ = read_video(str(video_path))
-frames = frames.permute(0, 3, 1, 2)  # (N, H, W, C) -> (N, C, H, W)
+frames, _, _ = read_video(str(video_path), output_format="TCHW")
 
 img1_batch = torch.stack([frames[100], frames[150]])
 img2_batch = torch.stack([frames[101], frames[151]])
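For reference, both documentation changes above rely on the output_format argument of torchvision.io.read_video, which returns frames in channels-first (T, C, H, W) layout instead of the default (T, H, W, C), making the manual permute unnecessary. A minimal sketch of the two equivalent forms (the video path is a placeholder):

from torchvision.io import read_video

# Default layout is (T, H, W, C), so a permute was previously required.
frames, _, _ = read_video("video.mp4")                        # placeholder path
frames = frames.permute(0, 3, 1, 2)                           # (T, H, W, C) -> (T, C, H, W)

# Equivalent call after this commit: ask for channels-first output directly.
frames, _, _ = read_video("video.mp4", output_format="TCHW")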

packaging/build_cmake.sh

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 . "$script_dir/pkg_helpers.bash"
 
 export BUILD_TYPE=conda
-setup_env 0.13.0
+setup_env 0.14.0
 export SOURCE_ROOT_DIR="$PWD"
 setup_conda_pytorch_constraint
 setup_conda_cudatoolkit_plain_constraint

packaging/build_conda.sh

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 . "$script_dir/pkg_helpers.bash"
 
 export BUILD_TYPE=conda
-setup_env 0.13.0
+setup_env 0.14.0
 export SOURCE_ROOT_DIR="$PWD"
 setup_conda_pytorch_constraint
 setup_conda_cudatoolkit_constraint

packaging/build_wheel.sh

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 . "$script_dir/pkg_helpers.bash"
 
 export BUILD_TYPE=wheel
-setup_env 0.13.0
+setup_env 0.14.0
 setup_wheel_python
 pip_install numpy pyyaml future ninja
 pip_install --upgrade setuptools

references/video_classification/train.py

Lines changed: 2 additions & 0 deletions
@@ -157,6 +157,7 @@ def main(args):
             "avi",
             "mp4",
         ),
+        output_format="TCHW",
     )
     if args.cache_dataset:
         print(f"Saving dataset_train to {cache_path}")
@@ -193,6 +194,7 @@ def main(args):
             "avi",
             "mp4",
         ),
+        output_format="TCHW",
     )
     if args.cache_dataset:
         print(f"Saving dataset_test to {cache_path}")
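The two hunks above pass output_format to a video dataset constructor whose opening call is not shown in the diff; based on the extensions tuple and the train.py context it appears to be torchvision.datasets.Kinetics, but that is an assumption. A rough sketch of the call under that assumption, with placeholder paths and clip settings:

import torchvision

# Assumed reconstruction; every argument except extensions and output_format
# is an illustrative placeholder, not taken from the diff.
dataset_train = torchvision.datasets.Kinetics(
    "/path/to/kinetics",          # placeholder dataset root
    frames_per_clip=16,
    split="train",
    extensions=("avi", "mp4"),
    output_format="TCHW",         # clips returned as (T, C, H, W)
)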
3 binary files changed (939 Bytes each): binary files not shown
