Commit ddf2f8b

Merge branch 'master' into krovatkin/cpp_export

2 parents 7c62708 + 3fde079, commit ddf2f8b

24 files changed: +711 −759 lines

.jenkins/build.sh

Lines changed: 9 additions & 4 deletions

@@ -16,13 +16,18 @@ rm -rf src
 pip install -r $DIR/../requirements.txt
 
 export PATH=/opt/conda/bin:$PATH
-conda install -y sphinx==1.8.2 pandas
+pip install sphinx==1.8.2 pandas
+
+# install awscli
+# pip uninstall awscli
+# pip install awscli==1.16.35
+
 # PyTorch Theme
 rm -rf src
 pip install -e git+git://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme
 # pillow >= 4.2 will throw error when trying to write mode RGBA as JPEG,
 # this is a workaround to the issue.
-pip install sphinx-gallery tqdm matplotlib ipython pillow==4.1.1
+pip install sphinx-gallery==0.3.1 tqdm matplotlib ipython pillow==4.1.1
 
 # Install torchaudio from source
 git clone https://github.com/pytorch/audio --quiet
@@ -36,9 +41,9 @@ aws configure set default.s3.multipart_threshold 5120MB
 export NUM_WORKERS=20
 if [[ "${JOB_BASE_NAME}" == *worker_* ]]; then
   # Step 1: Remove runnable code from tutorials that are not supposed to be run
-  python $DIR/remove_runnable_code.py beginner_source/aws_distributed_training_tutorial.py beginner_source/aws_distributed_training_tutorial.py
+  python $DIR/remove_runnable_code.py beginner_source/aws_distributed_training_tutorial.py beginner_source/aws_distributed_training_tutorial.py || true
   # TODO: Fix bugs in these tutorials to make them runnable again
-  python $DIR/remove_runnable_code.py beginner_source/audio_classifier_tutorial.py beginner_source/audio_classifier_tutorial.py
+  python $DIR/remove_runnable_code.py beginner_source/audio_classifier_tutorial.py beginner_source/audio_classifier_tutorial.py || true
 
   # Step 2: Keep certain tutorials based on file count, and remove runnable code in all other tutorials
   # IMPORTANT NOTE: We assume that each tutorial has a UNIQUE filename.
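The `|| true` suffix added to the `remove_runnable_code.py` invocations makes a failing command non-fatal: if the build script runs under bash's `set -e` (common for CI scripts), any unguarded non-zero exit aborts the whole job, while `cmd || true` always yields exit status 0. A minimal sketch of that behavior (the commands here are stand-ins, not from the CI script):

```shell
#!/usr/bin/env bash
set -e                    # abort the script on any command that exits non-zero

false || true             # a failing command, neutralized by "|| true"
echo "build continues"    # still reached because the failure was swallowed
```

Without `|| true`, the `false` above would terminate the script before the `echo`.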

_static/imagenet_class_index.json

Lines changed: 1 addition & 0 deletions (large diff not rendered)

_static/img/cat_output1.png

Binary file changed: 317 KB → 49.5 KB

_static/img/flask.png

Binary image file, 173 KB

_static/img/sample_file.jpeg

Binary image file, 43.3 KB

advanced_source/README.txt

Lines changed: 3 additions & 3 deletions

@@ -13,6 +13,6 @@ Advanced Tutorials
 	Custom C Extensions for PyTorch
 	https://pytorch.org/tutorials/advanced/c_extension.html
 
-4. super_resolution_with_caffe2.py
-	Transfering a Model from PyTorch to Caffe2 and Mobile using ONNX
-	https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html
+4. super_resolution_with_onnxruntime.py
+	Exporting a Model from PyTorch to ONNX and Running it using ONNXRuntime
+	https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html

advanced_source/cpp_extension.rst

Lines changed: 9 additions & 8 deletions

@@ -147,23 +147,22 @@ For the "ahead of time" flavor, we build our C++ extension by writing a
 ``setup.py`` script that uses setuptools to compile our C++ code. For the LLTM, it
 looks as simple as this::
 
-    from setuptools import setup
-    from torch.utils.cpp_extension import CppExtension, BuildExtension
+    from setuptools import setup, Extension
+    from torch.utils import cpp_extension
 
     setup(name='lltm_cpp',
-          ext_modules=[CppExtension('lltm', ['lltm.cpp'])],
-          cmdclass={'build_ext': BuildExtension})
-
+          ext_modules=[cpp_extension.CppExtension('lltm_cpp', ['lltm.cpp'])],
+          cmdclass={'build_ext': cpp_extension.BuildExtension})
 
 In this code, :class:`CppExtension` is a convenience wrapper around
 :class:`setuptools.Extension` that passes the correct include paths and sets
 the language of the extension to C++. The equivalent vanilla :mod:`setuptools`
 code would simply be::
 
-    setuptools.Extension(
+    Extension(
        name='lltm_cpp',
        sources=['lltm.cpp'],
-       include_dirs=torch.utils.cpp_extension.include_paths(),
+       include_dirs=cpp_extension.include_paths(),
        language='c++')
 
 :class:`BuildExtension` performs a number of required configuration steps and
@@ -413,7 +412,7 @@ see::
 If we call ``help()`` on the function or module, we can see that its signature
 matches our C++ code::
 
-    In[4] help(lltm.forward)
+    In[4] help(lltm_cpp.forward)
     forward(...) method of builtins.PyCapsule instance
     forward(arg0: torch::Tensor, arg1: torch::Tensor, arg2: torch::Tensor, arg3: torch::Tensor, arg4: torch::Tensor) -> List[torch::Tensor]
 
@@ -473,6 +472,8 @@ small benchmark to see how much performance we gained from rewriting our op in
 C++. We'll run the LLTM forwards and backwards a few times and measure the
 duration::
 
+    import time
+
     import torch
 
     batch_size = 16
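The `import time` added to the benchmark snippet above supports a simple wall-clock measurement of the op. A torch-free sketch of the same timing pattern (the `workload` function here is a stand-in for the LLTM forward/backward pass, not part of the tutorial):

```python
import time

def workload():
    # stand-in for the operation being benchmarked
    return sum(i * i for i in range(10_000))

runs = 100
forward_time = 0.0
for _ in range(runs):
    start = time.time()                  # wall-clock timestamp before the op
    workload()
    forward_time += time.time() - start  # accumulate elapsed seconds

print(f"Forward: {forward_time * 1e6 / runs:.3f} us/run")
```

Averaging over many runs, as the tutorial does, smooths out scheduler noise in the per-call timings.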

advanced_source/numpy_extensions_tutorial.py

Lines changed: 5 additions & 4 deletions

@@ -35,13 +35,14 @@
 
 
 class BadFFTFunction(Function):
-
-    def forward(self, input):
+    @staticmethod
+    def forward(ctx, input):
         numpy_input = input.detach().numpy()
         result = abs(rfft2(numpy_input))
         return input.new(result)
 
-    def backward(self, grad_output):
+    @staticmethod
+    def backward(ctx, grad_output):
         numpy_go = grad_output.numpy()
         result = irfft2(numpy_go)
         return grad_output.new(result)
@@ -51,7 +52,7 @@ def backward(self, grad_output):
 
 
 def incorrect_fft(input):
-    return BadFFTFunction()(input)
+    return BadFFTFunction.apply(input)
 
 ###############################################################
 # **Example usage of the created layer:**
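The rewrite above moves `BadFFTFunction` to the modern static-method `torch.autograd.Function` API: `forward`/`backward` take a context object `ctx` instead of `self`, and the op is invoked through `.apply()` rather than by instantiating the class. The numpy round trip it wraps can be sketched without torch (shapes only; taking `abs()` in the forward discards phase, which is why the tutorial calls the op "incorrect"):

```python
import numpy as np
from numpy.fft import rfft2, irfft2

x = np.random.randn(8, 8)

# "forward": magnitude of the real 2-D FFT, as in BadFFTFunction.forward
out = np.abs(rfft2(x))
print(out.shape)      # (8, 5): rfft2 keeps only the non-negative frequencies

# "backward": inverse real FFT of an upstream gradient of the same shape
grad_out = np.ones_like(out)
grad_in = irfft2(grad_out, s=x.shape)
print(grad_in.shape)  # (8, 8): back to the input shape
```

The `s=x.shape` argument pins the output size of the inverse transform, which is otherwise ambiguous for real FFTs of even and odd lengths.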
