Commit c6ec282

Author: Lara

Merge branch 'master' of https://github.com/lara-hdr/tutorials into lahaidar/ort_tutorial

2 parents fc43498 + d13664e

8 files changed (+29, -23 lines)

.jenkins/build.sh

Lines changed: 14 additions & 9 deletions
@@ -16,29 +16,34 @@ rm -rf src
 pip install -r $DIR/../requirements.txt
 
 export PATH=/opt/conda/bin:$PATH
-conda install -y sphinx==1.8.2 pandas
+pip install sphinx==1.8.2 pandas
+
+# install awscli
+# pip uninstall awscli
+# pip install awscli==1.16.35
+
 # PyTorch Theme
 rm -rf src
 pip install -e git+git://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme
 # pillow >= 4.2 will throw error when trying to write mode RGBA as JPEG,
 # this is a workaround to the issue.
-pip install sphinx-gallery tqdm matplotlib ipython pillow==4.1.1
+pip install sphinx-gallery==0.3.1 tqdm matplotlib ipython pillow==4.1.1
 
 # Install torchaudio from source
-git clone https://github.com/pytorch/audio --quiet
-pushd audio
-python setup.py install
-popd
+# git clone https://github.com/pytorch/audio --quiet
+# pushd audio
+# python setup.py install
+# popd
 
 aws configure set default.s3.multipart_threshold 5120MB
 
 # Decide whether to parallelize tutorial builds, based on $JOB_BASE_NAME
 export NUM_WORKERS=20
 if [[ "${JOB_BASE_NAME}" == *worker_* ]]; then
   # Step 1: Remove runnable code from tutorials that are not supposed to be run
-  python $DIR/remove_runnable_code.py beginner_source/aws_distributed_training_tutorial.py beginner_source/aws_distributed_training_tutorial.py
+  python $DIR/remove_runnable_code.py beginner_source/aws_distributed_training_tutorial.py beginner_source/aws_distributed_training_tutorial.py || true
   # TODO: Fix bugs in these tutorials to make them runnable again
-  python $DIR/remove_runnable_code.py beginner_source/audio_classifier_tutorial.py beginner_source/audio_classifier_tutorial.py
+  python $DIR/remove_runnable_code.py beginner_source/audio_classifier_tutorial.py beginner_source/audio_classifier_tutorial.py || true
 
   # Step 2: Keep certain tutorials based on file count, and remove runnable code in all other tutorials
   # IMPORTANT NOTE: We assume that each tutorial has a UNIQUE filename.
@@ -180,4 +185,4 @@ else
 fi
 
 rm -rf vision
-rm -rf audio
+# rm -rf audio

advanced_source/numpy_extensions_tutorial.py

Lines changed: 5 additions & 4 deletions
@@ -35,13 +35,14 @@
 
 
 class BadFFTFunction(Function):
-
-    def forward(self, input):
+    @staticmethod
+    def forward(ctx, input):
         numpy_input = input.detach().numpy()
         result = abs(rfft2(numpy_input))
         return input.new(result)
 
-    def backward(self, grad_output):
+    @staticmethod
+    def backward(ctx, grad_output):
         numpy_go = grad_output.numpy()
         result = irfft2(numpy_go)
         return grad_output.new(result)
@@ -51,7 +52,7 @@ def backward(self, grad_output):
 
 
 def incorrect_fft(input):
-    return BadFFTFunction()(input)
+    return BadFFTFunction.apply(input)
 
 ###############################################################
 # **Example usage of the created layer:**
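This diff moves BadFFTFunction from the legacy instance-style autograd API, where the Function was instantiated and called directly, to the static-method API, where forward and backward receive a ctx object and the Function is invoked through .apply(). For reference, a minimal self-contained sketch of the same pattern (the Square function below is a hypothetical example, not part of this commit):

import torch
from torch.autograd import Function

class Square(Function):
    # New-style custom Function: forward/backward are @staticmethods
    # that receive a context object (ctx) instead of self.
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)      # stash tensors needed in backward
        return input * input

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        return 2 * input * grad_output    # d(x**2)/dx = 2x

x = torch.randn(3, requires_grad=True)
y = Square.apply(x).sum()                 # call .apply(); never instantiate
y.backward()
print(torch.allclose(x.grad, 2 * x))      # True

The ctx object replaces self for passing state between forward and backward, which is what lets PyTorch treat the Function as stateless and call it via the static .apply().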

beginner_source/PyTorch Cheat.md

Lines changed: 2 additions & 2 deletions
@@ -26,13 +26,13 @@ from torch.jit import script, trace # hybrid frontend decorator and tracin
 ```
 See [autograd](https://pytorch.org/docs/stable/autograd.html), [nn](https://pytorch.org/docs/stable/nn.html), [functional](https://pytorch.org/docs/stable/nn.html#torch-nn-functional) and [optim](https://pytorch.org/docs/stable/optim.html)
 
-### Hybrid frontend
+### Torchscript and JIT
 
 ```
 torch.jit.trace() # takes your module or function and an example data input, and traces the computational steps that the data encounters as it progresses through the model
 @script # decorator used to indicate data-dependent control flow within the code being traced
 ```
-See [hybrid frontend](https://pytorch.org/docs/stable/hybridfrontend)
+See [Torchscript](https://pytorch.org/docs/stable/jit.html)
 
 ### ONNX
 
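The renamed section covers the two TorchScript entry points, torch.jit.trace and torch.jit.script. A rough sketch of the difference between them (the toy module and function are hypothetical, not taken from the cheat sheet):

import torch

class Net(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

# trace: run the module once on example input and record the ops executed
traced = torch.jit.trace(Net(), torch.randn(2, 3))

# script: compile the source directly, preserving data-dependent control flow
@torch.jit.script
def clip_negative_sum(x):
    if bool(x.sum() > 0):   # this branch survives compilation, unlike tracing
        return x
    return torch.zeros_like(x)

print(traced(torch.randn(2, 3)).shape)
print(clip_negative_sum(torch.randn(4)))

Tracing only records the path taken on the example input, which is why the cheat sheet reserves @script for code whose control flow depends on the data.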

beginner_source/blitz/autograd_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -185,5 +185,5 @@
 ###############################################################
 # **Read Later:**
 #
-# Documentation of ``autograd`` and ``Function`` is at
-# https://pytorch.org/docs/autograd
+# Document about ``autograd.Function`` is at
+# https://pytorch.org/docs/stable/autograd.html#function

beginner_source/blitz/cifar10_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -235,7 +235,7 @@ def forward(self, x):
         100 * correct / total))
 
 ########################################################################
-# That looks waaay better than chance, which is 10% accuracy (randomly picking
+# That looks way better than chance, which is 10% accuracy (randomly picking
 # a class out of 10 classes).
 # Seems like the network learnt something.
 #
@@ -298,7 +298,7 @@ def forward(self, x):
 # inputs, labels = data[0].to(device), data[1].to(device)
 #
 # Why dont I notice MASSIVE speedup compared to CPU? Because your network
-# is realllly small.
+# is really small.
 #
 # **Exercise:** Try increasing the width of your network (argument 2 of
 # the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –

beginner_source/dcgan_faces_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@
 # :math:`D(x)` is the discriminator network which outputs the (scalar)
 # probability that :math:`x` came from training data rather than the
 # generator. Here, since we are dealing with images the input to
-# :math:`D(x)` is an image of HWC size 3x64x64. Intuitively, :math:`D(x)`
+# :math:`D(x)` is an image of CHW size 3x64x64. Intuitively, :math:`D(x)`
 # should be HIGH when :math:`x` comes from training data and LOW when
 # :math:`x` comes from the generator. :math:`D(x)` can also be thought of
 # as a traditional binary classifier.
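The HWC-to-CHW correction matters because PyTorch convolutions consume channels-first tensors, so a 3x64x64 input is (channels, height, width). A quick check (a sketch under that assumption, not code from the tutorial):

import torch

img = torch.randn(1, 3, 64, 64)   # one RGB image in NCHW layout
disc_in = torch.nn.Conv2d(in_channels=3, out_channels=8, kernel_size=4)
print(disc_in(img).shape)          # torch.Size([1, 8, 61, 61])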

beginner_source/ptcheat.rst

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ See `autograd <https://pytorch.org/docs/stable/autograd.html>`__,
 `functional <https://pytorch.org/docs/stable/nn.html#torch-nn-functional>`__
 and `optim <https://pytorch.org/docs/stable/optim.html>`__
 
-Hybrid frontend
+Torchscript and JIT
 ---------------
 
 .. code-block:: python
@@ -41,7 +41,7 @@ Hybrid frontend
    @script   # decorator used to indicate data-dependent
              # control flow within the code being traced
 
-See `hybrid frontend <https://pytorch.org/docs/stable/hybridfrontend>`__
+See `Torchscript <https://pytorch.org/docs/stable/jit.html>`__
 
 ONNX
 ----

requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Refer to ./jenkins/build.sh for tutorial build instructions
 
 sphinx
-sphinx-gallery
+sphinx-gallery==0.3.1
 tqdm
 numpy
 matplotlib
