Conversation

@wanchaol
Contributor

No description provided.

@netlify

netlify bot commented Jul 19, 2019

Deploy preview for pytorch-tutorials-preview ready!

Built with commit 9fa7632

https://deploy-preview-565--pytorch-tutorials-preview.netlify.com

Contributor

@Chillee Chillee left a comment


Just some notes/typos.

# training.
#
# What is the Hybrid Frontend?
# What is the TorchScript?

What is Torchscript?

# :align: center
# :alt: workflow
# control flow, a **scripting** mechanism is provided. The
# ``torch.jit.script`` function takes module or function and does not

The torch.jit.script decorator takes a module or function...

Contributor Author

torch.jit.script right now does not necessarily have to be a decorator, right?

Contributor Author

Like, you can call it as `scripted_model = torch.jit.script(model)`, which is not a decorator.
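A minimal sketch of the point above (assuming a recent PyTorch with `torch.jit`): `torch.jit.script` works both as a plain function call on a module instance and as a decorator on a free function. `MyModule` and `flip_if_nonpositive` are hypothetical names for illustration.

```python
import torch

class MyModule(torch.nn.Module):
    def forward(self, x):
        # Data-dependent control flow; scripting preserves both branches.
        if x.sum() > 0:
            return x * 2
        return -x

# Call form: no decorator and no example inputs needed.
scripted_model = torch.jit.script(MyModule())

# Decorator form works too, on a free function.
@torch.jit.script
def flip_if_nonpositive(x):
    if bool(x.sum() > 0):
        return x
    return -x
```

Both forms produce a `ScriptModule`/`ScriptFunction` that can be saved and run without Python.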

# ``torch.jit.script`` function takes module or function and does not
# requires example inputs. Scripting then explicitly converts the module
# or function code to TorchScript, including all possible control flow
# routes. The one caveat with using scripting is that it only supports

probably "all control flow" instead of "all possible control flow routes"?


Also, I think "One caveat with using scripting" is a bit more natural than "The one caveat..."
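For context on why the quoted passage contrasts scripting with tracing, here is a quick sketch (assuming `torch.jit.trace` and `torch.jit.script` as in current PyTorch): tracing records only the single control-flow route taken by the example input, while scripting compiles the function itself, so all control flow survives.

```python
import torch

def f(x):
    if x.sum() > 0:
        return x * 2
    return -x

# Tracing bakes in whichever branch the example input takes
# (PyTorch emits a TracerWarning about this).
traced = torch.jit.trace(f, torch.ones(3))

# Scripting compiles the source, so both branches are kept.
scripted = torch.jit.script(f)

neg = -torch.ones(3)
# traced(neg) still runs the x * 2 branch recorded at trace time,
# while scripted(neg) follows the real control flow and returns -neg.
```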

# ~~~~~~~~~~~~~~~~~~~~~~
#
# Similarly to the ``EncoderRNN``, this module does not contain any
# Similarly to the ``EncoderRNN```, this module does not contain any

Is the extra ` on purpose?

@wanchaol wanchaol force-pushed the seq2seq branch 2 times, most recently from ecbeae4 to 88bda68 Compare July 19, 2019 22:53
# -*- coding: utf-8 -*-
"""
Deploying a Seq2Seq Model with the Hybrid Frontend
Deploying a Seq2Seq Model with the TorchScript

with the Torchscript

@suo
Member

suo commented Aug 7, 2019

This looks good to me, but needs a rebase @wanchaol

@wanchaol wanchaol force-pushed the seq2seq branch 3 times, most recently from a2e366c to f5bce02 Compare August 7, 2019 20:03
brianjo and others added 19 commits August 7, 2019 15:01
* Update requirements.txt

Pinning sphinx gallery. Step one.
Please don't merge.

* Update build.sh

removed audio, audio sample, added || true to line 39.

* Update build.sh

uninstall/install  awscli

* Update build.sh

Trying to unpin Sphinx from ==1.8.2.

* Update build.sh

Pinning sphinx-gallery.

* Update build.sh

repinning sphinx, trying pip install

* Update build.sh

removed -y option

* Update build.sh

* Update build.sh
Updated the Hybrid frontend section to link to JIT/Torchscript.
Some spelling errors fixed.

"That looks waaay better than chance"
"Because your network is realllly small"
Corrected to CHW not HWC (3x36x36)
Updated some code to remove eval in the code.
Removed old ONNX tutorial from TOC.
Added torchaudio from source.
Missed a #.
Ubuntu and others added 26 commits August 7, 2019 15:03
updating for no version number
If you don't add this code, you will get an error when you resume training from 4000_checkpoint.tar and call:

```
encoder_optimizer.step()  
```


Error message:

```
exp_avg.mul_(beta1).add_(1 - beta1, grad)
RuntimeError: expected backend CPU and dtype Float but got backend CUDA and dtype Float
```


Fix it: pytorch/pytorch#2830

```
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)  # missing line from original code
        labels = labels.to(device)  # missing line from original code
        images = images.reshape(-1, 28 * 28)
        out = model(images)
        _, predicted = torch.max(out.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
```
The fix from pytorch/pytorch#2830 is to move the optimizer state onto the GPU after loading the checkpoint:
```
model = Model()
model.load_state_dict(checkpoint['model'])
model.cuda()
optimizer = optim.Adam(model.parameters())
optimizer.load_state_dict(checkpoint['optimizer'])
for state in optimizer.state.values():
    for k, v in state.items():
        if isinstance(v, torch.Tensor):
            state[k] = v.cuda()
```
Tensorboard dependency.
@wanchaol
Contributor Author

wanchaol commented Aug 7, 2019

close this in favor of #597

@wanchaol wanchaol closed this Aug 7, 2019