Update Deploy Seq2Seq Tutorial with New TorchScript API #565
Conversation
Deploy preview for pytorch-tutorials-preview ready! Built with commit 9fa7632: https://deploy-preview-565--pytorch-tutorials-preview.netlify.com
Chillee left a comment:
Just some notes/typos.
```
  # training.
  #
- # What is the Hybrid Frontend?
+ # What is the TorchScript?
```
What is Torchscript?
```
  # :align: center
  # :alt: workflow
  # control flow, a **scripting** mechanism is provided. The
  # ``torch.jit.script`` function takes module or function and does not
```
The torch.jit.script decorator takes a module or function...
torch.jit.script doesn't necessarily have to be a decorator right now, right?
Like, you can call it as `scripted_model = torch.jit.script(model)`, which is not a decorator.
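As an aside, the two spellings are equivalent plain Python: decorator syntax is just sugar for calling the function on the thing it decorates. A minimal pure-Python sketch of that equivalence (the `script` function below is a hypothetical stand-in, not the real `torch.jit.script`):

```python
def script(fn):
    # Hypothetical stand-in for torch.jit.script: it just tags the
    # callable so we can see both spellings go through the same code.
    fn.scripted = True
    return fn

# Spelling 1: used as a decorator.
@script
def add(a, b):
    return a + b

# Spelling 2: called as a plain function on an existing callable.
def mul(a, b):
    return a * b

scripted_mul = script(mul)

print(add.scripted, scripted_mul.scripted)  # both True
```

Either way the callable passes through `script` exactly once, which is why `torch.jit.script` can be documented as a function and still be usable as a decorator.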
```
  # ``torch.jit.script`` function takes module or function and does not
  # requires example inputs. Scripting then explicitly converts the module
  # or function code to TorchScript, including all possible control flow
  # routes. The one caveat with using scripting is that it only supports
```
probably "all control flow" instead of "all possible control flow routes"?
Also, I think "One caveat with using scripting" is a bit more natural than "The one caveat..."
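For context on the passage under review, here is a small sketch of what scripting preserves: a data-dependent branch survives compilation, whereas tracing would only record the path taken by the example input. This assumes PyTorch is installed; it is an illustration, not part of the tutorial diff:

```python
import torch

@torch.jit.script
def flip_negative(x):
    # Data-dependent control flow: scripting compiles both branches,
    # so the result depends on the actual input at call time.
    if bool(x.sum() > 0):
        return x
    return -x

print(flip_negative(torch.tensor([1.0, 2.0])))    # positive sum: returned unchanged
print(flip_negative(torch.tensor([-1.0, -2.0])))  # negative sum: negated
```

Had this been traced with a positive-sum example input, the `if` would have been baked in and the second call would silently return the wrong branch.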
```
  # ~~~~~~~~~~~~~~~~~~~~~~
  #
- # Similarly to the ``EncoderRNN``, this module does not contain any
+ # Similarly to the ``EncoderRNN```, this module does not contain any
```
Is the extra ` on purpose?
Force-pushed from ecbeae4 to 88bda68.
```
  # -*- coding: utf-8 -*-
  """
- Deploying a Seq2Seq Model with the Hybrid Frontend
+ Deploying a Seq2Seq Model with the TorchScript
```
with the Torchscript
This looks good to me, but needs a rebase @wanchaol
Force-pushed from a2e366c to f5bce02.
* Update requirements.txt: pinning sphinx gallery. Step one. Please don't merge.
* Update build.sh: removed audio, audio sample, added || true to line 39.
* Update build.sh: uninstall/install awscli
* Update build.sh: trying to unpin Sphinx from ==1.8.2.
* Update build.sh: pinning sphinx-gallery.
* Update build.sh: repinning sphinx, trying pip install
* Update build.sh: removed -y option
* Update build.sh
* Update build.sh
Updated the Hybrid frontend section to link to JIT/Torchscript.
Some spelling errors fixed: "That looks waaay better than chance", "Because your network is realllly small".
Corrected to CHW not HWC (3x36x36)
Updated some code to remove eval in the code.
Removed old ONNX tutorial from TOC.
Added torchaudio from source.
Missed a #.
Edited sentence.
updating for no version number
If you don't configure this piece of code, you will get an error when you resume training from 4000_checkpoint.tar:

```
encoder_optimizer.step()
```

Error message:

```
exp_avg.mul_(beta1).add_(1 - beta1, grad)
RuntimeError: expected backend CPU and dtype Float but got backend CUDA and dtype Float
```

Fix: pytorch/pytorch#2830

```
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)  # missing line from original code
        labels = labels.to(device)  # missing line from original code
        images = images.reshape(-1, 28 * 28)
        out = model(images)
        _, predicted = torch.max(out.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
```
If you don't configure this piece of code, you will get an error when you resume training from 4000_checkpoint.tar:

```
encoder_optimizer.step()
```

Error message:

```
exp_avg.mul_(beta1).add_(1 - beta1, grad)
RuntimeError: expected backend CPU and dtype Float but got backend CUDA and dtype Float
```

Fix: pytorch/pytorch#2830

```
model = Model()
model.load_state_dict(checkpoint['model'])
model.cuda()
optimizer = optim.Adam(model.parameters())
optimizer.load_state_dict(checkpoint['optimizer'])
for state in optimizer.state.values():
    for k, v in state.items():
        if isinstance(v, torch.Tensor):
            state[k] = v.cuda()
```
Tensorboard dependency.
Close this in favor of #597.