Commit 4cb2f87

Author: Caroline Chen
Merge branch 'master' into audio_tutorial_1.9
2 parents 40b82e5 + 7248f4f

File tree: 5 files changed (+209, -37 lines)


.circleci/config.yml

Lines changed: 0 additions & 26 deletions
@@ -116,8 +116,6 @@ pytorch_tutorial_build_defaults: &pytorch_tutorial_build_defaults
       export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_ECR_READ_ONLY}
       export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_ECR_READ_ONLY}
       eval $(aws ecr get-login --region us-east-1 --no-include-email)
-  - restore_cache:
-      key: v1.0-tutorial-{{ .Environment.CIRCLE_JOB }}
   - run:
       name: Build
       no_output_timeout: "20h"
@@ -166,35 +164,11 @@ pytorch_tutorial_build_defaults: &pytorch_tutorial_build_defaults
           fi
           set -x

-          # This also copies the cached build to docker.
           docker cp /home/circleci/project/. "$id:/var/lib/jenkins/workspace"

           export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && ./ci_build_script.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
           echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts
-
-          # Copy the last build from docker
-          docker cp "$id:/var/lib/jenkins/workspace/_build" /home/circleci/project
-          docker cp "$id:/var/lib/jenkins/workspace/docs" /home/circleci/project
-          docker cp "$id:/var/lib/jenkins/workspace/advanced" /home/circleci/project
-          docker cp "$id:/var/lib/jenkins/workspace/beginner" /home/circleci/project
-          docker cp "$id:/var/lib/jenkins/workspace/intermediate" /home/circleci/project
-          docker cp "$id:/var/lib/jenkins/workspace/prototype" /home/circleci/project
-          docker cp "$id:/var/lib/jenkins/workspace/recipes" /home/circleci/project
-          docker cp "$id:/var/lib/jenkins/workspace/src" /home/circleci/project
-
-      - save_cache:
-          # Save to cache for incremental build
-          key: v1.0-tutorial-{{ .Environment.CIRCLE_JOB }}
-          paths:
-            - /home/circleci/project/_build
-            - /home/circleci/project/docs
-            - /home/circleci/project/advanced
-            - /home/circleci/project/beginner
-            - /home/circleci/project/intermediate
-            - /home/circleci/project/prototype
-            - /home/circleci/project/recipes
-            - /home/circleci/project/src
-
 pytorch_tutorial_build_worker_defaults: &pytorch_tutorial_build_worker_defaults
   environment:
     DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7"

prototype_source/prototype_index.rst

Lines changed: 0 additions & 8 deletions
@@ -91,13 +91,6 @@ Prototype features are not available as part of binary distributions like PyPI o
    :link: ../prototype/vulkan_workflow.html
    :tags: Mobile

-.. customcarditem::
-   :header: Lite Interpreter Workflow in Android and iOS
-   :card_description: Learn how to use the lite interpreter on iOS and Andriod devices.
-   :image: ../_static/img/thumbnails/cropped/mobile.png
-   :link: ../prototype/lite_interpreter.html
-   :tags: Mobile
-
 .. TorchScript

 .. customcarditem::
@@ -144,4 +137,3 @@ Prototype features are not available as part of binary distributions like PyPI o
    prototype/torchscript_freezing.html
    prototype/vmap_recipe.html
    prototype/vulkan_workflow.html
-   prototype/lite_interpreter.html
recipes_source/mobile_interpreter.rst

Lines changed: 198 additions & 0 deletions
@@ -0,0 +1,198 @@
(beta) Efficient mobile interpreter in Android and iOS
==================================================================

**Author**: `Chen Lai <https://github.com/cccclai>`_, `Martin Yuan <https://github.com/iseeyuan>`_

Introduction
------------

This tutorial introduces the steps to use PyTorch's efficient mobile interpreter on iOS and Android. We will be using an Image Segmentation demo application as an example.

This application takes advantage of the pre-built interpreter libraries available for Android and iOS, which can be used directly with Maven (Android) and CocoaPods (iOS). Note that the pre-built libraries are provided for simplicity; further size optimization can be achieved by utilizing PyTorch's custom build capabilities.

.. note:: If you see the error message `PytorchStreamReader failed locating file bytecode.pkl: file not found ()`, you are likely using a TorchScript model that requires the PyTorch JIT interpreter (a version of our PyTorch interpreter that is not as size-efficient). In order to leverage the efficient mobile interpreter, please regenerate the model by running `module._save_for_lite_interpreter(${model_path})`.

- If `bytecode.pkl` is missing, the model was likely generated with the api `module.save(${model_path})`.
- The api `_load_for_lite_interpreter(${model_path})` can be helpful to validate a model with the efficient mobile interpreter.
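The save/validate round trip described in the note can be exercised on the host before touching a device. Below is a minimal sketch using a toy module (`AddOne` is a stand-in for illustration, not part of the recipe); it assumes PyTorch 1.9 or later, where `_save_for_lite_interpreter` and `torch.jit.mobile._load_for_lite_interpreter` are available:

.. code-block:: python

    import torch

    # Toy stand-in for a real model
    class AddOne(torch.nn.Module):
        def forward(self, x):
            return x + 1

    scripted = torch.jit.script(AddOne().eval())

    # module.save() produces a full-JIT file the mobile interpreter cannot read
    scripted.save("add_one.pt")

    # _save_for_lite_interpreter() writes the bytecode.pkl the mobile runtime expects
    scripted._save_for_lite_interpreter("add_one.ptl")

    # Validate the lite file on the host before shipping it to a device
    from torch.jit.mobile import _load_for_lite_interpreter
    lite = _load_for_lite_interpreter("add_one.ptl")
    result = lite(torch.tensor([1.0, 2.0]))

If `_load_for_lite_interpreter` raises the `bytecode.pkl` error on your own file, the model was saved with `module.save()` and needs to be regenerated as above.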
18+
Android
19+
-------------------
20+
Get the Image Segmentation demo app in Android: https://github.com/pytorch/android-demo-app/tree/master/ImageSegmentation
21+
22+
1. **Prepare model**: Prepare the mobile interpreter version of model by run the script below to generate the scripted model `deeplabv3_scripted.pt` and `deeplabv3_scripted.ptl`
23+
24+
.. code:: python
25+
26+
import torch
27+
from torch.utils.mobile_optimizer import optimize_for_mobile
28+
model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
29+
model.eval()
30+
31+
scripted_module = torch.jit.script(model)
32+
# Export full jit version model (not compatible mobile interpreter), leave it here for comparison
33+
scripted_module.save("deeplabv3_scripted.pt")
34+
# Export mobile interpreter version model (compatible with mobile interpreter)
35+
optimized_scripted_module = optimize_for_mobile(scripted_module)
36+
optimized_scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")
37+
38+
2. **Use the PyTorch Android library in the ImageSegmentation app**: Update the `dependencies` part of ``ImageSegmentation/app/build.gradle`` to

.. code:: gradle

    repositories {
        maven {
            url "https://oss.sonatype.org/content/repositories/snapshots"
        }
    }

    dependencies {
        implementation 'androidx.appcompat:appcompat:1.2.0'
        implementation 'androidx.constraintlayout:constraintlayout:2.0.2'
        testImplementation 'junit:junit:4.12'
        androidTestImplementation 'androidx.test.ext:junit:1.1.2'
        androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
        implementation 'org.pytorch:pytorch_android_lite:1.9.0'
        implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'

        implementation 'com.facebook.fbjni:fbjni-java-only:0.0.3'
    }
3. **Update model loader api**: Update ``ImageSegmentation/app/src/main/java/org/pytorch/imagesegmentation/MainActivity.java`` by

3.1 Adding the new import: `import org.pytorch.LiteModuleLoader;`

3.2 Replacing the way the PyTorch lite model is loaded

.. code:: java

    // mModule = Module.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.pt"));
    mModule = LiteModuleLoader.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.ptl"));

4. **Test app**: Build and run the `ImageSegmentation` app in Android Studio
iOS
-------------------
Get the ImageSegmentation demo app in iOS: https://github.com/pytorch/ios-demo-app/tree/master/ImageSegmentation

1. **Prepare model**: Same as Android.

2. **Build the project with CocoaPods and the prebuilt interpreter**: Update the `Podfile` and run `pod install`:

.. code-block:: podfile

    target 'ImageSegmentation' do
    # Comment the next line if you don't want to use dynamic frameworks
    use_frameworks!

    # Pods for ImageSegmentation
    pod 'LibTorch_Lite', '~>1.9.0'
    end
3. **Update library and API**

3.1 Update ``TorchModule.mm``: To use the prebuilt interpreter library, import `<Libtorch-Lite/Libtorch-Lite.h>` in ``TorchModule.mm``:

.. code-block:: objective-c

    #import <Libtorch-Lite/Libtorch-Lite.h>
    // If it's built from source with Xcode, comment out the line above
    // and use the following headers instead
    // #include <torch/csrc/jit/mobile/import.h>
    // #include <torch/csrc/jit/mobile/module.h>
    // #include <torch/script.h>

.. code-block:: objective-c

    @implementation TorchModule {
    @protected
      // torch::jit::script::Module _impl;
      torch::jit::mobile::Module _impl;
    }

    - (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
      self = [super init];
      if (self) {
        try {
          _impl = torch::jit::_load_for_mobile(filePath.UTF8String);
          // _impl = torch::jit::load(filePath.UTF8String);
          // _impl.eval();
        } catch (const std::exception& exception) {
          NSLog(@"%s", exception.what());
          return nil;
        }
      }
      return self;
    }
3.2 Update ``ViewController.swift``

.. code-block:: swift

    // if let filePath = Bundle.main.path(forResource:
    //     "deeplabv3_scripted", ofType: "pt"),
    //     let module = TorchModule(fileAtPath: filePath) {
    //     return module
    // } else {
    //     fatalError("Can't find the model file!")
    // }
    if let filePath = Bundle.main.path(forResource:
        "deeplabv3_scripted", ofType: "ptl"),
        let module = TorchModule(fileAtPath: filePath) {
        return module
    } else {
        fatalError("Can't find the model file!")
    }

4. Build and test the app in Xcode.
How to use mobile interpreter + custom build
--------------------------------------------
A custom PyTorch interpreter library can be created to reduce binary size by including only the operators needed by the model. To do that, follow these steps:

1. To dump the operators in your model, say `deeplabv3_scripted`, run the following lines of Python code:

.. code-block:: python

    # Dump the list of operators used by deeplabv3_scripted:
    import torch, yaml
    model = torch.jit.load('deeplabv3_scripted.ptl')
    ops = torch.jit.export_opnames(model)
    with open('deeplabv3_scripted.yaml', 'w') as output:
        yaml.dump(ops, output)

In the snippet above, you first load the ScriptModule. Then, you use `export_opnames` to get a list of the operator names used by the ScriptModule and its submodules. Lastly, you save the result to a yaml file. The yaml file can be generated for any PyTorch version 1.4.0 or above; you can verify the version by checking the value of `torch.__version__`.
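To see what `export_opnames` produces without downloading DeepLabV3, the same dump can be run on a toy scripted module (`TinyModel` below is an illustrative stand-in, not part of the recipe):

.. code-block:: python

    import torch
    import yaml

    # Toy module used only to illustrate the operator dump
    class TinyModel(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x + 1)

    scripted = torch.jit.script(TinyModel())
    ops = torch.jit.export_opnames(scripted)
    # ops is a list of strings such as "aten::add.Tensor" and "aten::relu"

    with open('tiny_ops.yaml', 'w') as output:
        yaml.dump(ops, output)

    # Reading the file back shows the exact list the build scripts will consume
    with open('tiny_ops.yaml') as f:
        loaded = yaml.safe_load(f)

The resulting yaml is a flat list of operator names, which is exactly what `SELECTED_OP_LIST` expects in the next step.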
2. To run the build script locally with the prepared yaml list of operators, pass the yaml file generated in the last step into the environment variable SELECTED_OP_LIST. Also, in the arguments, specify BUILD_PYTORCH_MOBILE=1 as well as the platform/architecture type.

**iOS**: Taking the simulator build as an example, the command is:

.. code-block:: bash

   SELECTED_OP_LIST=deeplabv3_scripted.yaml BUILD_PYTORCH_MOBILE=1 IOS_PLATFORM=SIMULATOR ./scripts/build_ios.sh

**Android**: Taking the x86 build as an example, the command is:

.. code-block:: bash

   SELECTED_OP_LIST=deeplabv3_scripted.yaml ./scripts/build_pytorch_android.sh x86
Conclusion
----------

In this tutorial, we demonstrated how to use PyTorch's efficient mobile interpreter in an Android and iOS app.

We walked through an Image Segmentation example to show how to dump the model's operators, build a custom torch library from source, and use the new API to run the model.

Our efficient mobile interpreter is still under development, and we will continue improving its size in the future. Note, however, that the APIs are subject to change in future versions.

Thanks for reading! As always, we welcome any feedback, so please create an issue `here <https://github.com/pytorch/pytorch/issues>`_ if you have any.

Learn More
----------

- To learn more about PyTorch Mobile, please refer to the `PyTorch Mobile Home Page <https://pytorch.org/mobile/home/>`_
- To learn more about Image Segmentation, please refer to the `Image Segmentation DeepLabV3 on Android Recipe <https://pytorch.org/tutorials/beginner/deeplabv3_on_android.html>`_

recipes_source/mobile_perf.rst

Lines changed: 3 additions & 3 deletions
@@ -248,7 +248,7 @@ Now we are ready to benchmark your model:
 iOS - Benchmarking Setup
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-For iOS, we'll be using our `TestApp <https://github.com/pytorch/pytorch/tree/master/ios/TestApp>`_ as the benchmarking tool.
+For iOS, we'll be using our `TestApp <https://github.com/pytorch/pytorch/tree/master/ios/TestApp>`_ as the benchmarking tool.

 To begin with, let's apply the ``optimize_for_mobile`` method to our python script located at `TestApp/benchmark/trace_model.py <https://github.com/pytorch/pytorch/blob/master/ios/TestApp/benchmark/trace_model.py>`_. Simply modify the code as below.

@@ -265,15 +265,15 @@ To begin with, let's apply the ``optimize_for_mobile`` method to our python scri
     torchscript_model_optimized = optimize_for_mobile(traced_script_module)
     torch.jit.save(torchscript_model_optimized, "model.pt")

-Now let's run ``python trace_model.py``. If everything works well, we should be able to generate our optimized model in the benchmark directory.
+Now let's run ``python trace_model.py``. If everything works well, we should be able to generate our optimized model in the benchmark directory.

 Next, we're going to build the PyTorch libraries from source.

 ::

    BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 ./scripts/build_ios.sh

-Now that we have the optimized model and PyTorch ready, it's time to generate our XCode project and do benchmarking. To do that, we'll be using a ruby script - `setup.rb` which does the heavy lifting jobs of setting up the XCode project.
+Now that we have the optimized model and PyTorch ready, it's time to generate our XCode project and do benchmarking. To do that, we'll be using a ruby script - `setup.rb` which does the heavy lifting jobs of setting up the XCode project.

 ::
recipes_source/recipes_index.rst

Lines changed: 8 additions & 0 deletions
@@ -207,6 +207,13 @@ Recipes are bite-sized, actionable examples of how to use specific PyTorch featu
    :link: ../recipes/model_preparation_android.html
    :tags: Mobile

+.. customcarditem::
+   :header: Mobile Interpreter Workflow in Android and iOS
+   :card_description: Learn how to use the mobile interpreter on iOS and Android devices.
+   :image: ../_static/img/thumbnails/cropped/mobile.png
+   :link: ../recipes/mobile_interpreter.html
+   :tags: Mobile
+
 .. customcarditem::
    :header: Profiling PyTorch RPC-Based Workloads
    :card_description: How to use the PyTorch profiler to profile RPC-based workloads.
@@ -294,3 +301,4 @@ Recipes are bite-sized, actionable examples of how to use specific PyTorch featu
    /recipes/zero_redundancy_optimizer
    /recipes/cuda_rpc
    /recipes/distributed_optim_torchscript
+   /recipes/mobile_interpreter
