(beta) Efficient mobile interpreter in Android and iOS
==================================================================

**Author**: `Chen Lai <https://github.com/cccclai>`_, `Martin Yuan <https://github.com/iseeyuan>`_

Introduction
------------

This tutorial introduces the steps to use PyTorch's efficient interpreter on iOS and Android. We will be using an Image Segmentation demo application as an example.

This application will take advantage of the pre-built interpreter libraries available for Android and iOS, which can be used directly with Maven (Android) and CocoaPods (iOS). It is important to note that the pre-built libraries are provided for simplicity, but further size optimization can be achieved by utilizing PyTorch's custom build capabilities.

.. note:: If you see the error message: `PytorchStreamReader failed locating file bytecode.pkl: file not found ()`, you are likely using a TorchScript model that requires the PyTorch JIT interpreter (a version of our PyTorch interpreter that is not as size-efficient). In order to leverage our efficient interpreter, please regenerate the model by running: `module._save_for_lite_interpreter(${model_path})`.

 - If `bytecode.pkl` is missing, the model was likely generated with the api: `module.save(${model_path})`.
 - The api `_load_for_lite_interpreter(${model_path})` can be helpful to validate the model with the efficient mobile interpreter, as shown in the sketch below.
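
A minimal sketch of that workflow is shown below (the file names `model.pt` and `model.ptl` are placeholders, not files from the demo app):

.. code-block:: python

    import torch
    from torch.jit.mobile import _load_for_lite_interpreter

    # Re-export a model that was originally saved with module.save()
    module = torch.jit.load("model.pt")             # full JIT model (placeholder path)
    module._save_for_lite_interpreter("model.ptl")  # writes the bytecode.pkl that the mobile interpreter needs

    # Validate that the re-exported model loads with the efficient mobile interpreter
    lite_module = _load_for_lite_interpreter("model.ptl")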

Android
-------------------
Get the Image Segmentation demo app in Android: https://github.com/pytorch/android-demo-app/tree/master/ImageSegmentation

1. **Prepare model**: Prepare the mobile interpreter version of the model by running the script below to generate the scripted models `deeplabv3_scripted.pt` and `deeplabv3_scripted.ptl` (an optional sanity check of the exported model follows the snippet).

.. code:: python

    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile
    model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
    model.eval()

    scripted_module = torch.jit.script(model)
    # Export the full JIT version of the model (not compatible with the mobile interpreter); kept here for comparison
    scripted_module.save("deeplabv3_scripted.pt")
    # Export the mobile interpreter version of the model (compatible with the mobile interpreter)
    optimized_scripted_module = optimize_for_mobile(scripted_module)
    optimized_scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")
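
As an optional sanity check (a sketch, not part of the demo app), you can confirm that the exported `deeplabv3_scripted.ptl` loads with the mobile interpreter and produces outputs close to the full JIT model; the input shape below is assumed for illustration only:

.. code-block:: python

    import torch
    from torch.jit.mobile import _load_for_lite_interpreter

    lite_module = _load_for_lite_interpreter("deeplabv3_scripted.ptl")
    example = torch.rand(1, 3, 224, 224)  # assumed input shape, for illustration only
    with torch.no_grad():
        full_out = scripted_module(example)["out"]   # scripted_module from the snippet above
        lite_out = lite_module(example)["out"]
    print(torch.allclose(full_out, lite_out, atol=1e-4))  # expected: True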
|
2. **Use the PyTorch Android library in the ImageSegmentation app**: Update the `dependencies` part of ``ImageSegmentation/app/build.gradle`` to

.. code:: gradle

    repositories {
        maven {
            url "https://oss.sonatype.org/content/repositories/snapshots"
        }
    }

    dependencies {
        implementation 'androidx.appcompat:appcompat:1.2.0'
        implementation 'androidx.constraintlayout:constraintlayout:2.0.2'
        testImplementation 'junit:junit:4.12'
        androidTestImplementation 'androidx.test.ext:junit:1.1.2'
        androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
        implementation 'org.pytorch:pytorch_android_lite:1.9.0'
        implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'

        implementation 'com.facebook.fbjni:fbjni-java-only:0.0.3'
    }

3. **Update model loader API**: Update ``ImageSegmentation/app/src/main/java/org/pytorch/imagesegmentation/MainActivity.java`` by:

 3.1 Add the new import: `import org.pytorch.LiteModuleLoader;`

 3.2 Replace the way the PyTorch Lite model is loaded

.. code:: java

    // mModule = Module.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.pt"));
    mModule = LiteModuleLoader.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.ptl"));

4. **Test app**: Build and run the `ImageSegmentation` app in Android Studio

iOS
-------------------
Get the Image Segmentation demo app for iOS: https://github.com/pytorch/ios-demo-app/tree/master/ImageSegmentation

1. **Prepare model**: Same as Android.

2. **Build the project with CocoaPods and the prebuilt interpreter**: Update the `Podfile` and run `pod install`:

.. code-block:: podfile

    target 'ImageSegmentation' do
      # Comment the next line if you don't want to use dynamic frameworks
      use_frameworks!

      # Pods for ImageSegmentation
      pod 'LibTorch_Lite', '~>1.9.0'
    end

3. **Update library and API**

 3.1 Update ``TorchModule.mm``: To use the prebuilt LibTorch-Lite library from CocoaPods, import `<Libtorch-Lite/Libtorch-Lite.h>` in ``TorchModule.mm`` (the commented-out headers are only needed when the libraries are built from source):

.. code-block:: objective-c

    #import <Libtorch-Lite/Libtorch-Lite.h>
    // If it's built from source with Xcode, comment out the line above
    // and use the following headers
    // #include <torch/csrc/jit/mobile/import.h>
    // #include <torch/csrc/jit/mobile/module.h>
    // #include <torch/script.h>

.. code-block:: objective-c

    @implementation TorchModule {
     @protected
      // torch::jit::script::Module _impl;
      torch::jit::mobile::Module _impl;
    }

    - (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
      self = [super init];
      if (self) {
        try {
          _impl = torch::jit::_load_for_mobile(filePath.UTF8String);
          // _impl = torch::jit::load(filePath.UTF8String);
          // _impl.eval();
        } catch (const std::exception& exception) {
          NSLog(@"%s", exception.what());
          return nil;
        }
      }
      return self;
    }

3.2 Update ``ViewController.swift``

.. code-block:: swift

    // if let filePath = Bundle.main.path(forResource:
    //     "deeplabv3_scripted", ofType: "pt"),
    //     let module = TorchModule(fileAtPath: filePath) {
    //         return module
    // } else {
    //     fatalError("Can't find the model file!")
    // }
    if let filePath = Bundle.main.path(forResource:
        "deeplabv3_scripted", ofType: "ptl"),
        let module = TorchModule(fileAtPath: filePath) {
            return module
    } else {
        fatalError("Can't find the model file!")
    }

4. Build and test the app in Xcode.

How to use mobile interpreter + custom build
--------------------------------------------
A custom PyTorch interpreter library can be created to reduce binary size by including only the operators needed by the model. In order to do that, follow these steps:

1. To dump the operators in your model, say `deeplabv3_scripted`, run the following lines of Python code:

.. code-block:: python

    # Dump list of operators used by deeplabv3_scripted:
    import torch, yaml
    model = torch.jit.load('deeplabv3_scripted.ptl')
    ops = torch.jit.export_opnames(model)
    with open('deeplabv3_scripted.yaml', 'w') as output:
        yaml.dump(ops, output)

In the snippet above, you first need to load the ScriptModule. Then, use `export_opnames` to return a list of operator names for the ScriptModule and its submodules. Lastly, save the result in a yaml file. The yaml file can be generated with any PyTorch version 1.4.0 or above; you can check your installed version by inspecting the value of `torch.__version__`.
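
As a quick check (a sketch; the file name is the one produced in the step above), you can confirm the PyTorch version and inspect the dumped operator list:

.. code-block:: python

    import torch
    import yaml

    print(torch.__version__)  # should be 1.4.0 or above

    with open('deeplabv3_scripted.yaml') as f:
        ops = yaml.safe_load(f)
    print(len(ops), "operators, for example:", ops[:3])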

2. To run the build script locally with the prepared yaml list of operators, pass the yaml file generated in the last step into the environment variable SELECTED_OP_LIST. Also, in the arguments, specify BUILD_PYTORCH_MOBILE=1 as well as the platform/architecture type.

**iOS**: Take the simulator build for example; the command should be:

.. code-block:: bash

   SELECTED_OP_LIST=deeplabv3_scripted.yaml BUILD_PYTORCH_MOBILE=1 IOS_PLATFORM=SIMULATOR ./scripts/build_ios.sh

**Android**: Take the x86 build for example; the command should be:

.. code-block:: bash

   SELECTED_OP_LIST=deeplabv3_scripted.yaml ./scripts/build_pytorch_android.sh x86


Conclusion
----------

In this tutorial, we demonstrated how to use PyTorch's efficient mobile interpreter in an Android and iOS app.

We walked through an Image Segmentation example to show how to dump the model, build a custom torch library from source, and use the new API to run the model.

Our efficient mobile interpreter is still under development, and we will continue improving its size in the future. Note, however, that the APIs are subject to change in future versions.

Thanks for reading! As always, we welcome any feedback, so please create an issue `here <https://github.com/pytorch/pytorch/issues>`_ if you have any.

Learn More
----------

- To learn more about PyTorch Mobile, please refer to the `PyTorch Mobile Home Page <https://pytorch.org/mobile/home/>`_
- To learn more about Image Segmentation, please refer to the `Image Segmentation DeepLabV3 on Android Recipe <https://pytorch.org/tutorials/beginner/deeplabv3_on_android.html>`_