Commit 3772be0

Merge branch 'main' into export-D65911233
2 parents d90acea + eae0b04 commit 3772be0

File tree

8 files changed: +265 −20 lines changed

.github/scripts/check_labels.py

Lines changed: 62 additions & 0 deletions

```python
#!/usr/bin/env python3
"""Check whether a PR has required labels."""

import sys
from typing import Any

from github_utils import gh_delete_comment, gh_post_pr_comment
from gitutils import get_git_remote_name, get_git_repo_dir, GitRepo
from label_utils import has_required_labels, is_label_err_comment, LABEL_ERR_MSG
from trymerge import GitHubPR


def delete_all_label_err_comments(pr: "GitHubPR") -> None:
    for comment in pr.get_comments():
        if is_label_err_comment(comment):
            gh_delete_comment(pr.org, pr.project, comment.database_id)


def add_label_err_comment(pr: "GitHubPR") -> None:
    # Only make a comment if one doesn't exist already
    if not any(is_label_err_comment(comment) for comment in pr.get_comments()):
        gh_post_pr_comment(pr.org, pr.project, pr.pr_num, LABEL_ERR_MSG)


def parse_args() -> Any:
    from argparse import ArgumentParser

    parser = ArgumentParser("Check PR labels")
    parser.add_argument("pr_num", type=int)
    # Flag to return a non-zero exit code if the PR does not have the required labels
    parser.add_argument(
        "--exit-non-zero",
        action="store_true",
        help="Return a non-zero exit code if the PR does not have the required labels",
    )

    return parser.parse_args()


def main() -> None:
    args = parse_args()
    repo = GitRepo(get_git_repo_dir(), get_git_remote_name())
    org, project = repo.gh_owner_and_name()
    pr = GitHubPR(org, project, args.pr_num)

    try:
        if not has_required_labels(pr):
            print(LABEL_ERR_MSG)
            add_label_err_comment(pr)
            if args.exit_non_zero:
                sys.exit(1)
        else:
            delete_all_label_err_comments(pr)
    except Exception as e:
        if args.exit_non_zero:
            sys.exit(1)

    sys.exit(0)


if __name__ == "__main__":
    main()
```
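The two comment helpers above implement a simple idempotency pattern: delete every stale error comment, and only post a new one if none exists yet. A minimal, self-contained sketch of that dedup logic follows, using plain dicts in place of the real GitHub comment objects — the `body` key and the `post_error_comment_once` name are illustrative, not part of the script:

```python
# Hypothetical stand-in for label_utils.LABEL_ERR_MSG
LABEL_ERR_MSG = "# This PR needs a label"


def is_label_err_comment(comment: dict) -> bool:
    # The real helper also checks the comment author; this sketch
    # only matches on the message prefix.
    return comment["body"].startswith(LABEL_ERR_MSG)


def post_error_comment_once(comments: list[dict]) -> list[dict]:
    # Mirror add_label_err_comment: only append if no error comment exists.
    if not any(is_label_err_comment(c) for c in comments):
        return comments + [{"body": LABEL_ERR_MSG}]
    return comments
```

Calling `post_error_comment_once` repeatedly leaves exactly one error comment, which is the behavior the workflow relies on when it re-runs on every `labeled`/`unlabeled` event.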

.github/workflows/check-labels.yml

Lines changed: 54 additions & 0 deletions

```yaml
name: Check Labels

on:
  # We need pull_request_target to be able to post comments on PRs from forks.
  # Only allow pull_request_target when merging to main, not some historical branch.
  #
  # Make sure not to introduce explicit checkout or installing/running of
  # untrusted user code into this workflow!
  pull_request_target:
    types: [opened, synchronize, reopened, labeled, unlabeled]
    branches: [main]

  # To check labels on ghstack PRs.
  # Note: as pull_request doesn't trigger on PRs targeting main,
  # to test changes to the workflow itself one needs to create
  # a PR that targets a gh/**/base branch.
  pull_request:
    types: [opened, synchronize, reopened, labeled, unlabeled]
    branches: [gh/**/base]

  workflow_dispatch:
    inputs:
      pr_number:
        description: 'PR number to check labels for'
        required: true

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
  cancel-in-progress: true

jobs:
  check-labels:
    permissions:
      contents: read
      pull-requests: write
    name: Check labels
    if: github.repository_owner == 'pytorch'
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      # Not direct dependencies of the script, but it imports trymerge
      - run: pip install pyyaml==6.0 rockset==1.0.3
      - name: Check labels
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_NUM: ${{ github.event.number || github.event.inputs.pr_number }}
        run: |
          set -ex
          python3 .github/scripts/check_labels.py --exit-non-zero "${PR_NUM}"
```
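The `concurrency.group` expression keys cancellation on the PR number when one is available and falls back to the commit SHA otherwise. A rough Python model of that fallback — the function name and signature are illustrative; the real evaluation happens in the Actions expression engine:

```python
def concurrency_group(workflow: str, pr_number, sha: str, event_name: str) -> str:
    # `pr_number or sha` mirrors `github.event.pull_request.number || github.sha`:
    # workflow_dispatch runs have no PR number, so they fall back to the SHA
    # and never cancel a PR's in-flight run.
    key = pr_number or sha
    is_dispatch = event_name == "workflow_dispatch"
    return f"{workflow}-{key}-{is_dispatch}"
```

Two runs land in the same group (and the older one gets cancelled) only when the workflow name, the key, and the dispatch flag all match.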
Lines changed: 22 additions & 12 deletions

````diff
@@ -1,28 +1,35 @@
-# Build Instructions
+# MediaTek Backend on ExecuTorch
+MediaTek backend empowers ExecuTorch to speed up PyTorch models on edge devices equipped with a MediaTek Neuron Processing Unit (NPU). This document offers a step-by-step guide to set up the build environment for the MediaTek ExecuTorch libraries.
 
-This document provides a step-by-step guide to set up the build environment for the MediaTek ExercuTorch libraries.
+## Supported Chips
 
-## Prerequisites
+The examples provided in this repository are tested and supported on the following MediaTek chip:
+
+- MediaTek Dimensity 9300 (D9300)
+
+## Build Instructions
+
+### Prerequisites
 
 Before you begin, ensure you have the following prerequisites installed and configured:
 
-### 1. Buck2 Build Tool
+#### 1. Buck2 Build Tool
 
 - **Download Buck2**: Obtain Buck2 from the official [releases page](https://github.com/facebook/buck2/releases/tag/2024-02-01).
 - **Add to PATH**: Extract the downloaded file and add the directory to your system's `$PATH` environment variable.
   ```bash
   export PATH=<path_to_buck>:$PATH
   ```
 
-### 2. Android NDK
+#### 2. Android NDK
 
 - **Download Android NDK**: Acquire the Android NDK version 26.3.11579264 from the [Android developer site](https://developer.android.com/ndk/downloads).
 - **Set NDK Path**: Ensure that the `$ANDROID_NDK` environment variable is set to the path where the NDK is located.
   ```bash
   export ANDROID_NDK=<path_to_android_ndk>
   ```
 
-### 3. MediaTek ExercuTorch Libraries
+#### 3. MediaTek ExecuTorch Libraries
 
 Download [NeuroPilot Express SDK](https://neuropilot.mediatek.com/resources/public/npexpress/en/docs/npexpress) from MediaTek's NeuroPilot portal:
 
@@ -31,11 +38,11 @@ Download [NeuroPilot Express SDK](https://neuropilot.mediatek.com/resources/publ
 - `mtk_converter-8.8.0.dev20240723+public.d1467db9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`: This library preprocesses the model into a MediaTek representation.
 - `mtk_neuron-8.2.2-py3-none-linux_x86_64.whl`: This library converts the model to binaries.
 
-## Setup
+### Setup
 
 Follow the steps below to set up your build environment:
 
-1. **Setup ExercuTorch Environment**: Refer to the [Setting up ExercuTorch](https://pytorch.org/executorch/stable/getting-started-setup) guide for detailed instructions on setting up the ExercuTorch environment.
+1. **Setup ExecuTorch Environment**: Refer to the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) guide for detailed instructions on setting up the ExecuTorch environment.
 
 2. **Setup MediaTek Backend Environment**
    - Install the dependent libs. Ensure that you are inside the backends/mediatek/ directory
@@ -52,18 +59,21 @@ Follow the steps below to setup your build environment:
   export NEURON_BUFFER_ALLOCATOR_LIB=<path_to_buffer_allocator>
   ```
 
-## Build
+### Build
+1. Navigate to the `scripts/` directory.
 
-1. **Build MediaTek Backend**: Once the prerequisites are in place, run the `mtk_build.sh` script to start the build process, MediaTek backend will be built under `cmake-android-out/backends/` as `libneuron_backend.so`
+2. **Build MediaTek Backend**: Once the prerequisites are in place, run the `mtk_build.sh` script to start the build process. The MediaTek backend will be built under `cmake-android-out/backends/` as `libneuron_backend.so`
 
 ```bash
 ./mtk_build.sh
 ```
 
-## Run
+### Run
 
-1. **Push MediaTek universal SDK and MediaTek backend to the device**: push `libneuronusdk_adapter.mtk.so` and `libneuron_backend.so` to the phone and export it to the `$LD_LIBRARY_PATH` environment variable before executing ExercuTorch with MediaTek backend.
+1. **Push MediaTek universal SDK and MediaTek backend to the device**: push `libneuronusdk_adapter.mtk.so` and `libneuron_backend.so` to the phone and export them to the `$LD_LIBRARY_PATH` environment variable before executing ExecuTorch with the MediaTek backend.
 
 ```bash
 export LD_LIBRARY_PATH=<path_to_usdk>:<path_to_neuron_backend>:$LD_LIBRARY_PATH
 ```
+
+Please refer to `executorch/examples/mediatek/` for export and execution examples of various models.
````

backends/xnnpack/test/ops/conv2d.py

Lines changed: 23 additions & 0 deletions

```diff
@@ -394,3 +394,26 @@ def get_inputs(self):
             quant_config=get_symmetric_quantization_config(),
             conv_count=2,
         )
+
+    def test_qs8_conv2d_relu_multi_users(self):
+        class Conv2dReluMultiUsers(torch.nn.Module):
+            def __init__(self):
+                super().__init__()
+                self.conv1 = torch.nn.Conv2d(1, 1, 1)
+                self.conv2 = torch.nn.Conv2d(1, 64, 1)
+                self.relu = torch.nn.ReLU()
+
+            def forward(self, x):
+                conv_default = self.conv1(x)
+                y = self.relu(conv_default)
+                conv_default_2 = self.conv2(y)
+                return conv_default + conv_default_2
+
+            def get_inputs(self):
+                return (torch.randn(1, 1, 1, 1),)
+
+        self._test(
+            Conv2dReluMultiUsers(),
+            quant_config=get_symmetric_quantization_config(),
+            conv_count=2,
+        )
```
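The point of this test is that `conv1`'s output has two users: the `relu` and the final add. A pass that fused conv1+relu into a single node would orphan the second consumer, so a partitioner must count users before fusing. A toy sketch of that check on an edge list — the graph representation here is invented for illustration and is not XNNPACK's actual IR:

```python
def count_users(edges: list[tuple[str, str]], node: str) -> int:
    # edges are (producer, consumer) pairs; a node's user count is
    # the number of edges it produces.
    return sum(1 for src, _dst in edges if src == node)


def can_fuse_conv_relu(edges: list[tuple[str, str]], conv: str) -> bool:
    # Safe to fuse conv+relu only when the conv has a single user.
    return count_users(edges, conv) == 1


# Dataflow of Conv2dReluMultiUsers.forward:
EDGES = [
    ("conv1", "relu"),   # y = relu(conv1(x))
    ("conv1", "add"),    # conv_default is reused in the final add
    ("relu", "conv2"),
    ("conv2", "add"),
]
```

In this sketch `can_fuse_conv_relu(EDGES, "conv1")` is `False` because the add still needs the raw conv output — consistent with the test expecting two separate conv nodes (`conv_count=2`) after lowering.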
Lines changed: 94 additions & 0 deletions

````markdown
# Building and Running ExecuTorch with MediaTek Backend

MediaTek backend empowers ExecuTorch to speed up PyTorch models on edge devices equipped with a MediaTek Neuron Processing Unit (NPU). This document offers a step-by-step guide to set up the build environment for the MediaTek ExecuTorch libraries.

::::{grid} 2
:::{grid-item-card} What you will learn in this tutorial:
:class-card: card-prerequisites
* How to export and lower a PyTorch model ahead of time with ExecuTorch for MediaTek devices.
* How to build the MediaTek backend and examples.
* How to deploy the exported models on device with the ExecuTorch runtime.
:::
:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](intro-how-it-works.md)
* [Setting up ExecuTorch](getting-started-setup.md)
* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
:::
::::


## Prerequisites (Hardware and Software)

### Host OS
- Linux operating system

### Supported Chips
- MediaTek Dimensity 9300 (D9300)

### Software

- [NeuroPilot Express SDK](https://neuropilot.mediatek.com/resources/public/npexpress/en/docs/npexpress) is a lightweight SDK for deploying AI applications on MediaTek SOC devices.

## Setting up your developer environment

Follow the steps below to set up your build environment:

1. **Setup ExecuTorch Environment**: Refer to the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) guide for detailed instructions on setting up the ExecuTorch environment.

2. **Setup MediaTek Backend Environment**
   - Install the dependent libs. Ensure that you are inside the `backends/mediatek/` directory
   ```bash
   pip3 install -r requirements.txt
   ```
   - Install the two .whl files downloaded from the NeuroPilot Portal
   ```bash
   pip3 install mtk_neuron-8.2.13-py3-none-linux_x86_64.whl
   pip3 install mtk_converter-8.9.1+public-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
   ```
   - Set environment variables for building the backend
   ```bash
   export NEURON_BUFFER_ALLOCATOR_LIB=<path_to_buffer_allocator.so>
   ```

## Build

### Ahead of time

**Exporting a PyTorch Model for MediaTek Backend**:
1. Lower and export the `.pte` file for on-device execution. The export script samples are provided under `examples/mediatek/`. For example, the following command exports the `.pte` using the scripts provided.
```bash
cd executorch

./examples/mediatek/shell_scripts/export_oss.sh mobilenetv3
```

2. Find the `.pte` files under the directory named the same as the model.

### Runtime

**Build MediaTek Backend for ExecuTorch Runtime**
1. Navigate to the `backends/mediatek/scripts/` directory.

2. **Build MediaTek Backend**: Once the prerequisites are in place, run the `mtk_build.sh` script to start the build process:
```bash
./mtk_build.sh
```

3. The MediaTek backend will be built under `cmake-android-out/backends/` as `libneuron_backend.so`.

**Build a runner to execute the model on the device**:
1. Build the runners and the backend by executing the script:
```bash
./mtk_build_examples.sh
```

2. The runners will be built under `cmake-android-out/examples/`

## Deploying and running on a device

1. **Push MediaTek universal SDK and MediaTek backend to the device**: push `libneuronusdk_adapter.mtk.so` and `libneuron_backend.so` to the phone and export them to the `$LD_LIBRARY_PATH` environment variable before executing ExecuTorch with the MediaTek backend.

```bash
export LD_LIBRARY_PATH=<path_to_usdk>:<path_to_neuron_backend>:$LD_LIBRARY_PATH
```
````

docs/source/index.rst

Lines changed: 8 additions & 0 deletions

```diff
@@ -111,6 +111,7 @@ Topics in this section will help you get started with ExecuTorch.
    customcarditem entries below.
    executorch-arm-delegate-tutorial
    build-run-coreml
+   build-run-mediatek-backend
    build-run-mps
    build-run-qualcomm-ai-engine-direct-backend
    build-run-xtensa
@@ -331,6 +332,13 @@ ExecuTorch tutorials.
    :link: build-run-coreml.html
    :tags: Export,Backend,Delegation,CoreML
 
+.. customcarditem::
+   :header: Building and Running ExecuTorch with MediaTek Backend
+   :card_description: A tutorial that walks you through the process of building ExecuTorch with MediaTek Backend
+   :image: _static/img/generic-pytorch-logo.png
+   :link: build-run-mediatek-backend.html
+   :tags: Export,Backend,Delegation,MediaTek
+
 .. customcarditem::
    :header: Building and Running ExecuTorch with MPS Backend
    :card_description: A tutorial that walks you through the process of building ExecuTorch with MPSGraph Backend
```

examples/mediatek/README.md

Lines changed: 0 additions & 6 deletions

```diff
@@ -72,12 +72,6 @@ bash shell_scripts/export_oss.sh <model_name>
 - `model_name`: deeplabv3/edsr/inceptionv3/inceptionv4/mobilenetv2/mobilenetv3/resnet18/resnet50
 
 # Runtime
-## Supported Chips
-
-The examples provided in this repository are tested and supported on the following MediaTek chip:
-
-- MediaTek Dimensity 9300 (D9300)
-
 ## Environment Setup
 
 To set up the build environment for the `mtk_executor_runner`:
```

runtime/executor/method.cpp

Lines changed: 2 additions & 2 deletions

```diff
@@ -963,8 +963,8 @@ Method::set_output_data_ptr(void* buffer, size_t size, size_t output_idx) {
   if (tensor_meta->is_memory_planned()) {
     ET_LOG(
         Error,
-        "Output %zu is memory planned, or is a constant. Cannot override \
-        the existing data pointer.",
+        "Output %zu is memory planned, or is a constant. Cannot override "
+        "the existing data pointer.",
         output_idx);
     return Error::InvalidState;
   }
```