|
4 | 4 | "cell_type": "markdown",
|
5 | 5 | "metadata": {},
|
6 | 6 | "source": [
|
7 |
| - "# Intel® Neural Compressor (INC) Sample for Tensorflow" |
| 7 | + "# Intel® Neural Compressor Sample for Tensorflow" |
8 | 8 | ]
|
9 | 9 | },
|
10 | 10 | {
|
|
13 | 13 | "source": [
|
14 | 14 | "## Agenda\n",
|
15 | 15 | "- Train a CNN Model Based on Keras\n",
|
16 |
| - "- Quantize Keras Model by INC\n", |
| 16 | + "- Quantize Keras Model by Intel® Neural Compressor\n", |
17 | 17 | "- Compare Quantized Model"
|
18 | 18 | ]
|
19 | 19 | },
|
20 | 20 | {
|
21 | 21 | "cell_type": "markdown",
|
22 | 22 | "metadata": {},
|
23 | 23 | "source": [
|
24 |
| - "### INC Release and Sample \n", |
| 24 | + "### Intel® Neural Compressor Release and Sample \n", |
25 | 25 | "\n",
|
26 |
| - "This sample code is always updated for the INC release in latest oneAPI release.\n", |
| 26 | + "This sample code is always updated for the Intel® Neural Compressor release in the latest oneAPI release.\n", |
27 | 27 | "\n",
|
28 | 28 | "If you want the sample code for an older oneAPI release, please check out the corresponding sample code release by git tag.\n",
|
29 | 29 | "\n",
|
|
51 | 51 | "source": [
|
52 | 52 | "Import python packages and check version.\n",
|
53 | 53 | "\n",
|
54 |
| - "Make sure the Tensorflow is **2.2** or newer, INC is **not 1.2** and matplotlib are installed.\n", |
| 54 | + "Make sure TensorFlow is **2.2** or newer, Intel® Neural Compressor is **not 1.2**, and matplotlib is installed.\n", |
55 | 55 | "\n",
|
56 |
| - "Note, INC has an old names: **lpot**, **ilit**. Following script supports to old package names." |
| 56 | + "Note: Intel® Neural Compressor was previously released under the names **lpot** and **ilit**. The following script also supports these old package names." |
57 | 57 | ]
|
58 | 58 | },
|
59 | 59 | {
|
|
92 | 92 | "cell_type": "markdown",
|
93 | 93 | "metadata": {},
|
94 | 94 | "source": [
|
95 |
| - "Intel Optimized TensorFlow 2.5.0 and later require to set environment variable **TF_ENABLE_MKL_NATIVE_FORMAT=0** before running INC quantize Fp32 model or deploying the quantized model." |
| 95 | + "Intel Optimized TensorFlow 2.5.0 and later require the environment variable **TF_ENABLE_MKL_NATIVE_FORMAT=0** to be set before running Intel® Neural Compressor to quantize the FP32 model, and before deploying the quantized model." |
96 | 96 | ]
|
97 | 97 | },
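For context, the flag only takes effect if it is in the environment before TensorFlow is first imported. A minimal sketch (the in-process `os.environ` approach is one option; exporting the variable in the shell before launching Python works equally well):

```python
# Set TF_ENABLE_MKL_NATIVE_FORMAT before TensorFlow is imported; setting it
# after the import has no effect on Intel Optimized TensorFlow >= 2.5.0.
import os

os.environ["TF_ENABLE_MKL_NATIVE_FORMAT"] = "0"

# import tensorflow as tf   # import TensorFlow only after the variable is set
```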
|
98 | 98 | {
|
|
224 | 224 | "cell_type": "markdown",
|
225 | 225 | "metadata": {},
|
226 | 226 | "source": [
|
227 |
| - "## Quantize FP32 Model by INC\n", |
| 227 | + "## Quantize FP32 Model by Intel® Neural Compressor\n", |
228 | 228 | "\n",
|
229 |
| - "INC supports to quantize the model with a validation dataset for tuning.\n", |
| 229 | + "Intel® Neural Compressor supports quantizing the model with a validation dataset for tuning.\n", |
230 | 230 | "Finally, it returns a frozen quantized model based on INT8.\n",
|
231 | 231 | "\n",
|
232 |
| - "We prepare a python script \"**inc_quantize_model.py**\" to call INC to finish the all quantization job.\n", |
| 232 | + "We prepare a python script \"**inc_quantize_model.py**\" that calls Intel® Neural Compressor to perform the whole quantization job.\n", |
233 | 233 | "The following code samples explain the script step by step.\n",
|
234 | 234 | "\n",
|
235 | 235 | "### Define Dataloader\n",
|
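For context, a dataloader for Intel® Neural Compressor only needs to be iterable over `(input, label)` batches and expose a `batch_size` attribute. A minimal sketch, with illustrative class and parameter names (not taken from inc_quantize_model.py):

```python
# Hypothetical dataloader sketch: any object that yields (input, label)
# batches and has a batch_size attribute satisfies the interface.
import numpy as np

class Dataloader:
    def __init__(self, images, labels, batch_size=10):
        self.batch_size = batch_size
        self.images = images
        self.labels = labels

    def __iter__(self):
        # Yield (batch_of_images, batch_of_labels) tuples for tuning.
        for start in range(0, len(self.images), self.batch_size):
            end = start + self.batch_size
            yield self.images[start:end], self.labels[start:end]
```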
|
291 | 291 | "source": [
|
292 | 292 | "### Define Yaml File\n",
|
293 | 293 | "\n",
|
294 |
| - "We define alexnet.yaml to save the necessary parameters for INC.\n", |
| 294 | + "We define alexnet.yaml to save the necessary parameters for Intel® Neural Compressor.\n", |
295 | 295 | "In this case, we only need to change the input/output according to the fp32 model.\n",
|
296 | 296 | "\n",
|
297 | 297 | "In this case, the input node name is '**x**'.\n",
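For reference, a minimal sketch of what alexnet.yaml might look like, following the Intel® Neural Compressor 1.x configuration schema; the output node name and tuning threshold below are assumptions, not values from the sample:

```yaml
# Hypothetical alexnet.yaml sketch; adjust inputs/outputs to your FP32 model.
model:
  name: alexnet
  framework: tensorflow
  inputs: x          # input node name of the FP32 model
  outputs: Identity  # assumed output node name
tuning:
  accuracy_criterion:
    relative: 0.01   # allow at most 1% relative accuracy loss
```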
|
|