
Commit 9078a6e

fix sample.json error and use full name of INC
Parent: fc4d69c

3 files changed: 13 additions, 13 deletions

AI-and-Analytics/Getting-Started-Samples/INC-Sample-for-Tensorflow/README.md

Lines changed: 11 additions & 11 deletions
@@ -1,15 +1,15 @@
-# `Intel® Neural Compressor (Intel® INC)` Sample for TensorFlow*
+# `Intel® Neural Compressor` Sample for TensorFlow*
 
 ## Background
 Low-precision inference can speed up inference significantly by converting the fp32 model to an int8 or bf16 model. Intel provides Intel® Deep Learning Boost technology in the Second Generation Intel® Xeon® Scalable Processors and newer Xeon®, which accelerates int8 and bf16 models in hardware.
 
-Intel® Low Precision Optimization Tool (Intel INC) helps the user simplify the process of converting the fp32 model to int8/bf16.
+Intel® Neural Compressor helps the user simplify the process of converting the fp32 model to int8/bf16.
 
-At the same time, Intel INC will tune the quantization method to reduce the accuracy loss, which is a big blocker for low-precision inference.
+At the same time, Intel® Neural Compressor will tune the quantization method to reduce the accuracy loss, which is a big blocker for low-precision inference.
 
-Intel INC is released in the Intel® AI Analytics Toolkit and works with Intel® Optimization of TensorFlow*.
+Intel® Neural Compressor is released in the Intel® AI Analytics Toolkit and works with Intel® Optimization of TensorFlow*.
 
-Please refer to the official website for detailed info and news: [https://github.com/intel/lp-opt-tool](https://github.com/intel/lp-opt-tool)
+Please refer to the official website for detailed info and news: [https://github.com/intel/neural-compressor](https://github.com/intel/neural-compressor)
 
 ## License

@@ -19,18 +19,18 @@ Code samples are licensed under the MIT license. See
 Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)
 
 ## Purpose
-This sample shows the whole process of building a CNN model to recognize handwritten digits and speeding it up with Intel INC.
+This sample shows the whole process of building a CNN model to recognize handwritten digits and speeding it up with Intel® Neural Compressor.
 
-We will learn how to train a CNN model based on Keras with TensorFlow, use Intel INC to quantize the model, and compare the performance to understand the benefit of Intel INC.
+We will learn how to train a CNN model based on Keras with TensorFlow, use Intel® Neural Compressor to quantize the model, and compare the performance to understand the benefit of Intel® Neural Compressor.
 
 ## Key Implementation Details
 
 - Use Keras on TensorFlow to build and train the CNN model.
 
-- Define a function and class for Intel INC to quantize the CNN model.
+- Define a function and class for Intel® Neural Compressor to quantize the CNN model.
 
-  Intel INC can run on any Intel® CPU to quantize the AI model.
+  Intel® Neural Compressor can run on any Intel® CPU to quantize the AI model.
 
   The quantized AI model has better inference performance than the FP32 model on Intel CPU.

@@ -47,7 +47,7 @@ We will learn how to train a CNN model based on Keras with TensorFlow, use Intel
 | OS | Linux* Ubuntu* 18.04
 | Hardware | The Second Generation Intel® Xeon® Scalable processor family or newer
 | Software | Intel® oneAPI AI Analytics Toolkit 2021.1 or newer
-| What you will learn | How to use the Intel INC tool to quantize the AI model based on TensorFlow and speed up inference on Intel® Xeon® CPU
+| What you will learn | How to use Intel® Neural Compressor to quantize the AI model based on TensorFlow and speed up inference on Intel® Xeon® CPU
 | Time to complete | 10 minutes
 
 ## Running Environment
@@ -127,7 +127,7 @@ conda activate tensorflow
 ```
 
-### Install Intel INC by Local Channel
+### Install Intel® Neural Compressor by Local Channel
 
 ```
 conda install -c ${ONEAPI_ROOT}/conda_channel neural-compressor -y --offline
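
For orientation, here is a rough sketch of the workflow this README describes: train a small Keras CNN on MNIST, then quantize it to int8 with the neural-compressor package. It is illustrative only, not the sample's actual code; it assumes the 1.x `neural_compressor.experimental` API (names may differ in other releases), and `quantization.yaml` is a hypothetical stand-in for the config file the real sample ships, which declares the framework, calibration settings, and accuracy criterion.

```python
# Illustrative sketch only, not the sample's code. Assumes the Intel Neural
# Compressor 1.x "experimental" API.
import tensorflow as tf
from neural_compressor.experimental import Quantization, common

# 1. Train a small fp32 CNN on MNIST with Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = (x_train[..., None] / 255.0).astype("float32")
x_test = (x_test[..., None] / 255.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)
model.save("fp32_model")  # TensorFlow SavedModel directory

# 2. Quantize to int8. "quantization.yaml" is a hypothetical config file;
#    it would define the calibration dataset and the accuracy criterion
#    that drive the tuning loop.
quantizer = Quantization("quantization.yaml")
quantizer.model = common.Model("fp32_model")
quantizer.calib_dataloader = common.DataLoader(list(zip(x_test, y_test)))
q_model = quantizer.fit()  # accuracy-driven tuning returns an int8 model
q_model.save("int8_model")
```

On Intel® Deep Learning Boost hardware, benchmarking the saved int8 model against the fp32 one should show the inference speedup the sample's performance comparison measures.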

AI-and-Analytics/Getting-Started-Samples/INC-Sample-for-Tensorflow/sample.json

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
 "env": ["source ${ONEAPI_ROOT}/setvars.sh --force",
 "conda env remove -n user_tensorflow",
 "conda create -n user_tensorflow -c ${ONEAPI_ROOT}/conda_channel python=`python -V| awk '{print $2}'` -y",
-"conda activate user_tensorfinclow",
+"conda activate user_tensorflow",
 "conda install -n user_tensorflow -c ${ONEAPI_ROOT}/conda_channel tensorflow python-flatbuffers -y",
 "conda install -n user_tensorflow -c ${ONEAPI_ROOT}/conda_channel neural-compressor -y --offline",
 "conda install -n user_tensorflow -c ${ONEAPI_ROOT}/conda_channel lpot -y --offline",

AI-and-Analytics/Getting-Started-Samples/README.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
 | Component | Folder | Description
 | --------- | ------------------------------------------------ | -
 | daal4py | [IntelPython_daal4py_GettingStarted](IntelPython_daal4py_GettingStarted) | Batch linear regression using the Python API package daal4py from oneAPI Data Analytics Library (oneDAL).
-| INC | [INC-Sample-for-Tensorflow](INC-Sample-for-Tensorflow) | Quantize an fp32 model into int8 with Intel® Neural Compressor (INC), and compare the performance between fp32 and int8.
+| Intel® Neural Compressor | [INC-Sample-for-Tensorflow](INC-Sample-for-Tensorflow) | Quantize an fp32 model into int8 with Intel® Neural Compressor, and compare the performance between fp32 and int8.
 | Modin | [IntelModin_GettingStarted](IntelModin_GettingStarted) | Run Modin-accelerated Pandas functions and note the performance gain.
 | PyTorch | [IntelPyTorch_GettingStarted](IntelPyTorch_GettingStarted) | A simple training example for PyTorch.
 | TensorFlow | [IntelTensorFlow_GettingStarted](IntelTensorFlow_GettingStarted) | A simple training example for TensorFlow.
