
Commit 7ce9e8c

zehao-intel committed

merge

Signed-off-by: zehao-intel <[email protected]>

2 parents af5d8ff + 93a5017 commit 7ce9e8c

File tree

3 files changed: +30 −2 lines changed
  • .azure-pipelines/scripts/codeScan/pyspelling
  • examples/tensorflow/nlp


.azure-pipelines/scripts/codeScan/pyspelling/inc_dict.txt

Lines changed: 10 additions & 0 deletions
@@ -2397,15 +2397,25 @@ grappler
 amsgrad
 qoperator
 apis
+<<<<<<< HEAD:.azure-pipelines/scripts/codeScan/pyspelling/inc_dict.txt
 PostTrainingQuantConfig
 dgpu
 CPz
 PostTrainingQuantConfig
 dgpu
+=======
+AccuracyCriterion
+TuningCriterion
+CPz
+>>>>>>> 93a5017ddc484b28ed0169574a7127b184385b34:.azure-pipelines/scripts/codeScan/pyspelling/lpot_dict.txt
 Nsh
 UmK
 fe
 vmware
+<<<<<<< HEAD:.azure-pipelines/scripts/codeScan/pyspelling/inc_dict.txt
+=======
+PythonLauncher
+>>>>>>> 93a5017ddc484b28ed0169574a7127b184385b34:.azure-pipelines/scripts/codeScan/pyspelling/lpot_dict.txt
 keepachangelog
 vscode
 IntelNeuralCompressor

examples/tensorflow/nlp/bert_base_mrpc/quantization/ptq/README.md

Lines changed: 18 additions & 1 deletion
@@ -87,6 +87,12 @@ This is a tutorial of how to enable bert model with Intel® Neural Compressor.
 2. User specifies fp32 *model*, calibration dataset *q_dataloader* and a custom *eval_func* which encapsulates the evaluation dataset and metric by itself.
 
 For bert, we applied the first one as we already have write dataset and metric for bert mrpc task.
+<<<<<<< HEAD
+
+### Quantization Config
+The Quantization Config class has default parameters setting for running on Intel CPUs. If running this example on Intel GPUs, the 'backend' parameter should be set to 'itex' and the 'device' parameter should be set to 'gpu'.
+=======
+>>>>>>> 93a5017ddc484b28ed0169574a7127b184385b34
 
 ### Quantization Config
 The Quantization Config class has default parameters setting for running on Intel CPUs. If running this example on Intel GPUs, the 'backend' parameter should be set to 'itex' and the 'device' parameter should be set to 'gpu'.
@@ -100,6 +106,17 @@ config = PostTrainingQuantConfig(
     ...
 )
 ```
+<<<<<<< HEAD
+config = PostTrainingQuantConfig(
+    device="gpu",
+    backend="itex",
+    inputs=["input_file", "batch_size"],
+    outputs=["loss/Softmax:0", "IteratorGetNext:3"],
+    ...
+)
+```
+=======
+>>>>>>> 93a5017ddc484b28ed0169574a7127b184385b34
 Here we set the input tensor and output tensors name into *inputs* and *outputs* args. In this case we calibrate and quantize the model, and use our calibration dataloader initialized from a 'Dataset' object.
 
 ### Code update
@@ -135,5 +152,5 @@ After prepare step is done, we add tune and benchmark code to generate quantized
 accuracy = evaluate(model.graph_def)
 print('Batch size = %d' % FLAGS.eval_batch_size)
 print("Accuracy: %.5f" % accuracy)
-```
+
 The Intel® Neural Compressor quantization.fit() function will return a best quantized model under time constraint.
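
For reference, the flow this README walks through can be sketched end to end as below. This is a minimal sketch, assuming Neural Compressor's `quantization.fit()` API as referenced in the README; the model paths, `calib_dataloader`, and `evaluate` names are placeholders for the objects the example builds itself, not code from this commit.

```python
# Minimal sketch of the post-training quantization flow the README describes.
# The model paths, calib_dataloader, and evaluate() below are hypothetical
# placeholders for the objects the bert_base_mrpc example builds itself.
from neural_compressor import quantization
from neural_compressor.config import PostTrainingQuantConfig

config = PostTrainingQuantConfig(
    device="gpu",    # default config targets Intel CPUs; "gpu" is for Intel GPUs
    backend="itex",  # Intel Extension for TensorFlow backend, per the README
    inputs=["input_file", "batch_size"],
    outputs=["loss/Softmax:0", "IteratorGetNext:3"],
)

# fit() calibrates on the dataloader, tunes within the configured accuracy and
# time criteria, and returns the best quantized model it finds.
q_model = quantization.fit(
    model="./bert_fp32.pb",             # placeholder path to the FP32 frozen graph
    conf=config,
    calib_dataloader=calib_dataloader,  # dataloader built from the MRPC 'Dataset'
    eval_func=evaluate,                 # evaluation function shown in the diff above
)
q_model.save("./bert_int8.pb")          # placeholder output path
```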

examples/tensorflow/nlp/distilbert_base/quantization/ptq/README.md

Lines changed: 2 additions & 1 deletion
@@ -170,7 +170,8 @@ After prepare step is done, we add the code for quantization tuning to generate
 from neural_compressor.benchmark import fit
 from neural_compressor.config import BenchmarkConfig
 if ARGS.mode == 'performance':
-    fit(model, conf=BenchmarkConfig(), b_func=eval_func)
+    conf = BenchmarkConfig(cores_per_instance=28, num_of_instance=1)
+    fit(graph, conf, b_func=self.eval_func)
 elif ARGS.mode == 'accuracy':
     self.eval_func(graph)
 ```
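
As a usage note, the benchmark change above corresponds to roughly this standalone pattern. It is a sketch, not the example's full runner: `graph` and `eval_func` stand in for the example's own model object and evaluation function.

```python
# Sketch of the benchmark call this commit introduces. `graph` and `eval_func`
# are hypothetical stand-ins for the example's model and evaluation function.
from neural_compressor.benchmark import fit
from neural_compressor.config import BenchmarkConfig

# Pin each measurement instance to 28 cores and launch a single instance,
# matching the configuration added in the diff.
conf = BenchmarkConfig(cores_per_instance=28, num_of_instance=1)
fit(graph, conf, b_func=eval_func)  # b_func is invoked to time the model
```

Relative to the old `BenchmarkConfig()` default, pinning `cores_per_instance` makes the throughput measurement reproducible on hosts with different core counts.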
