10 changes: 5 additions & 5 deletions neural_coder/__main__.py
@@ -28,8 +28,8 @@ def parse_args():
parser.add_argument("--opt", type=str, default="",
help="optimization feature to enable")

parser.add_argument("--strategy", type=str, default="static",
help="quantization strategy")
parser.add_argument("--approach", type=str, default="static",
help="quantization approach (strategy)")

parser.add_argument('--config', type=str, default="",
help='quantization configuration file path')
@@ -53,11 +53,11 @@ def parse_args():
 # optimize on copied script with Neural Coder
 from neural_coder import enable
 if args.opt == "":
-    if args.strategy == "static":
+    if args.approach == "static":
         features=["pytorch_inc_static_quant_fx"]
-    if args.strategy == "static_ipex":
+    if args.approach == "static_ipex":
         features=["pytorch_inc_static_quant_ipex"]
-    if args.strategy == "dynamic":
+    if args.approach == "dynamic":
         features=["pytorch_inc_dynamic_quant"]
 else:
     features=[args.opt]
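For readers mapping the renamed flag to its effect, the hunk above selects a Neural Coder feature from the `--approach` value. A minimal sketch of that selection as a dictionary lookup is shown below; this is an editorial illustration only, not part of the patch, with feature names copied verbatim from the diff.

```python
# Sketch (not part of this patch): the approach-to-feature mapping from the
# hunk above, expressed as a dictionary lookup. Feature names are copied
# verbatim from the diff.
APPROACH_TO_FEATURE = {
    "static": "pytorch_inc_static_quant_fx",
    "static_ipex": "pytorch_inc_static_quant_ipex",
    "dynamic": "pytorch_inc_dynamic_quant",
}

def resolve_features(opt: str, approach: str) -> list:
    """Return the Neural Coder feature list implied by the CLI arguments."""
    if opt:
        return [opt]
    return [APPROACH_TO_FEATURE[approach]]
```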
4 changes: 2 additions & 2 deletions neural_coder/docs/PythonLauncher.md
@@ -27,7 +27,7 @@ Note: Any modification on the optimized code ```run_glue_optimized.py``` will be

 Users can specify which Deep Learning optimization they want to conduct using ```--opt``` argument. The list of supported Deep Learning optimization features can be found [here](SupportMatrix.md).
 
-Note that if specifically optimizing with INT8 quantization by Intel® Neural Compressor, ```--strategy``` argument can be specified with either ```static```, ```static_ipex``` or ```dynamic```. For example, to run INT8 dynamic quantization by Intel® Neural Compressor instead of the default static quantization:
+Note that when optimizing specifically with INT8 quantization by Intel® Neural Compressor, the ```--approach``` argument can be set to ```static```, ```static_ipex```, or ```dynamic``` to choose a quantization approach (strategy). For example, to run INT8 dynamic quantization by Intel® Neural Compressor instead of the default static quantization:
 ```bash
-python -m neural_coder --strategy dynamic run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_eval --output_dir result
+python -m neural_coder --approach dynamic run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_eval --output_dir result
 ```
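For completeness, the launcher command above drives the `enable` API that `__main__.py` imports. The sketch below is an approximation: the `code` and `features` parameter names are assumptions rather than a confirmed signature, so consult the neural_coder documentation before relying on them.

```python
# Hedged sketch: approximate Python-API equivalent of the launcher command
# above. The "code" and "features" parameter names are assumptions; check
# the neural_coder documentation for the actual enable() signature.
from neural_coder import enable

enable(
    code="run_glue.py",                      # script to optimize
    features=["pytorch_inc_dynamic_quant"],  # INT8 dynamic quantization
)
```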