Closed
70 commits
ba48deb
update quantization api
changwangss Nov 30, 2022
83825af
Set default value for use_bf16 and fixed random seed setting error (#…
PenghuiCheng Dec 1, 2022
df8c5f4
Turn off ITEX optimization pass (#196)
lvliang-intel Dec 2, 2022
fb5560e
update example json
changwangss Dec 2, 2022
694f22b
Fixed UT error for bf16 op list for QAT mode (#200)
PenghuiCheng Dec 2, 2022
5d22e01
Disable multi instance for ITEX GPU benchmark (#204)
lvliang-intel Dec 2, 2022
0346c53
Revert "remove op-wise cfgs for testing. (#1521)" (#202)
intel-zhangyi Dec 2, 2022
01899d6
add examples for GPTJ (#162)
changwangss Dec 2, 2022
b48ff81
Neural Coder mod launcher arg: "strategy" to "approach" (#201)
kaikaiyao Dec 3, 2022
5855117
update publication_list.md (#212)
chensuyue Dec 5, 2022
ebe9e2a
Added distributed training support for distillation of CNN-2. (#208)
XinyuYe-Intel Dec 5, 2022
d33ebe6
Added distributed training support for distillation of MobileNetV2. …
XinyuYe-Intel Dec 5, 2022
08fe8dd
fix load issue (#194)
changwangss Dec 5, 2022
e9be412
Fix NTM-One-Shot failed with KeyError (#210)
lvliang-intel Dec 5, 2022
7fb76c4
Fix TextRNN and centernet_hg104 tuning issue (#171)
lvliang-intel Dec 5, 2022
7e78a95
Neural Coder enable backend support for intel_extension_for_transform…
kaikaiyao Dec 5, 2022
895300a
Neural Coder launcher debug (#215)
kaikaiyao Dec 5, 2022
d28bd14
Neural Coder enable support of TensorFlow/Keras models to quantize (p…
kaikaiyao Dec 5, 2022
0914e98
Fix Seq2seqAttn model tuning issue (#172)
lvliang-intel Dec 5, 2022
4d16046
[Strategy] Adjust the hints for import hyperopt and sigopt (#213)
yiliu30 Dec 5, 2022
0a88e6c
Support QuantizedConv + BiasAdd + Activation + Dequantize fusion (#207)
lvliang-intel Dec 5, 2022
b16fb1a
Neural Coder debug for HuggingFace run_qa.py (#220)
Lfish99 Dec 6, 2022
09c64db
modify model test condition (#209)
XuehaoSun Dec 6, 2022
cbd5318
Jianyuzh add vgg19 tfhub example (#183)
NeoZhangJianyu Dec 6, 2022
664943c
New Export with API 2.0 (#195)
xin3he Dec 6, 2022
dd39ca0
Neural Coder enable VS Code extension (#205)
WenjiaoYue Dec 6, 2022
98d3c83
Support 'Square', 'Sum', 'SparseSegmentSqrtN' BF16 ops in TensorFlow …
lvliang-intel Dec 7, 2022
c0fa310
Support Conv2D + BiasAdd + Relu + Sum fusion (#221)
lvliang-intel Dec 7, 2022
59c4248
update azure pipeline (#229)
chensuyue Dec 7, 2022
3a5afba
Add export examples for new API (#225)
xin3he Dec 7, 2022
a435389
Added optimization level for new API & support sub-parameter setting …
yiliu30 Dec 7, 2022
4612e08
Add optimization level(conservative tuning) for tuning strategy (#227)
yiliu30 Dec 7, 2022
1df80c1
Neural Coder update documentation for INC2.0 (#232)
kaikaiyao Dec 7, 2022
60cfec6
Enable BigDL Nano API in Neural Coder (#238)
yuwei-work Dec 8, 2022
1396241
Neural Coder release note for INC 2.0 (#240)
kaikaiyao Dec 8, 2022
8a043b2
fix some typos in rls note readme (#241)
kaikaiyao Dec 8, 2022
b3281d9
Fix ITEX OOB model benchmark issue (#242)
lvliang-intel Dec 8, 2022
c39e194
Bump certifi version (#245)
dependabot[bot] Dec 9, 2022
2571e3c
Updated multinodes distillation results of MobileNetV2-0.35 and CNN-2…
XinyuYe-Intel Dec 9, 2022
e1ae206
Enhance ITEX benchmark UT check (#231)
lvliang-intel Dec 9, 2022
47b51ed
fix models readme (#248)
ronggegu Dec 9, 2022
2483a84
Support different backends for adaptors (#157)
mengniwang95 Dec 9, 2022
583545b
Fix Windows multi-instance bug (#216)
Spycsh Dec 9, 2022
7ffd5e5
fixed the ut for sigopt (#249)
yiliu30 Dec 9, 2022
80311f6
Add mse_v2 tuning strategy (#218)
intel-zhangyi Dec 9, 2022
70b7b9d
update spell check dict (#247)
ronggegu Dec 9, 2022
c61be34
azure ut coverage report fix (#252)
XuehaoSun Dec 9, 2022
1deb7d2
Refactor Quantization Aware Training of TF backend (#250)
zehao-intel Dec 9, 2022
8b652cd
update publications (#255)
chensuyue Dec 9, 2022
83018ef
Add hawq_v2 tuning strategy (#230)
BiaoFangAIA Dec 9, 2022
a230726
Fixed pruning and distillation bug and remove invalid code (#251)
PenghuiCheng Dec 9, 2022
4fa7531
add keras-in/keras-out to INC (#243)
ClarkChin08 Dec 10, 2022
c53e403
add warning when meets unsupported config (#236)
xin3he Dec 11, 2022
30803cf
Remove data, metric and common to neural_compressor (#244)
changwangss Dec 11, 2022
a18ff5c
CI UT enhance (#258)
chensuyue Dec 12, 2022
fcbbcc7
Neural Coder enable launcher bench (#260)
kaikaiyao Dec 12, 2022
d6f417b
Pruning/Integrate new-developed pruning API with old one (#257)
wenhuach21 Dec 12, 2022
40ab5a3
Enable Transformer LT search space for Dynamic Neural Architecture Se…
macsz Dec 12, 2022
e996a93
Export Qlinear to QDQ (#224)
mengniwang95 Dec 12, 2022
cde72c8
Update spr-base version number (#259)
lvliang-intel Dec 13, 2022
d6f9192
add pruning examples and docs (#262)
WeiweiZhang1 Dec 13, 2022
ae3cf56
Fixed calibration sampling size error and IPEX examples error (#264)
PenghuiCheng Dec 13, 2022
f21e4a3
Enhancement benchmark with dataloader (#269)
PenghuiCheng Dec 14, 2022
7d1e1f9
Fix TF QAT UT issues (#266)
zehao-intel Dec 15, 2022
764357b
Add recipe for TRT EP (#278)
mengniwang95 Dec 15, 2022
0878bea
Refine Keras Examples for INC New API (#219)
zehao-intel Dec 15, 2022
e606b02
update quantization api
changwangss Nov 30, 2022
6c605c1
update example json
changwangss Dec 2, 2022
ecc6e8b
add benchmark and fx+static for torchaudio
changwangss Dec 15, 2022
2d06c22
fix conflict
changwangss Dec 15, 2022
5 changes: 3 additions & 2 deletions .azure-pipelines/model-test.yml
Original file line number Diff line number Diff line change
@@ -45,7 +45,7 @@ parameters:
- ssd_mobilenet_v1_ckpt
# - ssd_resnet50_v1_ckpt
- inception_v1
- resnet50_fashion
# - resnet50_fashion
- darknet19
- densenet-121
- resnet-101
@@ -156,7 +156,8 @@ stages:
cd ${OUT_SCRIPT_PATH}
mkdir generated
mkdir last_generated
python -u collect_log_all.py --logs_dir $(OUT_SCRIPT_PATH) --output_dir generated
pip install requests
python -u collect_log_all.py --logs_dir $(OUT_SCRIPT_PATH) --output_dir generated --build_id=$(Build.BuildId)
displayName: "Collect all logs"
- task: DownloadPipelineArtifact@2
continueOnError: true
4 changes: 2 additions & 2 deletions .azure-pipelines/scripts/codeScan/pylint/pylint.sh
@@ -10,13 +10,13 @@ pip install -r /neural-compressor/requirements.txt
pip install torch==1.12.0

python -m pylint -f json --disable=R,C,W,E1129 --enable=line-too-long --max-line-length=120 --extension-pkg-whitelist=numpy --ignored-classes=TensorProto,NodeProto \
--ignored-modules=tensorflow,torch,torch.quantization,torch.tensor,torchvision,mxnet,onnx,onnxruntime,intel_extension_for_pytorch /neural-compressor/neural_compressor \
--ignored-modules=tensorflow,torch,torch.quantization,torch.tensor,torchvision,fairseq,mxnet,onnx,onnxruntime,intel_extension_for_pytorch /neural-compressor/neural_compressor \
> $log_dir/pylint.json

exit_code=$?

$BOLD_YELLOW && echo " ----------------- Current pylint cmd start --------------------------" && $RESET
echo "python -m pylint -f json --disable=R,C,W,E1129 --enable=line-too-long --max-line-length=120 --extension-pkg-whitelist=numpy --ignored-classes=TensorProto,NodeProto --ignored-modules=tensorflow,torch,torch.quantization,torch.tensor,torchvision,mxnet,onnx,onnxruntime,intel_extension_for_pytorch /neural-compressor/neural_compressor > $log_dir/pylint.json"
echo "python -m pylint -f json --disable=R,C,W,E1129 --enable=line-too-long --max-line-length=120 --extension-pkg-whitelist=numpy --ignored-classes=TensorProto,NodeProto --ignored-modules=tensorflow,torch,torch.quantization,torch.tensor,torchvision,fairseq,mxnet,onnx,onnxruntime,intel_extension_for_pytorch /neural-compressor/neural_compressor > $log_dir/pylint.json"
$BOLD_YELLOW && echo " ----------------- Current pylint cmd end --------------------------" && $RESET

$BOLD_YELLOW && echo " ----------------- Current log file output start --------------------------" && $RESET
@@ -55,6 +55,7 @@ amazonlinux
Amodei
AmpConf
AMX
amx
analytics
Analytics
Anastasiia
@@ -149,6 +150,7 @@ berts
bertsquad
BertTokenizer
bfloat
blockwise
BFP
BGR
Bianchi
@@ -326,6 +328,7 @@ convolutional
Convolutional
ConvPerStage
ConvReLU
cooldown
copt
coreml
CoreML
@@ -546,6 +549,7 @@ ensp
entrypoint
enum
env
environ
eq
erf
Erf
@@ -696,6 +700,7 @@ Goyal
gpg
GPG
gpt
GPTJ
gpu
gpus
GPUs
@@ -738,6 +743,7 @@ horovodrun
hostfile
Hounsfield
howpublished
hyp
HqEgzS
href
html
@@ -787,6 +793,7 @@ IML
impl
ImportError
IMS
ibean
inceptionresnetv
InceptionResNetV
inceptionv
@@ -831,6 +838,7 @@ ipc
ipex
IPEX
ipynb
ipynbrun
ipython
ir
irv
@@ -843,6 +851,7 @@ IssueQueryThreads
iter
IteratorGetNext
iters
intrinsics
Jäger
jemalloc
Jens
@@ -1173,6 +1182,7 @@ ngatang
NGPUS
ngram
NHWC
ni
NIC
nifti
niftis
@@ -1234,8 +1244,11 @@ nvidia
NVIDIA
NVIDIA's
nvme
nw
Nx
NxM
nyu
oc
ok
ol
Omer
@@ -1245,6 +1258,7 @@ oneapi
oneAPI
onednn
oneDNN
oneshot
onlinedocs
onnx
ONNX
@@ -1783,6 +1797,7 @@ TestSettings
tf
TF
TFBertForSequenceClassification
tfhub
tflite
tfp
tfrecord
@@ -1878,6 +1893,7 @@ UI
UID
uint
uk
ultralytics
un
uncomment
uncompress
@@ -1888,6 +1904,7 @@ unidecode
uniq
unittest
unref
unscale
unsqueeze
unstack
upenn
@@ -2114,6 +2131,7 @@ tensorrt
hardwares
BenchmarkConf
PruningConf
Pruning's
DistillationConf
grey
ModelZoo
@@ -2379,3 +2397,66 @@ grappler
amsgrad
qoperator
apis
CPz
PostTrainingQuantConfig
dgpu
Nsh
UmK
fe
vmware
keepachangelog
vscode
IntelNeuralCompressor
SettingsPython
VSCode
argparse
autoEnabling
clickAuto
clickEnable
clickSetting
connectSSH
enableHistory
historyDetail
itemName
leftIcon
outPut
settingPath
topRight
visualstudio
amodio
dbaeumer
dropdown
eslint
registerCommand
tsl
viewlet
PythonLauncher
BigDL
BigDLNanoSupport
Nano
bigdl
inferenceoptimizer
nano
SageMaker
bb
beba
ccdb
ceba
deeb
ebbce
efe
npmjs
AWSSageMakerSupport
sagemaker
xpu
dgpu
BenchmarkConfig
QuantizationAwareTrainingConfig
Startup
doesn
startup
Ajanthan
WeightPruningConfig
Namhoon
Thalaiyasingam
Torr
@@ -4,8 +4,8 @@ matrix:
d: en_US.ISO8859-15
dictionary:
wordlists:
- ${DICT_DIR}/lpot_dict.txt
output: ${DICT_DIR}/lpot_dict.dic
- ${DICT_DIR}/inc_dict.txt
output: ${DICT_DIR}/inc_dict.dic
sources:
- ${REPO_DIR}/docs/source/*.md
- ${REPO_DIR}/*.md
41 changes: 39 additions & 2 deletions .azure-pipelines/scripts/models/collect_log_all.py
@@ -1,9 +1,11 @@
import argparse
import os
import requests

parser = argparse.ArgumentParser(allow_abbrev=False)
parser.add_argument("--logs_dir", type=str, default=".")
parser.add_argument("--output_dir", type=str, default=".")
parser.add_argument("--build_id", type=str, default="0")
args = parser.parse_args()
print(args)

@@ -12,20 +14,21 @@ def main():
file_dir = args.logs_dir
summary_content = ['OS;Platform;Framework;Version;Precision;Model;Mode;Type;BS;Value;Url\n']
tuning_info_content = ['OS;Platform;Framework;Version;Model;Strategy;Tune_time\n']
url_dict = parse_download_url()
# get full path of all files
for root, dirs, files in os.walk(file_dir):
for name in files:
file_name = os.path.join(root, name)
print(file_name)
if '_summary.log' in name:
for line in open(file_name, "r"):
# print(line)
if 'linux' in line:
line = line.replace("<url>", parse_summary_log(line, url_dict))
summary_content.append(line)
if '_tuning_info.log' in name:
for line in open(file_name, "r"):
# print(line)
if 'linux' in line:
line = line.replace("<url>", parse_tuning_log(line, url_dict))
tuning_info_content.append(line)
f = open(args.output_dir + '/summary.log', "a")
for summary in summary_content:
@@ -35,5 +38,39 @@ def main():
f2.writelines(str(tuning_info))


def parse_tuning_log(line, url_dict):
"""Parsing {Framework}-{Model}-tune.log to get tuning result"""
result = line.split(";")
OS, Platform, Framework, Version, Model, Strategy, Tune_time, Tuning_trials, URL, __ = result
file_name = f"{Framework}-{Model}-tune.log"
download_url = url_dict.get(f"{Framework}_{Model}")
download_url = f"{download_url}{file_name}"
return download_url


def parse_summary_log(line, url_dict):
"""Parse {Framework}-{Model}-tune.log to get benchmarking accuracy result"""
result = line.split(";")
OS, Platform, Framework, Version, Precision, Model, Mode, Type, BS, Value, Url = result
file_name = f"{Framework}-{Model}-tune.log"
download_url = url_dict.get(f"{Framework}_{Model}")
download_url = f"{download_url}{file_name}"
return download_url


def parse_download_url():
"""Get azure artifact information"""
azure_artifact_api_url = f'https://dev.azure.com/lpot-inc/neural-compressor/_apis/build/builds/{args.build_id}/artifacts?api-version=5.1'
azure_artifacts_data = dict(requests.get(azure_artifact_api_url).json().items())
artifact_count = azure_artifacts_data.get("count")
artifact_value = azure_artifacts_data.get("value")
url_dict = {}
for item in artifact_value:
artifact_download_url = item.get("resource").get("downloadUrl")
artifact_download_url = f"{artifact_download_url[:-3]}file&subPath=%2F"
url_dict[item.get("name")] = artifact_download_url
return url_dict


if __name__ == '__main__':
main()
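The URL rewrite at the heart of `parse_download_url` can be exercised in isolation. A minimal sketch (the helper name below is ours, not part of the PR): the Azure DevOps artifacts API returns a `format=zip` download link, and the script slices off the trailing `zip` and appends `file&subPath=%2F` so individual log files become directly addressable.

```python
def to_file_download_url(artifact_download_url: str) -> str:
    """Rewrite an Azure DevOps artifact 'zip' URL into a per-file URL.

    Mirrors the slicing in parse_download_url: drop the trailing 'zip'
    and append 'file&subPath=%2F' to address files inside the artifact.
    """
    return f"{artifact_download_url[:-3]}file&subPath=%2F"


# hypothetical sample URL, for illustration only
print(to_file_download_url("https://dev.azure.com/org/_apis/dl?format=zip"))
# → https://dev.azure.com/org/_apis/dl?format=file&subPath=%2F
```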
6 changes: 3 additions & 3 deletions .azure-pipelines/scripts/models/collect_log_model.py
@@ -133,9 +133,9 @@ def collect_log():
parse_tuning_line(line, tmp)
print(tmp)

results.append('{};{};{};{};FP32;{};Inference;Accuracy;1;{};{}\n'.format(OS, PLATFORM, args.framework, args.fwk_ver, args.model, tmp['fp32_acc'], URL))
results.append('{};{};{};{};INT8;{};Inference;Accuracy;1;{};{}\n'.format(OS, PLATFORM, args.framework, args.fwk_ver, args.model, tmp['int8_acc'], URL))
tuning_infos.append(';'.join([OS, PLATFORM, args.framework, args.fwk_ver, args.model, tmp['strategy'], str(tmp['tune_time']), str(tmp['tuning_trials']), URL, f"{round(tmp['max_mem_size'] / tmp['total_mem_size'] * 100, 4)}%"])+'\n')
results.append('{};{};{};{};FP32;{};Inference;Accuracy;1;{};{}\n'.format(OS, PLATFORM, args.framework, args.fwk_ver, args.model, tmp['fp32_acc'], "<url>"))
results.append('{};{};{};{};INT8;{};Inference;Accuracy;1;{};{}\n'.format(OS, PLATFORM, args.framework, args.fwk_ver, args.model, tmp['int8_acc'], "<url>"))
tuning_infos.append(';'.join([OS, PLATFORM, args.framework, args.fwk_ver, args.model, tmp['strategy'], str(tmp['tune_time']), str(tmp['tuning_trials']), "<url>", f"{round(tmp['max_mem_size'] / tmp['total_mem_size'] * 100, 4)}%"])+'\n')
# get model benchmark results
for precision in ['int8', 'fp32']:
throughput = 0.0
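With this change `collect_log_model.py` no longer embeds a final URL; it writes a literal `<url>` placeholder that `collect_log_all.py` resolves per model once the artifact URLs are known. A sketch of that hand-off (the helper name and sample values are ours; the field order follows the summary header `OS;Platform;Framework;Version;Precision;Model;Mode;Type;BS;Value;Url`):

```python
def fill_summary_url(line: str, url_dict: dict) -> str:
    """Resolve the '<url>' placeholder in one summary line.

    url_dict maps '{Framework}_{Model}' to an artifact base URL,
    as built by parse_download_url in collect_log_all.py.
    """
    fields = line.split(";")
    framework, model = fields[2], fields[5]
    base = url_dict.get(f"{framework}_{model}", "")
    return line.replace("<url>", f"{base}{framework}-{model}-tune.log")
```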
17 changes: 11 additions & 6 deletions .azure-pipelines/scripts/models/generate_report.sh
@@ -198,7 +198,7 @@ function generate_html_core {
printf("<td style=\"background-color:#90EE90\">%.2f</td>", target);
}else if(target < 1) {
printf("<td style=\"background-color:#FFD2D2\">%.2f</td>", target);
job_status = "fail"
perf_status = "fail"
}else{
printf("<td>%.2f</td>", target);
}
@@ -233,11 +233,11 @@ function generate_html_core {
printf("<td style=\"%s\" colspan=2>%.2f %</td>", status_png, target*100);
} else {
target = new_result / previous_result;
if(target <= 1.104 && target >= 0.895) {
if(target <= 1.054 && target >= 0.945) {
status_png = "background-color:#90EE90";
} else {
status_png = "background-color:#FFD2D2";
job_status = "fail"
perf_status = "fail"
}
printf("<td style=\"%s\" colspan=2>%.2f</td>", status_png, target);
}
@@ -265,15 +265,15 @@ function generate_html_core {
status_png = "background-color:#90EE90";
} else {
status_png = "background-color:#FFD2D2";
job_status = "fail"
ratio_status = "fail"
}
printf("<td style=\"%s\">%.2f</td>", status_png, target);
} else {
if (new_result == nan && previous_result == nan) {
printf("<td class=\"col-cell col-cell3\"></td>");
} else {
if (new_result == nan) {
job_status = "fail"
ratio_status = "fail"
status_png = "background-color:#FFD2D2";
printf("<td style=\"%s\"></td>", status_png);
} else {
@@ -285,6 +285,8 @@ function generate_html_core {

BEGIN {
job_status = "pass"
perf_status = "pass"
ratio_status = "pass"
// issue list
jira_mobilenet = "https://jira01.devtools.intel.com/browse/PADDLEQ-384";
jira_resnext = "https://jira01.devtools.intel.com/browse/PADDLEQ-387";
@@ -378,8 +380,11 @@ function generate_html_core {

printf("</tr>\n");

status = (perf_status == "fail" && ratio_status == "fail") ? "fail" : "pass"
status = (job_status == "fail") ? "fail" : status

} END{
printf("\n%s", job_status);
printf("\n%s", status);
}
' >> ${output_dir}/report.html
job_state=$(tail -1 ${WORKSPACE}/report.html)
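The awk changes above tighten the acceptable performance ratio band to roughly ±5% (previously ±10%) and split the single `job_status` into separate perf and ratio verdicts, so a model is only marked failed when both checks fail. The decision logic, restated as a small Python sketch (function names are ours, for illustration):

```python
def ratio_ok(new_result: float, previous_result: float) -> bool:
    """Performance ratio check with the tightened band (0.945..1.054)."""
    ratio = new_result / previous_result
    return 0.945 <= ratio <= 1.054


def final_status(job_status: str, perf_status: str, ratio_status: str) -> str:
    """Fail only when both perf and ratio checks fail,
    unless the job itself already failed."""
    status = "fail" if (perf_status == "fail" and ratio_status == "fail") else "pass"
    return "fail" if job_status == "fail" else status
```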