Merged
Commits
137 commits
14f42a8
Feat(ST): add a interface for hawq(stage1)
yiliu30 Nov 8, 2022
e0ff732
hawq_metric.py
Nov 10, 2022
e81744e
pytorch.py
Nov 10, 2022
466ffb8
disable line 33
Nov 10, 2022
3fb9a23
add wenhuach test env
Nov 15, 2022
59bd29b
try to test mes strategy, have bug now
Nov 15, 2022
accec3c
change name
Nov 15, 2022
769cbc2
comment test
Nov 15, 2022
a9fecbb
add activation quantized loss eval
BiaoFangAIA Nov 15, 2022
8f9e355
fixed seed for dummy datasets
BiaoFangAIA Nov 15, 2022
11c7592
for independence hawq tuning strategic
BiaoFangAIA Nov 15, 2022
bf44c0e
add a fallback ut
yiliu30 Nov 15, 2022
eff5065
update test file
Nov 16, 2022
ed6a1fc
tiny update
Nov 17, 2022
883c3a4
weight hessian trace, not finished
Nov 17, 2022
a50cc14
bascially finished weight trace
Nov 18, 2022
2528605
enable activation gradient hook, activation trace is not finished
Nov 18, 2022
abbc4ae
reformat code
Nov 18, 2022
58128ec
fix a bug
Nov 18, 2022
26538ee
when reset the required grad, something goes wrong
Nov 21, 2022
8710a69
add trick imagenet dataset
Nov 21, 2022
cda3029
fix fuese issue
Nov 21, 2022
df3c6e0
change to eval model, remove bias
Nov 21, 2022
084b4de
fixed weight to op bug
Nov 21, 2022
4f0961d
still have issues
Nov 21, 2022
16bd68e
WA for align the op name
yiliu30 Nov 22, 2022
e0ae1ce
change entry point to main function
Nov 22, 2022
3440ac5
get activations and the corresponding gradients
Nov 23, 2022
f895fb4
change fusefx position
Nov 23, 2022
4f7dd78
add weight quant loss, the current key is from quant model
Nov 23, 2022
a7b58c7
add weights_quant loss eval
BiaoFangAIA Nov 23, 2022
356dc2b
fixed weight trace issue
Nov 24, 2022
5f78a9c
fixed weight trace issue
Nov 24, 2022
df25db9
act traces have some issues
Nov 24, 2022
5a266ff
correct the qnt_weigths does't machted issue
BiaoFangAIA Nov 24, 2022
523303f
support activation traces
Nov 24, 2022
aae14ee
Merge branch 'z_wenhuach/qaunt_scoring' of https://github.com/intel-i…
Nov 24, 2022
f56ab18
only enable weight traces currently
Nov 24, 2022
007b336
merge weights quantization loss and trace
BiaoFangAIA Nov 25, 2022
420fc95
Update conf.yaml
BiaoFangAIA Nov 28, 2022
2707825
WA add loss for strategy
yiliu30 Nov 28, 2022
36731bc
Feat(ST): add a interface for hawq(stage1)
yiliu30 Nov 8, 2022
c02b5c1
hawq_metric.py
Nov 10, 2022
399c732
pytorch.py
Nov 10, 2022
3b5abbf
resolve conflicts
yiliu30 Nov 30, 2022
c7c1698
add wenhuach test env
Nov 15, 2022
581b21e
try to test mes strategy, have bug now
Nov 15, 2022
7bb75cc
change name
Nov 15, 2022
312b8aa
comment test
Nov 15, 2022
90ef088
add activation quantized loss eval
BiaoFangAIA Nov 15, 2022
84fe882
fixed seed for dummy datasets
BiaoFangAIA Nov 15, 2022
f221068
for independence hawq tuning strategic
BiaoFangAIA Nov 15, 2022
c6ebf79
add a fallback ut
yiliu30 Nov 15, 2022
69f6c2a
update test file
Nov 16, 2022
85f1d20
tiny update
Nov 17, 2022
a490187
weight hessian trace, not finished
Nov 17, 2022
6c683f4
bascially finished weight trace
Nov 18, 2022
03993e6
enable activation gradient hook, activation trace is not finished
Nov 18, 2022
20bed96
reformat code
Nov 18, 2022
806290a
fix a bug
Nov 18, 2022
4efc18c
when reset the required grad, something goes wrong
Nov 21, 2022
62dddf7
add trick imagenet dataset
Nov 21, 2022
755c38c
resolve conflicts
yiliu30 Nov 30, 2022
87793cf
change to eval model, remove bias
Nov 21, 2022
7a7520b
fixed weight to op bug
Nov 21, 2022
6cc95b0
still have issues
Nov 21, 2022
72a2385
WA for align the op name
yiliu30 Nov 22, 2022
71a4832
change entry point to main function
Nov 22, 2022
d9378c1
get activations and the corresponding gradients
Nov 23, 2022
17d381f
change fusefx position
Nov 23, 2022
d0a3fc7
add weight quant loss, the current key is from quant model
Nov 23, 2022
c466539
add weights_quant loss eval
BiaoFangAIA Nov 23, 2022
c4c00ca
fixed weight trace issue
Nov 24, 2022
85fac87
fixed weight trace issue
Nov 24, 2022
dc28247
act traces have some issues
Nov 24, 2022
deb413e
support activation traces
Nov 24, 2022
7c508d5
correct the qnt_weigths does't machted issue
BiaoFangAIA Nov 24, 2022
2520925
only enable weight traces currently
Nov 24, 2022
1530c94
merge weights quantization loss and trace
BiaoFangAIA Nov 25, 2022
6edf385
Update conf.yaml
BiaoFangAIA Nov 28, 2022
80299f5
WA add loss for strategy
yiliu30 Nov 28, 2022
4b96aa5
WA for hawq strategy loss
yiliu30 Nov 30, 2022
26061f2
change to default path
BiaoFangAIA Nov 30, 2022
426756e
resolve conflicts
yiliu30 Nov 30, 2022
31b11ff
remove useless code
yiliu30 Nov 30, 2022
5b813ea
update ut
yiliu30 Nov 30, 2022
152774f
remove WA for hawq loss
yiliu30 Nov 30, 2022
5174c80
remove hard code for baseline
yiliu30 Dec 1, 2022
c9a16ae
add efficientnet_b0_fx model
BiaoFangAIA Dec 1, 2022
a64c570
add act_qnt loss analysis
BiaoFangAIA Dec 1, 2022
81e04d5
comment some hard code for acc
yiliu30 Dec 2, 2022
4b201aa
Merge branch 'wenhua_hawq' of https://github.com/intel/neural-compres…
yiliu30 Dec 2, 2022
d7f0511
setting as disable act qnt loss analysis
BiaoFangAIA Dec 2, 2022
8a48f84
aligned the interface between adaptor and strategy
yiliu30 Dec 6, 2022
895cc20
add hawq metric logical
BiaoFangAIA Dec 6, 2022
cb8fd30
add call hawq function
BiaoFangAIA Dec 6, 2022
a550398
enable hawq interface
BiaoFangAIA Dec 6, 2022
eb05a1f
add strategy kwargs for new api
yiliu30 Dec 6, 2022
0afc168
fixed some bugs
yiliu30 Dec 6, 2022
b154e0c
add uts
yiliu30 Dec 6, 2022
1f5c859
remove the line for debug
yiliu30 Dec 6, 2022
fe03b25
delete some unused code
BiaoFangAIA Dec 6, 2022
be4f5a2
enable model.eval() first
BiaoFangAIA Dec 6, 2022
b0b697c
remove some useless lines
yiliu30 Dec 6, 2022
9633ebd
fixed some uts
yiliu30 Dec 6, 2022
0993195
add optimization_level in BaseQuantizationConfig
yiliu30 Dec 6, 2022
087bdc6
add optimization_level to conf and pythonic_conf
yiliu30 Dec 6, 2022
1a601d5
Merge remote-tracking branch 'origin/master' into wenhua_hawq
yiliu30 Dec 7, 2022
ebf875e
Merge remote-tracking branch 'origin/ly/newapi_st' into wenhua_hawq
yiliu30 Dec 7, 2022
75bd44c
rename test filename
yiliu30 Dec 7, 2022
1cc224e
remove some incorrect comments
yiliu30 Dec 7, 2022
8390d3a
remove UTs based on old API(YAML)
yiliu30 Dec 7, 2022
73c634f
remove some unused code
yiliu30 Dec 7, 2022
2aabc2c
add some comments
yiliu30 Dec 7, 2022
4e7a4a8
WA for mapping op
yiliu30 Dec 7, 2022
a3255bd
add efficientnet_b3_fx for test
yiliu30 Dec 7, 2022
7c9f5e2
Merge remote-tracking branch 'origin/master' into wenhua_hawq
yiliu30 Dec 7, 2022
5a36c59
support for adding hawq_v2 loss by new API
yiliu30 Dec 8, 2022
8c7aa58
remove some WA
yiliu30 Dec 8, 2022
971c723
Support 'Square', 'Sum', 'SparseSegmentSqrtN' BF16 ops in TensorFlow …
lvliang-intel Dec 7, 2022
4e7e7e2
Support Conv2D + BiasAdd + Relu + Sum fusion (#221)
lvliang-intel Dec 7, 2022
620c5f1
update azure pipeline (#229)
chensuyue Dec 7, 2022
7ffbbf1
Add export examples for new API (#225)
xin3he Dec 7, 2022
f9008e2
support for adding hawq_v2 loss by new API
yiliu30 Dec 8, 2022
0941d55
resolve conflicts
yiliu30 Dec 8, 2022
0d8f0e8
enable trace type Tensor->float
BiaoFangAIA Dec 8, 2022
8350241
cancel Max iter times for debugging
BiaoFangAIA Dec 8, 2022
cf1a7b4
Merge branch 'master' into wenhua_hawq
yiliu30 Dec 8, 2022
8b79938
revert change for test
yiliu30 Dec 8, 2022
04fc7ae
fixed some bugs
yiliu30 Dec 8, 2022
953d861
revert change for test
yiliu30 Dec 8, 2022
2e14eb1
add more log info
yiliu30 Dec 8, 2022
52ee89d
add skip first as arg
yiliu30 Dec 8, 2022
6aac6c5
fixed some format error
yiliu30 Dec 9, 2022
b8b76d7
resolve conflicts
yiliu30 Dec 9, 2022
e63195c
resolved the conflicts
yiliu30 Dec 9, 2022
36137c2
revert some change for test
yiliu30 Dec 9, 2022
18 changes: 18 additions & 0 deletions examples/.config/model_params_pytorch.json
Original file line number Diff line number Diff line change
@@ -9,6 +9,24 @@
"batch_size": 100,
"new_benchmark": false
},
"efficientnet_b0_fx": {
"model_src_dir": "image_recognition/torchvision_models/quantization/ptq/cpu/fx/",
"dataset_location": "/tf_dataset/pytorch/ImageNet/raw",
"input_model": "",
"yaml": "conf.yaml",
"strategy": "hawq_v2",
"batch_size": 100,
"new_benchmark": false
},
"efficientnet_b3_fx": {
"model_src_dir": "image_recognition/torchvision_models/quantization/ptq/cpu/fx/",
"dataset_location": "/tf_dataset/pytorch/ImageNet/raw",
"input_model": "",
"yaml": "conf.yaml",
"strategy": "hawq_v2",
"batch_size": 100,
"new_benchmark": false
},
"resnet18_fx": {
"model_src_dir": "image_recognition/torchvision_models/quantization/ptq/cpu/fx/",
"dataset_location": "/tf_dataset/pytorch/ImageNet/raw",
@@ -77,4 +77,4 @@ tuning:
relative: 0.01 # optional. default value is relative, other value is absolute. this example allows relative accuracy loss: 1%.
exit_policy:
timeout: 0 # optional. tuning timeout (seconds). default value is 0 which means early stop. combine with max_trials field to decide when to exit.
random_seed: 9527 # optional. random seed for deterministic tuning.
random_seed: 9527 # optional. random seed for deterministic tuning.
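
The model_params_pytorch.json entries above set `"strategy": "hawq_v2"`, and the conf.yaml hunk shows the tuning section it plugs into. For context, a hypothetical sketch of how the strategy selection would look in that YAML style (the `strategy.name` field is assumed from the old YAML API's conventions, not shown in this diff):

```yaml
tuning:
  strategy:
    name: hawq_v2          # assumed field: select the Hessian-trace-driven HAWQ-v2 tuning strategy
  accuracy_criterion:
    relative: 0.01         # allow up to 1% relative accuracy loss
  exit_policy:
    timeout: 0             # 0 means early stop; combine with max_trials
  random_seed: 9527        # deterministic tuning
```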
29 changes: 28 additions & 1 deletion neural_compressor/adaptor/pytorch.py
@@ -30,7 +30,6 @@
from .query import QueryBackendCapability
from ..experimental.data.dataloaders.base_dataloader import BaseDataLoader


torch = LazyImport("torch")
json = LazyImport("json")
hvd = LazyImport("horovod.torch")
@@ -1094,6 +1093,34 @@ def is_fused_module(self, module):
return True
else:
return False

def calculate_hessian_trace(self,
                            fp32_model,
                            dataloader,
                            q_model,
                            criterion,
                            enable_act=False):
    """Calculate the Hessian trace.

    Args:
        fp32_model: The original FP32 model.
        dataloader: The dataloader used to compute the gradients.
        q_model: The INT8 model.
        criterion: The loss function used to compute the Hessian trace. # loss = criterion(output, target)
        enable_act: Whether to also account for activation quantization error.

    Returns:
        hessian_trace (Dict[Tuple, float]), key: (op_name, op_type); value: Hessian trace.
    """
    from .torch_utils.hawq_metric import hawq_top
    op_to_traces = hawq_top(fp32_model=fp32_model,
                            dataloader=dataloader,
                            q_model=q_model,
                            criterion=criterion,
                            enable_act=enable_act)
    return op_to_traces


unify_op_type_mapping = {
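
The per-op Hessian traces returned by `hawq_top` are, in the HAWQ-v2 paper, estimated stochastically with Hutchinson's method: trace(H) = E[vᵀHv] for random Rademacher vectors v (entries ±1 with equal probability). A minimal, self-contained sketch of that estimator in pure Python, independent of this PR's actual implementation (for illustration it enumerates every sign vector, so the average is exact rather than sampled):

```python
import itertools

def matvec(H, v):
    """Multiply matrix H (list of rows) by vector v."""
    return [sum(h_ij * v_j for h_ij, v_j in zip(row, v)) for row in H]

def hutchinson_trace(H):
    """Estimate trace(H) as the mean of v^T H v over Rademacher vectors v.

    Enumerating all 2^n sign vectors makes the mean exact: the off-diagonal
    cross terms v_i * v_j (i != j) cancel out, leaving only the diagonal of H.
    In practice one draws a handful of random v's and uses Hessian-vector
    products from autograd instead of an explicit matrix.
    """
    n = len(H)
    total, count = 0.0, 0
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        Hv = matvec(H, list(signs))
        total += sum(s * hv for s, hv in zip(signs, Hv))
        count += 1
    return total / count

# A symmetric toy "Hessian" whose exact trace is 2 + 5 + 1 = 8.
H = [[2.0, 1.0, 0.0],
     [1.0, 5.0, 3.0],
     [0.0, 3.0, 1.0]]
print(hutchinson_trace(H))  # prints 8.0: exact, since all sign vectors are enumerated
```

HAWQ-v2 then uses these per-op traces as sensitivity scores: ops with larger traces are more sensitive to quantization noise and are candidates to keep at higher precision, which matches how this PR's strategy ranks fallback candidates.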