Commit 4c3a94e

update docstring

Signed-off-by: yuwenzho <[email protected]>
1 parent 50c710e commit 4c3a94e

File tree: 1 file changed, +16 -6 lines changed

neural_compressor/experimental/export/torch2onnx.py

Lines changed: 16 additions & 6 deletions
@@ -366,8 +366,10 @@ def qdq_model_use_int32_bias(
     int8_onnx_model,
     quantize_nodes,
 ):
-    """Export a QDQ model with recalculated int32 bias and remapped input scale and zero point
-    for nn.quantized.Linear module.
+    """Execute post-process on QDQ int8 model with recipe 2.
+
+    Export a QDQ model with recalculated int32 bias and remapped input scale
+    and zero point for nn.quantized.Linear module.
 
     Args:
         int8_onnx_model (ModelProto): onnx int8 model to process.
@@ -425,7 +427,9 @@ def qdq_model_use_output_scale_zp(
     int8_onnx_model,
     quantize_nodes,
 ):
-    """Export a QDQ model with FP32 bias and remapped in/output scale and zero point
+    """Execute post-process on QDQ int8 model with recipe 3.
+
+    Export a QDQ model with FP32 bias and remapped in/output scale and zero point
     for nn.quantized.Linear module.
 
     Args:
@@ -465,7 +469,9 @@ def qdq_model_use_output_scale_zp(
 def qop_model_default(
     int8_onnx_model
 ):
-    """Export a QOperator model with FP32 bias and remapped input scale and zero point
+    """Execute post-process on QOperator int8 model with recipe 1.
+
+    Export a QOperator model with FP32 bias and remapped input scale and zero point
     for nn.quantized.Linear module.
 
     Args:
@@ -510,7 +516,9 @@ def qop_model_default(
 def qop_model_use_int32_bias(
     int8_onnx_model
 ):
-    """Export a QOperator model with recalculated int32 bias and remapped input scale and zero point
+    """Execute post-process on QOperator int8 model with recipe 2.
+
+    Export a QOperator model with recalculated int32 bias and remapped input scale and zero point
     for nn.quantized.Linear module.
 
     Args:
@@ -559,7 +567,9 @@ def qop_model_use_int32_bias(
 def qop_model_use_output_scale_zp(
     int8_onnx_model
 ):
-    """Export a QOperator model with FP32 bias and remapped in/output scale and zero point
+    """Execute post-process on QOperator int8 model with recipe 3.
+
+    Export a QOperator model with FP32 bias and remapped in/output scale and zero point
     for nn.quantized.Linear module.
 
     Args:
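For orientation, the five helpers touched here post-process an int8 ONNX model after export, one per recipe: recipe 1 keeps the FP32 bias and remaps the input scale and zero point, recipe 2 recalculates an int32 bias, and recipe 3 remaps the output scale and zero point as well. Below is a minimal usage sketch for the one-argument QOperator helpers, assuming they return the processed ModelProto (the diff shows their signatures but not their return statements) and using placeholder file paths:

import onnx

from neural_compressor.experimental.export.torch2onnx import (
    qop_model_default,
    qop_model_use_int32_bias,
)

# Load a QOperator-format int8 ONNX model produced by an earlier export
# step (the path is a placeholder).
int8_onnx_model = onnx.load("int8_model.onnx")

# Recipe 1: keep FP32 bias and remap input scale/zero point for
# nn.quantized.Linear modules (assumes the helper returns the ModelProto).
int8_onnx_model = qop_model_default(int8_onnx_model)

# Recipe 2 would instead recalculate the bias as int32:
# int8_onnx_model = qop_model_use_int32_bias(int8_onnx_model)

onnx.save(int8_onnx_model, "int8_model_postprocessed.onnx")

The QDQ variants (qdq_model_use_int32_bias, qdq_model_use_output_scale_zp) additionally take a quantize_nodes argument, as shown in the diff above.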
