Commit a76d154

Change symmetry to True to generate golden data with zp=0
1 parent 46ccaf8 commit a76d154

2 files changed (+7 −7 lines changed)

tensorflow/lite/micro/kernels/testdata/lstm_test_data_generator.py (6 additions & 6 deletions)

@@ -17,15 +17,15 @@
 2. Print the intermediate step outputs inside the LSTM for a single step LSTM invocation (Get2X2GateOutputCheckData in .cc)
 3. Print the outputs for multi-step LSTM invocation (Get2X2LstmEvalCheckData in .cc)
 
-Every invocation gives three types information:
-1. Quantized output: kernel output in integer
+Every invocation gives three types information:
+1. Quantized output: kernel output in integer
 2. Dequantized output: Quantized output in floating point representation
 3. Float output: output from the floating point computation (i.e., float kernel)
-Note:
+Note:
 1. Change quantization settings in _KERNEL_CONFIG to see the outcomes from various quantization schema (e.g., 8x8 Vs. 16x8)
 2. Only single batch inference is supporte here. Change _GATE_TEST_DATA or _MULTISTEP_TEST_DATA to see kernel outputs on different input data
-3. The quantization computation here is not the exact as the c++ implementation. The integer calculation is mimiced here using floating point.
+3. The quantization computation here is not the exact as the c++ implementation. The integer calculation is emulated here using floating point.
 No fixed point math is implemented here. The purpose is to illustrate the computation procedure and possible quantization error accumulation, not for bit exactness.
 """
 from absl import app

@@ -38,7 +38,7 @@
 _KERNEL_CONFIG = {
     'quantization_settings': {
         'weight_bits': 8,
-        'activation_bits': 8,
+        'activation_bits': 16,
         'bias_bits': 32,
         'cell_bits': 16,
     },

@@ -88,7 +88,7 @@
 _MULTISTEP_TEST_DATA = {
     'init_hidden_state_vals': [0, 0],
     'init_cell_state_vals': [0, 0],
-    'input_data': [0.2, 0.3, 0.2, 0.3, 0.2, 0.3],  # three time steps
+    'input_data': [0.2, 0.3, 0.2, 0.3, 0.2, 0.3],  # three time steps
     'hidden_state_range': (-0.5, 0.7),
     'cell_state_range': [-8, 8],
     'input_data_range': [-1, 1]
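The commit message ties the `symmetry` flag to a zero point of 0: when the quantization range is forced to be symmetric around zero, the zero point lands exactly on 0, which is what the regenerated golden data relies on. A minimal sketch of that relationship (the `quantize_range` helper and its signature are hypothetical, not the actual TFLM utility):

```python
def quantize_range(min_val, max_val, num_bits, symmetric):
    """Compute (scale, zero_point) for a real-valued range.

    Hypothetical helper for illustration only. With symmetric=True the
    range is widened to [-m, m], so the zero point is always 0.
    """
    qmin = -(2 ** (num_bits - 1))
    qmax = 2 ** (num_bits - 1) - 1
    if symmetric:
        m = max(abs(min_val), abs(max_val))
        scale = m / qmax
        zero_point = 0  # symmetric range maps 0.0 exactly onto integer 0
    else:
        scale = (max_val - min_val) / (qmax - qmin)
        zero_point = int(round(qmin - min_val / scale))
    return scale, zero_point

# Hidden state range from _MULTISTEP_TEST_DATA above: (-0.5, 0.7)
scale, zp = quantize_range(-0.5, 0.7, num_bits=16, symmetric=True)
assert zp == 0  # golden data generated with zero point 0
```

With `symmetric=False`, the same range would yield a nonzero zero point, which is the case this commit moves away from.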

tensorflow/lite/micro/kernels/testdata/lstm_test_data_utils.py (1 addition & 1 deletion)

@@ -346,7 +346,7 @@ def __init__(
         np.array(init_hidden_state_vals).reshape((-1, 1)),
         hiddens_state_range[0],
         hiddens_state_range[1],
-        False,
+        True,
         self.quantization_settings['activation_bits'],
     )
     self.cell_state_tensor = assemble_quantized_tensor(
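The generator's docstring notes that the integer calculation is emulated in floating point, with no fixed point math, to illustrate quantization error accumulation rather than bit exactness. A minimal sketch of such a quantize/dequantize round trip (the `fake_quant` helper is hypothetical, not the actual TFLM code; the scale assumes the 8-bit symmetric case over input_data_range [-1, 1], so zero point is 0):

```python
import numpy as np

def fake_quant(x, scale, zero_point, num_bits):
    """Emulate integer quantization in floating point (hypothetical sketch).

    Rounds to the integer grid, clips to the representable range, then
    dequantizes back, so the returned values carry the quantization error.
    """
    qmin = -(2 ** (num_bits - 1))
    qmax = 2 ** (num_bits - 1) - 1
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)  # quantized output
    return (q - zero_point) * scale  # dequantized output

x = np.array([0.2, 0.3, 0.2, 0.3, 0.2, 0.3])  # _MULTISTEP_TEST_DATA input
scale = 1.0 / 127  # 8-bit symmetric over [-1, 1], zero point 0
dq = fake_quant(x, scale, zero_point=0, num_bits=8)
err = np.abs(dq - x)  # error vs. the float reference, at most scale / 2
```

Comparing `dq` against the float output is exactly the quantized-vs-float comparison the generator script prints for each invocation.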

0 commit comments