
Commit eabcc64

ulivnelinoybu authored and committed

[SW-224836] disable test_mixed_precision_gptq_fp8_quant_only_nlp (#208)
Co-authored-by: linoy buchnik <[email protected]>
Signed-off-by: Xin He <[email protected]>

1 parent b3e67dd commit eabcc64

File tree

1 file changed: +1 -1 lines changed

test/3x/torch/quantization/fp8_quant/test_gptq_mixed_precision.py

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ def teardown_class(self):
     Calibrates the w4a16 model on N num of samples, and compares the same samples with the hybrid model.
     """
     @torch.no_grad()
-    @pytest.mark.skip(reason="SW-223106 load model error")
+    @pytest.mark.skip(reason="This test is temporarily disabled SW-224890")
     def test_mixed_precision_gptq_fp8_quant_only_nlp(self):
         from neural_compressor.common import set_random_seed
         set_random_seed(12345)
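The change above swaps one `@pytest.mark.skip` reason string for another, keeping the test disabled. A minimal sketch of how this decorator pattern behaves (using a hypothetical test function, not the actual neural_compressor test):

```python
import pytest

# Hypothetical standalone test; the skip marker prevents pytest from
# ever executing the body and reports the reason instead.
@pytest.mark.skip(reason="This test is temporarily disabled SW-224890")
def test_example():
    assert False  # would fail, but is never run while skipped

# Applying the decorator records a "skip" mark on the function object,
# which pytest's collection phase reads before deciding to run it.
mark = test_example.pytestmark[0]
print(mark.name, mark.kwargs["reason"])
```

Because the marker carries the reason as metadata rather than modifying the test body, re-enabling the test later is a one-line deletion of the decorator.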

0 commit comments