Commit c4ae066

[SW-224836] disable test_mixed_precision_gptq_fp8_quant_only_nlp (#208)

Authored by ulivnelinoybu
Co-authored-by: linoy buchnik <[email protected]>

1 parent 6168a75

File tree

1 file changed: +1 −0 lines changed

test/3x/torch/quantization/fp8_quant/test_gptq_mixed_precision.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -63,6 +63,7 @@ def teardown_class(self):
     Calibrates the w4a16 model on N num of samples, and compares the same samples with the hybrid model.
     """
     @torch.no_grad()
+    @pytest.mark.skip(reason="This test is temporarily disabled SW-224890")
     def test_mixed_precision_gptq_fp8_quant_only_nlp(self):
         from neural_compressor.common import set_random_seed
         set_random_seed(12345)
```
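The added line disables the test with pytest's `skip` marker, so the runner reports it as skipped instead of executing it. A minimal, self-contained sketch of the same pattern (the test name and reason below are hypothetical, not from this commit):

```python
import pytest

@pytest.mark.skip(reason="temporarily disabled; see tracking ticket")
def test_example():
    # This body never runs while the skip marker is present;
    # pytest reports the test as "skipped" with the given reason.
    assert False
```

Because the marker carries a `reason` string, the tracking ticket (here SW-224890) stays visible in the test report, which makes it easier to re-enable the test later.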

0 commit comments
