diff --git a/docs/source/faq.md b/docs/source/faq.md
index 949b8ce21fc..cc38ab1ea81 100644
--- a/docs/source/faq.md
+++ b/docs/source/faq.md
@@ -32,3 +32,19 @@ torch._C._LinAlgError: linalg.cholesky: The factorization could not be completed
 [AutoGPTQ/AutoGPTQ#196](https://github.com/AutoGPTQ/AutoGPTQ/issues/196).
 Try increasing `percdamp` (percent of the average Hessian diagonal to use for dampening),
 or increasing `nsamples` (the number of calibration samples).
+#### Issue 7:
+If you run GPTQ quantization with the transformers-like API on an XPU device, you may encounter the following error:
+```shell
+[ERROR][modeling_auto.py:128] index 133 is out of bounds for dimension 0 with size 128
+[ERROR][modeling_auto.py:129] Saved low bit model loading failed, please check your model.
+```
+**Solution:**
+The XPU device does not support `g_idx` for GPTQ quantization yet, so set `desc_act=False` when quantizing. Please stay tuned for `g_idx` support.
+#### Issue 8:
+`UnicodeEncodeError: 'charmap' codec can't encode character '\u2191' in position 195: character maps to <undefined>`
+**Solution:**
+Set the `PYTHONIOENCODING` environment variable to UTF-8 before running (`set` on Windows, `export` on Linux):
+```shell
+set PYTHONIOENCODING=UTF-8
+export PYTHONIOENCODING=UTF-8
+```
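
For Issue 7, the workaround is to quantize without activation reordering so that no `g_idx` tensor is produced. Below is a minimal sketch of that idea, assuming the Hugging Face `transformers.GPTQConfig` naming and a placeholder model id; the transformers-like API referenced in the FAQ may expose the same `desc_act` flag through its own config class.

```python
# Minimal sketch (not part of the patch): quantize with desc_act=False so GPTQ
# does not emit `g_idx`, which the XPU loading path cannot handle yet.
# `GPTQConfig` is the Hugging Face transformers class; the model id and the
# output directory are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=False,      # key setting: no activation reordering, no g_idx
    dataset="c4",        # calibration dataset
    tokenizer=tokenizer,
)

# Quantize while loading, then save the low-bit checkpoint; the saved model
# can then be reloaded on the XPU device without hitting the g_idx error.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
model.save_pretrained("opt-125m-gptq-4bit")
tokenizer.save_pretrained("opt-125m-gptq-4bit")
```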
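
For Issue 8, note that `PYTHONIOENCODING` is read at interpreter startup, so it must be set before Python launches. As an illustrative alternative (not mentioned in the patch), Python 3.7+ can switch the already-open standard streams to UTF-8 from inside the script:

```python
# Illustrative alternative to PYTHONIOENCODING: re-encode the standard streams
# as UTF-8 at the top of the script (Python 3.7+). Setting
# os.environ["PYTHONIOENCODING"] at runtime would be too late, since the
# variable is only consulted when the interpreter starts.
import sys

sys.stdout.reconfigure(encoding="utf-8")
sys.stderr.reconfigure(encoding="utf-8")

print("\u2191")  # the arrow character from the error message now prints fine
```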