A Comprehensive Study on Post-Training Quantization for Large Language Models

03/15/2023
by Zhewei Yao, et al.

Post-training quantization (PTQ) has recently been shown to be a promising method for reducing the memory consumption and/or compute cost of large language models (LLMs). However, a comprehensive study of the effects of different quantization schemes, model families, PTQ methods, and quantization bit precisions is still missing. In this work, we provide an extensive study of these components over tens of thousands of zero-shot experiments. Our results show that (1) fine-grained quantization and PTQ methods (instead of naive round-to-nearest quantization) are necessary to achieve good accuracy, and (2) higher bit precision (e.g., 5 bits) with coarse-grained quantization is more powerful than lower bit precision (e.g., 4 bits) with very fine-grained quantization (whose effective bit precision is similar to 5 bits). We also present recommendations on how to utilize quantization for LLMs of different sizes, and point out future opportunities and system work that are not resolved in this study.
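To make the granularity terminology concrete, the following is a minimal NumPy sketch, not the paper's implementation, of symmetric round-to-nearest (RTN) quantization with an adjustable group size. Per-row scaling corresponds to coarse-grained quantization, while a small group size (e.g., 64) corresponds to fine-grained quantization, which lowers quantization error but stores one extra scale per group and therefore raises the effective bit precision. The function name and parameters here are illustrative assumptions.

import numpy as np

def rtn_quantize(weights, num_bits=4, group_size=None):
    # Symmetric round-to-nearest (RTN) quantization sketch.
    # group_size=None -> one scale per row (coarse-grained);
    # a small group_size (e.g., 64) -> fine-grained quantization.
    w = np.asarray(weights, dtype=np.float32)
    rows, cols = w.shape
    gs = cols if group_size is None else group_size
    qmax = 2 ** (num_bits - 1) - 1              # e.g., 7 for 4-bit symmetric

    w_groups = w.reshape(rows, cols // gs, gs)
    scales = np.abs(w_groups).max(axis=-1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)  # avoid division by zero

    q = np.clip(np.round(w_groups / scales), -qmax - 1, qmax)
    dequantized = (q * scales).reshape(rows, cols)
    return dequantized, scales

# Compare 4-bit fine-grained (group size 64) vs. coarse-grained (per-row) RTN.
w = np.random.randn(8, 256).astype(np.float32)
w_fine, _ = rtn_quantize(w, num_bits=4, group_size=64)
w_coarse, _ = rtn_quantize(w, num_bits=4, group_size=None)
print("fine-grained error:  ", np.abs(w - w_fine).mean())
print("coarse-grained error:", np.abs(w - w_coarse).mean())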
