Variability-Aware Training and Self-Tuning of Highly Quantized DNNs for Analog PIM

11/11/2021
by Zihao Deng, et al.

DNNs deployed on analog processing-in-memory (PIM) architectures are subject to fabrication-time variability. We developed a new joint variability- and quantization-aware DNN training algorithm for highly quantized analog PIM-based models that is significantly more effective than prior work. It outperforms variability-oblivious and post-training quantized models on multiple computer vision datasets/models. For low-bitwidth models and high variation, the gain in accuracy is up to 35.7%. We demonstrate that, under a realistic pattern of within- and between-chip components of variability, training alone is unable to prevent large DNN accuracy loss (of up to 54%). We introduce a self-tuning DNN architecture that dynamically adjusts layer-wise activations during inference and is effective in reducing the accuracy loss to below 10%.
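As a rough illustration of what joint variability- and quantization-aware training can look like in practice, the PyTorch sketch below fake-quantizes a layer's weights with a straight-through estimator and perturbs them with multiplicative noise in the forward pass, so the network learns weights that tolerate conductance variation. The layer name `NoisyQuantLinear`, the symmetric uniform quantizer, the log-normal noise model, and the hyperparameters (`bits`, `sigma`) are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of joint variability- and quantization-aware training.
# The quantizer, noise model, and hyperparameters are assumptions for
# illustration; the paper's actual algorithm may differ.
import torch
import torch.nn as nn

class NoisyQuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized (straight-through
    estimator) and perturbed with multiplicative noise standing in for
    analog PIM conductance variability during training."""
    def __init__(self, in_features, out_features, bits=2, sigma=0.1):
        super().__init__(in_features, out_features)
        self.bits, self.sigma = bits, sigma

    def forward(self, x):
        # Symmetric uniform fake quantization with a straight-through estimator.
        qmax = 2 ** (self.bits - 1) - 1
        scale = self.weight.abs().max() / qmax
        w_q = torch.clamp(torch.round(self.weight / scale), -qmax, qmax) * scale
        w_q = self.weight + (w_q - self.weight).detach()  # gradient flows to full-precision weights
        # Inject per-weight multiplicative (log-normal) variability during training only.
        if self.training:
            noise = torch.exp(self.sigma * torch.randn_like(w_q))
            w_q = w_q * noise
        return nn.functional.linear(x, w_q, self.bias)

# Usage: drop-in replacement for nn.Linear inside any model.
layer = NoisyQuantLinear(128, 10, bits=2, sigma=0.1)
out = layer(torch.randn(4, 128))
```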
