TENT: Efficient Quantization of Neural Networks on the tiny Edge with Tapered FixEd PoiNT

04/06/2021
by Hamed F. Langroudi, et al.

In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models. We introduce a tapered fixed-point quantization algorithm that matches the numerical format's dynamic range and distribution to those of the deep neural network model's parameters at each layer. An accelerator architecture for tapered fixed point is also proposed within the TENT framework. Results show that accuracy on classification tasks improves by up to ~31% with an energy overhead of ~17-30% as compared to fixed point, for ConvNet and ResNet-18 models.
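As a rough illustration of the per-layer range-matching idea described in the abstract, the Python sketch below picks a per-layer integer/fraction bit split so that the fixed-point grid covers the layer's observed parameter range, then rounds to that grid. The function name, the 8-bit budget, and the range rule are illustrative assumptions; this is a simplified dynamic fixed-point stand-in, not the paper's exact TENT algorithm, whose tapered format additionally varies precision with magnitude.

    import numpy as np

    def quantize_layer(weights, total_bits=8):
        # Illustrative per-layer quantization: pick the integer/fraction
        # bit split from the layer's observed dynamic range, then round
        # to the resulting grid. Simplified stand-in for TENT's
        # per-layer range matching.
        max_abs = float(np.max(np.abs(weights)))
        # Integer bits needed to cover the largest magnitude (sign bit excluded).
        int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
        frac_bits = total_bits - 1 - int_bits  # remaining bits hold the fraction
        scale = 2.0 ** frac_bits
        qmax = 2 ** (total_bits - 1) - 1
        # Round to the grid and clip to the representable two's-complement range.
        q = np.clip(np.round(weights * scale), -qmax - 1, qmax)
        return q / scale

    # Example: 8-bit quantization of roughly Gaussian layer weights.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=(64, 64))
    print("max abs error:", np.max(np.abs(w - quantize_layer(w))))

Because the weights here stay well below 1.0, all seven non-sign bits land in the fraction, which is the sense in which the format adapts to the layer's distribution.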
