Multiplier-less Artificial Neurons Exploiting Error Resiliency for Energy-Efficient Neural Computing
Large-scale artificial neural networks have shown significant promise in addressing a wide range of classification and recognition applications. However, their large computational requirements stretch the capabilities of computing platforms. The fundamental components of these neural networks are the neurons and their synapses. The core of a digital hardware neuron consists of a multiplier, an accumulator, and an activation function. Multipliers consume most of the processing energy in digital neurons, and thereby in hardware implementations of artificial neural networks. We propose an approximate multiplier that utilizes the notion of computation sharing and exploits the error resilience of neural network applications to achieve improved energy consumption. We also propose the Multiplier-less Artificial Neuron (MAN) for an even larger improvement in energy consumption, and adapt the training process to ensure minimal degradation in accuracy. We evaluated the proposed design on 5 recognition applications. The results show 35% and [...]% reduction in energy consumption for neuron sizes of 8 bits and 12 bits, respectively, with a maximum of 2.83% loss in accuracy compared to a conventional neuron implementation. We also achieve 37% and [...]% reduction in area for neuron sizes of 8 bits and 12 bits, respectively, under iso-speed conditions.
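The abstract does not spell out how MAN eliminates the multiplier; one common way to do so, offered here purely as an illustrative assumption, is to quantize each weight to a signed power of two so that every multiply collapses to an arithmetic shift. The C sketch below contrasts a conventional fixed-point multiply-accumulate neuron with such a hypothetical shift-based variant; the names `neuron_mac`, `neuron_shift`, and `pow2_weight`, and the power-of-two constraint itself, are assumptions for this example, not the paper's scheme.

```c
/* Minimal sketch, assuming power-of-two quantized weights; this is NOT
 * the paper's exact multiplier-sharing technique, which the abstract
 * does not detail. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Hard clamp standing in for the neuron's activation-function block. */
static int32_t activation(int64_t acc) {
    if (acc > INT32_MAX) return INT32_MAX;
    if (acc < INT32_MIN) return INT32_MIN;
    return (int32_t)acc;
}

/* Conventional digital neuron: one energy-hungry multiply per input. */
int32_t neuron_mac(const int16_t *x, const int16_t *w, int n) {
    int64_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int64_t)x[i] * w[i];
    return activation(acc);
}

/* Hypothetical multiplier-less neuron: each weight is stored as
 * (sign, shift) with value sign * 2^shift, so the product becomes an
 * arithmetic shift of the input instead of a multiplication. */
typedef struct { int8_t sign; uint8_t shift; } pow2_weight;

int32_t neuron_shift(const int16_t *x, const pow2_weight *w, int n) {
    int64_t acc = 0;
    for (int i = 0; i < n; i++) {
        int64_t term = (int64_t)x[i] << w[i].shift; /* shift replaces multiply */
        acc += (w[i].sign < 0) ? -term : term;
    }
    return activation(acc);
}

int main(void) {
    int16_t x[3] = {100, -50, 25};
    int16_t w[3] = {4, -8, 2};                     /* exact power-of-two weights */
    pow2_weight wp[3] = {{1, 2}, {-1, 3}, {1, 1}}; /* same weights as (sign, shift) */
    printf("MAC:   %" PRId32 "\n", neuron_mac(x, w, 3));   /* 850 */
    printf("Shift: %" PRId32 "\n", neuron_shift(x, wp, 3)); /* 850 */
    return 0;
}
```

With exactly power-of-two weights the two neurons agree bit-for-bit; for arbitrary trained weights the quantization introduces error, which is why, as the abstract notes, the training process must be adapted to keep the accuracy degradation minimal.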