Bridging the Gap Between Neural Networks and Neuromorphic Hardware with A Neural Network Compiler

11/15/2017
by   Yu Ji, et al.

Unlike common neural networks (NNs) trained for inference on general-purpose processors, NNs developed for neuromorphic chips usually face a number of hardware-specific restrictions, including the limited precision of network signals and parameters, a constrained computation scale, and a limited set of supported non-linear functions. This paper proposes a general methodology to address the challenge. It can transform an existing trained, unrestricted NN (usually targeting a software execution substrate) into an equivalent network that meets the given hardware constraints, thereby decoupling NN applications from the target hardware. Formally, the original NN is expressed as a computational graph (CG) that is fine-tuned part by part, according to a topological ordering, to become the target CG. Several techniques, including a multilayer-perceptron (MLP)-based universal approximator, a data re-encoding method, a split-and-merge network reconstruction method, and a multi-phase weight-tuning algorithm, are proposed to overcome the above restrictions respectively. We have built such a software tool that supports both spiking neural networks (SNNs) and traditional artificial neural networks (ANNs). Its effectiveness has been demonstrated with a real neuromorphic chip and a processing-in-memory (PIM) design. Tests show that the extra inference error caused by this solution is very limited, and the transformation time is much less than the retraining time. In addition, a number of parameter-sensitivity evaluations have been completed to explore the tradeoff between network error, resource consumption, and different transformation strategies, which could provide insights for co-design optimization of neuromorphic hardware and software.
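The core transformation idea, rewriting each node of the computational graph only after all of its inputs have been processed, amounts to visiting the CG in topological order. The sketch below illustrates that traversal pattern with Kahn's algorithm; the toy graph, the node names, and the `fine_tune` callback are hypothetical illustrations, not the paper's actual implementation.

```python
from collections import deque

def topo_order(graph):
    """Kahn's algorithm: return the nodes of a DAG in topological order.
    `graph` maps each node to the list of its successor nodes."""
    indeg = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for s in graph[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return order

def transform(graph, fine_tune):
    """Apply a per-node transformation in topological order, so each
    node is adapted only after all of its inputs have been adapted."""
    return [fine_tune(n) for n in topo_order(graph)]

# Hypothetical toy CG: input -> conv -> relu -> fc
cg = {"input": ["conv"], "conv": ["relu"], "relu": ["fc"], "fc": []}
print(transform(cg, lambda n: f"{n}_hw"))
# → ['input_hw', 'conv_hw', 'relu_hw', 'fc_hw']
```

In the paper's setting, `fine_tune` would stand in for the hardware-aware rewriting of each graph part (e.g. re-encoding data or approximating an unsupported non-linearity), with the topological ordering guaranteeing that each part sees its inputs in their already-transformed form.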
