Variation Aware Training of Hybrid Precision Neural Networks with 28nm HKMG FeFET Based Synaptic Core

02/21/2022
by Sunanda Thunder, et al.

This work proposes a hybrid-precision neural network training framework with an eNVM-based computational memory unit executing the weighted-sum operation and an SRAM unit that stores the weight-update error during back-propagation as well as the number of pulses required to update the weights in hardware. The hybrid training algorithm for an MLP-based neural network with 28 nm ferroelectric FETs (FeFETs) as synaptic devices achieves inference accuracy of up to 95%, evaluated using a behavioral or macro-model of the FeFET devices with experimentally calibrated device variations, which is comparable to floating-point implementations.
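To make the hybrid-precision update scheme concrete, the following is a minimal Python sketch of the general idea, not the authors' implementation: a low-precision FeFET conductance is adjusted by an integer number of programming pulses subject to device variation, while a higher-precision residual (conceptually held in SRAM) accumulates the part of the gradient that cannot be applied. All names and parameter values (PULSE_STEP, SIGMA_VAR, the conductance window) are illustrative assumptions.

import numpy as np

PULSE_STEP = 0.01        # assumed conductance change per programming pulse
SIGMA_VAR = 0.2          # assumed relative device/cycle-to-cycle variation
G_MIN, G_MAX = 0.0, 1.0  # assumed normalized conductance window

rng = np.random.default_rng(0)

def hybrid_update(g_fefet, residual_sram, grad, lr=0.1):
    """One variation-aware hybrid-precision weight update (illustrative).

    g_fefet       : FeFET conductances (low-precision analog weights)
    residual_sram : high-precision residual stored off-array (e.g. in SRAM)
    grad          : gradient of the loss w.r.t. the weights
    """
    # Desired high-precision update, including any leftover residual.
    desired = residual_sram - lr * grad

    # Number of programming pulses the hardware can actually apply.
    n_pulses = np.round(desired / PULSE_STEP)

    # Each pulse lands with device variation (assumed Gaussian here).
    applied = n_pulses * PULSE_STEP * (1.0 + SIGMA_VAR * rng.standard_normal(grad.shape))

    g_new = np.clip(g_fefet + applied, G_MIN, G_MAX)

    # Whatever could not be applied stays in the high-precision residual.
    residual_new = desired - (g_new - g_fefet)
    return g_new, residual_new

# Toy usage: one update step for a small weight vector.
g = rng.uniform(0.2, 0.8, size=4)
res = np.zeros_like(g)
grad = rng.standard_normal(4)
g, res = hybrid_update(g, res, grad)
print(g, res)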
