A Robust Backpropagation-Free Framework for Images

06/03/2022
by   Timothy Zee, et al.

While current deep learning algorithms have been successful for a wide variety of artificial intelligence (AI) tasks, including those involving structured image data, they raise deep neurophysiological concerns because they rely on gradients computed by backpropagation of errors (backprop) to obtain synaptic weight adjustments, and are therefore biologically implausible. We present a more biologically plausible approach, the error-kernel driven activation alignment (EKDAA) algorithm, which trains convolutional neural networks (CNNs) using locally derived error transmission kernels and error maps. We demonstrate the efficacy of EKDAA on visual recognition with the Fashion MNIST, CIFAR-10, and SVHN benchmarks, and conduct black-box robustness tests on adversarial examples derived from these datasets. Furthermore, we also present results for a CNN trained with a non-differentiable activation function. All recognition results nearly match those of backprop and exhibit greater adversarial robustness than backprop.
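The abstract only sketches the idea of replacing backprop gradients with locally derived error kernels and error maps, so the snippet below is a minimal, hypothetical illustration of that style of local learning, in the spirit of feedback alignment with a fixed convolutional error-transmission kernel. It is not the authors' exact EKDAA update; the names `E_kernel` and `local_update`, the tanh nonlinearity, and all shapes are assumptions made for the example.

    # Hypothetical sketch of a local, backprop-free update for one conv layer.
    # A fixed random "error kernel" carries the error map upstream instead of
    # the transpose of the learned forward weights (assumption; not the exact
    # EKDAA rule, which is specified in the full paper).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    W = torch.randn(16, 3, 3, 3) * 0.1          # learned forward weights
    E_kernel = torch.randn(16, 3, 3, 3) * 0.1   # fixed error-transmission kernel

    def forward(x):
        return torch.tanh(F.conv2d(x, W, padding=1))

    def local_update(x, y, error_out, lr=1e-2):
        """Update W from a locally available output error map, without
        backpropagating through any downstream layers."""
        # Modulate the error by the local activation derivative (tanh').
        delta = error_out * (1.0 - y ** 2)
        # Weight update: correlate the layer input with the modulated error map.
        grad_W = torch.nn.grad.conv2d_weight(x, W.shape, delta, padding=1)
        W.sub_(lr * grad_W)
        # Error map handed to the upstream layer via the fixed error kernel.
        return F.conv_transpose2d(delta, E_kernel, padding=1)

    x = torch.randn(8, 3, 32, 32)         # batch of images
    y = forward(x)
    err = y - torch.randn_like(y)         # stand-in for a target-derived error map
    upstream_err = local_update(x, y, err)
    print(upstream_err.shape)             # torch.Size([8, 3, 32, 32])

Because the error kernel is fixed and the update uses only quantities available at the layer itself, no global backward pass over the network is required, which is the property the abstract highlights.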
