Surrogate Gradients Design

02/01/2022
by Luca Herranz-Celotti, et al.

Surrogate gradient (SG) training offers a path to quickly transfer the gains made in deep learning to neuromorphic computing and neuromorphic processors, with a consequent reduction in energy consumption. Evidence suggests that training can be robust to the choice of SG shape, provided an extensive hyper-parameter search is performed. However, random or grid search becomes exponentially infeasible as more hyper-parameters are considered, and each point in the search can itself be highly time- and energy-consuming for large networks and datasets. In this article we show, first, that complex tasks and networks are more sensitive to the choice of SG. Second, we show that low dampening, high sharpness, and low tail fatness are preferred. Third, we observe that Glorot Uniform initialization is generally preferred by most SG choices, albeit with variability across results. Finally, we provide a theoretical solution that reduces the need for extensive grid search, identifying SG shapes and initializations that result in improved accuracy.
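For context, the sketch below illustrates the general surrogate-gradient idea the abstract refers to: the spike nonlinearity is a Heaviside step on the forward pass, while the backward pass substitutes a smooth, parameterized pseudo-derivative. The `dampening` (peak height) and `sharpness` (inverse width) parameters mirror the abstract's terminology, but the triangular functional form and all names here are illustrative assumptions, not the paper's exact definitions.

```python
import torch


class SpikeFunction(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient on the backward pass.

    Assumed triangular surrogate: dampening * max(0, 1 - sharpness * |v|),
    where v is the membrane potential relative to threshold.
    """

    dampening = 1.0  # scales the surrogate's peak height
    sharpness = 1.0  # narrows the support around the firing threshold

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        # Non-differentiable spike emission: 1 if above threshold, else 0.
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Replace the step's zero/undefined derivative with the surrogate.
        sg = SpikeFunction.dampening * torch.clamp(
            1.0 - SpikeFunction.sharpness * v.abs(), min=0.0
        )
        return grad_output * sg


spike = SpikeFunction.apply  # use as: spike(membrane_potential - threshold)
```

In a spiking layer, calling `spike(membrane_potential - threshold)` would propagate gradients only through neurons whose potential lies near the threshold; increasing `sharpness` narrows that window, which is one concrete way to read the abstract's finding that high sharpness and low dampening are preferred.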
