A Max-Sum algorithm for training discrete neural networks

05/20/2015
by Carlo Baldassi, et al.

We present an efficient learning algorithm for the problem of training neural networks with discrete synapses, a well-known hard (NP-complete) discrete optimization problem. The algorithm is a variant of the so-called Max-Sum (MS) algorithm. In particular, we show how, for bounded integer weights with q distinct states and an independent concave a priori distribution (e.g. l_1 regularization), the algorithm's time complexity can be made to scale as O(N log N) per node update, thus putting it on par with alternative schemes, such as Belief Propagation (BP), without resorting to approximations. Two special cases are of particular interest: binary synapses W∈{-1,1} and ternary synapses W∈{-1,0,1} with l_0 regularization. The algorithm we present performs as well as BP on binary perceptron learning problems, and may be better suited to address the problem on fully-connected two-layer networks, since inherent symmetries in two-layer networks are naturally broken using the MS approach.
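The abstract contains no code; as a rough illustration of the kind of message-passing update involved, the sketch below (my own, not taken from the paper) computes a single Max-Sum factor-to-variable message for a binary perceptron constraint by brute-force enumeration over the other weights. All names (factor_to_variable, h_in, etc.) are hypothetical. The enumeration is exponential in N and only viable for toy instances; the paper's contribution is precisely an exact O(N log N) scheme for this step, which this sketch does not implement.

```python
# Illustrative Max-Sum message on a binary perceptron factor graph (didactic sketch).
# Variables are weights W_i in {-1, +1}; each training pattern (xi, sigma) adds a
# hard factor enforcing  sigma * sum_i W_i * xi_i > 0.
import itertools
import numpy as np

def factor_to_variable(h_in, xi, sigma, i):
    """Max-Sum message nu_{mu -> i}(W_i): for each value of W_i, the maximum over
    assignments of the other weights of the sum of their incoming messages,
    subject to the pattern being classified correctly.

    h_in  : (N, 2) array, h_in[j, k] = variable-to-factor message for W_j = (-1, +1)[k]
    xi    : (N,) array of +/-1 inputs; sigma: +/-1 label; i: target variable index.
    """
    N = len(xi)
    others = [j for j in range(N) if j != i]
    msg = np.full(2, -np.inf)
    for k, w_i in enumerate((-1, +1)):
        # Brute-force maximization over the 2^(N-1) configurations of the other weights.
        for assignment in itertools.product((-1, +1), repeat=N - 1):
            w = np.empty(N)
            w[others] = assignment
            w[i] = w_i
            if sigma * np.dot(w, xi) > 0:  # constraint satisfied
                score = sum(h_in[j, (assignment[a] + 1) // 2]
                            for a, j in enumerate(others))
                msg[k] = max(msg[k], score)
    return msg - msg.max()  # normalize in the Max-Sum (log-domain) gauge

# Tiny usage example: N = 5 weights, one random pattern, random incoming messages.
rng = np.random.default_rng(0)
N = 5
xi = rng.choice([-1, 1], size=N)
h_in = rng.normal(size=(N, 2))  # stand-in for messages accumulated from other factors
print(factor_to_variable(h_in, xi, sigma=1, i=0))
```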

