Training neural networks with structured noise improves classification and generalization
The beneficial role of noise in learning is by now a well-established concept in the field of artificial neural networks. The training-with-noise algorithm proposed by Gardner and collaborators is an emblematic example of a noise-injection procedure for recurrent networks. We show how adding structure to noisy training data can substantially improve memory performance, allowing the network to approach perfect classification and maximal basins of attraction. We also prove that the so-called unlearning rule coincides with the training-with-noise algorithm when noise is maximal and the data are fixed points of the network dynamics. Moreover, we propose and implement a sampling scheme for optimal noisy data that outperforms both the training-with-noise and the unlearning procedures.
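For concreteness, the following is a minimal sketch of a training-with-noise update in the spirit of Gardner's procedure: noisy versions of the stored patterns are generated at a prescribed overlap, and couplings receive a perceptron-like update on units that misalign with the original pattern. All sizes and parameter values (N, P, m, lam, n_steps) are hypothetical choices for illustration, not values from the paper; in the maximal-noise limit (overlap m → 0), with the noisy configurations taken as fixed points of the dynamics, this update reduces to the unlearning rule discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 10      # network size and number of patterns (illustrative)
m = 0.6             # target overlap of noisy examples with stored patterns
lam = 0.01          # learning rate (hypothetical value)
n_steps = 50_000

xi = rng.choice([-1, 1], size=(P, N))   # random binary patterns
J = np.zeros((N, N))                    # symmetric-free coupling matrix

for _ in range(n_steps):
    mu = rng.integers(P)
    # noisy example: flip each spin independently with probability (1 - m) / 2
    flips = rng.random(N) < (1 - m) / 2
    S = np.where(flips, -xi[mu], xi[mu])
    # local fields produced by the noisy configuration
    h = J @ S
    # update only units whose field misaligns with the clean pattern
    mask = (xi[mu] * h) <= 0
    J += (lam / N) * mask[:, None] * np.outer(xi[mu], S)
    np.fill_diagonal(J, 0)  # no self-couplings
```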