Selective Memory Recursive Least Squares: Uniformly Allocated Approximation Capabilities of RBF Neural Networks in Real-Time Learning
When performing real-time learning tasks, the radial basis function neural network (RBFNN) is expected to make full use of the training samples so that its learning accuracy and generalization capability are guaranteed. Since the approximation capability of the RBFNN is finite, training methods with forgetting mechanisms, such as forgetting factor recursive least squares (FFRLS) and stochastic gradient descent (SGD), are widely used to maintain the RBFNN's ability to learn new knowledge. With these forgetting mechanisms, however, some useful knowledge is lost simply because it was learned a long time ago, which we refer to as the passive knowledge forgetting phenomenon. To address this problem, this paper proposes a real-time training method named selective memory recursive least squares (SMRLS), in which the feature space of the RBFNN is evenly discretized into a finite number of partitions and a synthesized objective function is developed to replace the original objective function of the ordinary recursive least squares (RLS) method. SMRLS features a memorization mechanism that synthesizes the samples within each partition in real time into representative samples uniformly distributed over the feature space, thereby overcoming the passive knowledge forgetting phenomenon and improving the generalization capability of the learned knowledge. Compared with the SGD and FFRLS methods, SMRLS achieves improved learning performance in terms of learning speed, accuracy, and generalization capability, as demonstrated by simulation results.
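The Python sketch below illustrates the memorization idea described above; it is not the paper's exact SMRLS algorithm. The input space is evenly discretized into partitions, each partition keeps one representative sample (here, simply the most recent sample observed in it, which is an assumption), and the RBFNN weights are refit over the stored representatives instead of the raw sample stream. A batch least-squares fit stands in for the recursive update purely for brevity, and all numerical settings (centers, widths, trajectory) are hypothetical.

```python
# Sketch of the selective-memory idea: discretize the input space, keep one
# representative sample per partition, and fit the RBFNN over those
# representatives rather than over the (locally clustered) recent samples.
import numpy as np

# RBF feature map with fixed, evenly spaced Gaussian centers (hypothetical setup).
centers = np.linspace(0.0, 1.0, 15)
width = 0.08

def rbf_features(x):
    """Gaussian RBF activations for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Evenly discretize the input space into a finite number of partitions.
n_partitions = 30
edges = np.linspace(0.0, 1.0, n_partitions + 1)
memory_x = np.full(n_partitions, np.nan)   # representative input per partition
memory_y = np.full(n_partitions, np.nan)   # representative target per partition

def target(x):
    """Unknown function to be learned (used here only to simulate a data stream)."""
    return np.sin(2.0 * np.pi * x) + 0.3 * np.cos(5.0 * np.pi * x)

rng = np.random.default_rng(0)
weights = np.zeros_like(centers)

# Real-time stream: samples arrive one at a time along a slowly moving
# trajectory, so recent samples cluster in a small region of the input space.
for t in range(2000):
    x_t = 0.5 + 0.5 * np.sin(0.01 * t)                 # slowly varying input
    y_t = target(x_t) + 0.02 * rng.standard_normal()   # noisy measurement

    # Memorization step: overwrite the representative sample of the partition
    # that x_t falls into, leaving all other partitions untouched.
    k = min(np.searchsorted(edges, x_t, side="right") - 1, n_partitions - 1)
    memory_x[k], memory_y[k] = x_t, y_t

    # Synthesized objective: least-squares fit over the representative samples,
    # which remain spread over the whole visited input region.
    filled = ~np.isnan(memory_x)
    Phi = np.vstack([rbf_features(x) for x in memory_x[filled]])
    weights, *_ = np.linalg.lstsq(Phi, memory_y[filled], rcond=None)

# Evaluate generalization over the whole input range, not just recent inputs.
x_test = np.linspace(0.0, 1.0, 200)
y_hat = np.array([rbf_features(x) @ weights for x in x_test])
print("max abs error on [0, 1]:", np.max(np.abs(y_hat - target(x_test))))
```

Because the stored representatives stay spread over the visited region, knowledge about earlier parts of the trajectory is not overwritten merely for being old, which is the contrast the abstract draws with FFRLS- and SGD-style forgetting.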