Hybrid Optimized Backpropagation Learning Algorithm for Multi-Layer Perceptron

12/08/2012
by Mriganka Chakraborty, et al.

Standard neural network training based on general backpropagation learning, using the delta rule or gradient descent, has serious drawbacks: poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised backpropagation learning algorithm that applies the trust-region method of unconstrained optimization to the error objective function via a quasi-Newton method. This optimization yields a more accurate weight-update scheme for minimizing the learning error during the learning phase of a multi-layer perceptron. [13][14][15] An augmented line search is used to find points that satisfy the Wolfe conditions. The resulting hybrid backpropagation algorithm has strong global convergence properties and is robust and efficient in practice.
