Data-Driven Randomized Learning of Feedforward Neural Networks

08/11/2019
by   Grzegorz Dudek, et al.

Randomized methods of neural network learning suffer from a problem with the generation of random parameters: they are difficult to set optimally so that a good projection space is obtained. The standard method draws the parameters from a fixed interval that is independent of both the data scope and the activation function type, which does not yield good results when approximating strongly nonlinear functions. In this work, a method is proposed which adjusts the random parameters, representing the slopes and positions of the sigmoids, to the features of the target function. The method randomly selects regions of the input space, places the sigmoids in these regions, and then adjusts the sigmoid slopes to the local fluctuations of the target function. Compared to the standard fixed-interval method and other methods recently proposed in the literature, this yields very good results when approximating complex target functions.
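The following is a minimal illustrative sketch of this idea for single-output regression, based only on the abstract above: hidden sigmoids are anchored at randomly chosen training points (random input-space regions), their slopes are set from a nearest-neighbour estimate of the local target fluctuation, and the output weights are then solved by least squares as in standard randomized (ELM/RVFL-style) learning. The function names, the nearest-neighbour slope estimate, and the factor of 4 (the derivative of a sigmoid at its inflection point is a/4) are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_data_driven_random_net(X, y, n_hidden, rng=None):
    """Sketch: data-driven choice of hidden sigmoid parameters, then
    least-squares output weights. Details are illustrative assumptions."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    idx = rng.choice(n, size=n_hidden, replace=False)   # random input-space regions
    centers = X[idx]

    A = np.empty((n_hidden, d))
    B = np.empty(n_hidden)
    for j, i in enumerate(idx):
        # Estimate the local fluctuation of the target around the chosen
        # point from its nearest neighbour (assumed proxy for local slope).
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf
        k = int(np.argmin(dists))
        local_slope = (y[k] - y[i]) / (dists[k] + 1e-12)
        direction = (X[k] - X[i]) / (dists[k] + 1e-12)
        A[j] = 4.0 * local_slope * direction             # steeper sigmoid where y fluctuates more
        B[j] = -A[j] @ centers[j]                        # inflection point at the chosen training point

    H = sigmoid(X @ A.T + B)                             # hidden-layer projection
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)         # output weights by least squares
    return A, B, beta

def predict(X, A, B, beta):
    return sigmoid(X @ A.T + B) @ beta
```

Contrast with the standard approach: there, `A` and `B` would be drawn uniformly from a fixed interval (e.g. [-1, 1]) regardless of the data, which is exactly what the proposed data-driven placement is meant to avoid.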
