A comparison of LSTM and GRU networks for learning symbolic sequences

07/05/2021
by Roberto Cahuantzi, et al.

We explore the relationship between the hyper-parameters of a recurrent neural network (RNN) and the complexity of the string sequences it is able to memorize. We compare long short-term memory (LSTM) networks and gated recurrent units (GRUs). We find that increasing RNN depth does not necessarily yield better memorization capability when training time is constrained. Our results also indicate that the learning rate and the number of units per layer are among the most important hyper-parameters to tune. In general, GRUs outperform LSTM networks on low-complexity sequences, while LSTMs perform better on high-complexity sequences.
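To make the comparison concrete, here is a minimal sketch, assuming PyTorch (the abstract does not name a framework), of how the two cell types and the hyper-parameters highlighted above (units per layer, depth, and learning rate) might be wired into a next-symbol memorization model. The class name `SeqMemorizer` and all parameter values are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class SeqMemorizer(nn.Module):
    """Next-symbol predictor over a small alphabet, with a switchable
    recurrent core (GRU or LSTM). Names and defaults are illustrative."""

    def __init__(self, cell="gru", vocab_size=4, hidden_size=64, num_layers=1):
        super().__init__()
        rnn_cls = nn.GRU if cell == "gru" else nn.LSTM
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # hidden_size = units per layer; num_layers = RNN depth
        self.rnn = rnn_cls(hidden_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x):
        # nn.GRU returns (output, h_n); nn.LSTM returns (output, (h_n, c_n)).
        # Only the per-step outputs are needed, so the state is discarded.
        out, _ = self.rnn(self.embed(x))
        return self.head(out)  # logits for the next symbol at every position

# Example: train both variants on the same toy symbolic sequence and compare.
for cell in ("gru", "lstm"):
    model = SeqMemorizer(cell=cell, hidden_size=64, num_layers=1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate
    loss_fn = nn.CrossEntropyLoss()

    # A repeating sequence over a 4-symbol alphabet, e.g. "abcd abcd ...".
    seq = torch.arange(4).repeat(8).unsqueeze(0)   # shape (1, 32)
    inputs, targets = seq[:, :-1], seq[:, 1:]      # predict the next symbol

    for step in range(200):
        logits = model(inputs)
        loss = loss_fn(logits.reshape(-1, 4), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"{cell}: final loss {loss.item():.4f}")
```

The single switch between `nn.GRU` and `nn.LSTM` works because both modules share the same constructor signature in PyTorch; only the shape of the returned hidden state differs, which the forward pass ignores.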
