Low-Dimensional Manifolds Support Multiplexed Integrations in Recurrent Neural Networks

11/20/2020
by Arnaud Fanthomme, et al.

We study the learning dynamics and the representations emerging in Recurrent Neural Networks trained to integrate one or multiple temporal signals. Combining analytical and numerical investigations, we characterize the conditions under which an RNN with n neurons learns to integrate D (≪ n) scalar signals of arbitrary duration. We show, both for linear and ReLU neurons, that its internal state lives close to a D-dimensional manifold, whose shape is related to the activation function. Each neuron therefore carries, to various degrees, information about the value of all integrals. We discuss the deep analogy between our results and the concept of mixed selectivity forged by computational neuroscientists to interpret cortical recordings.
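To make the setting concrete, here is a minimal sketch (not the paper's trained networks, and all names are illustrative assumptions) of a hand-built linear RNN whose n-neuron state integrates D = 2 scalar input streams at once; the state stays on a D-dimensional linear subspace, so every neuron mixes information about all D integrals:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, T = 50, 2, 200   # neurons, signals to integrate, time steps

# Random orthonormal D-dimensional subspace of the n-dimensional state space.
Q, _ = np.linalg.qr(rng.standard_normal((n, D)))   # n x D, with Q.T @ Q = I_D

W = Q @ Q.T          # recurrence: identity on the subspace, zero off it
B = Q                # input weights: each signal drives one subspace direction
readout = Q.T        # linear readout recovering the D integrals

u = rng.standard_normal((T, D))   # D temporal input signals
h = np.zeros(n)
for t in range(T):
    h = W @ h + B @ u[t]          # linear recurrent update

estimates = readout @ h           # D estimated integrals at time T
targets = u.sum(axis=0)           # true integrals (discrete cumulative sums)
print(np.allclose(estimates, targets))   # True
```

Because W acts as the identity on the column space of Q, the state h never leaves that D-dimensional subspace, which is the linear analogue of the low-dimensional manifolds described above.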
