Injectivity of ReLU networks: perspectives from statistical physics

02/27/2023
by Antoine Maillard, et al.

When can the input of a ReLU neural network be inferred from its output? In other words, when is the network injective? We consider a single layer, x ↦ ReLU(Wx), with a random Gaussian m × n matrix W, in a high-dimensional setting where n, m → ∞. Recent work connects this problem to spherical integral geometry: by studying the expected Euler characteristic of a certain random set, it gives rise to a conjectured sharp injectivity threshold for the aspect ratio α = m/n. We adopt a different perspective and show that injectivity is equivalent to a property of the ground state of the spherical perceptron, an important spin-glass model in statistical physics. By leveraging the (non-rigorous) replica symmetry-breaking theory, we derive analytical equations for the threshold whose solution is at odds with that from the Euler characteristic. Furthermore, we use Gordon's min–max theorem to prove that a replica-symmetric upper bound refutes the Euler characteristic prediction. Along the way we aim to give a tutorial-style introduction to key ideas from statistical physics, in an effort to make the exposition accessible to a broad audience. Our analysis establishes a connection between spin glasses and integral geometry, but leaves open the problem of explaining the discrepancies.
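To make the injectivity question concrete, here is a minimal numerical sketch (illustrative only; the function names, parameter choices, and the rank-based test are ours, not the paper's method). If at some generic input x the active rows of W, i.e. those with (Wx)_i > 0, fail to span ℝ^n, then x can be perturbed along the kernel of the active submatrix without changing ReLU(Wx), so full column rank of the active submatrix at every x is a necessary condition for injectivity. The probe below samples a Gaussian W at a given aspect ratio α = m/n and tests this condition on random directions.

```python
import numpy as np

def active_submatrix_full_rank(W, x):
    """Check whether the rows of W active at x (those with (Wx)_i > 0)
    have full column rank. If they do not, x can be perturbed inside the
    kernel of the active submatrix without changing ReLU(Wx), witnessing
    non-injectivity of the layer."""
    active = W @ x > 0
    W_active = W[active]
    return np.linalg.matrix_rank(W_active) == W.shape[1]

def monte_carlo_injectivity_probe(alpha, n=100, trials=1000, seed=0):
    """Sample a Gaussian W with m = alpha * n rows, probe random unit
    directions x, and return the fraction of probes whose active
    submatrix has full rank (necessary, not sufficient, for injectivity)."""
    rng = np.random.default_rng(seed)
    m = int(alpha * n)
    W = rng.standard_normal((m, n))
    passes = 0
    for _ in range(trials):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)
        passes += active_submatrix_full_rank(W, x)
    return passes / trials

if __name__ == "__main__":
    # Sweep a few aspect ratios alpha = m/n (values chosen for illustration).
    for alpha in (2.0, 5.0, 10.0):
        print(f"alpha = {alpha}: pass rate = {monte_carlo_injectivity_probe(alpha):.3f}")
```

Since the probe samples only finitely many directions at finite n, a passing run is evidence rather than a certificate; the thresholds discussed in the paper concern the proportional limit n, m → ∞.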
