Deep Invertible Approximation of Topologically Rich Maps between Manifolds

10/02/2022
by Michael Puthawala, et al.

How can we design neural networks that allow for stable universal approximation of maps between topologically interesting manifolds? One answer: with a coordinate projection. Neural networks based on topological data analysis (TDA) use tools such as persistent homology to learn topological signatures of data and stabilize training, but they may not be universal approximators or have stable inverses. Other architectures universally approximate data distributions on submanifolds, but only when the latter are given by a single chart, making them unable to learn maps that change topology. By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, and by using universal approximation arguments from machine learning, we find that a novel network of the form 𝒯∘ p ∘ℰ, where ℰ is an injective network, p a fixed coordinate projection, and 𝒯 a bijective network, is a universal approximator of local diffeomorphisms between compact smooth submanifolds embedded in ℝ^n. We emphasize the case when the target map changes topology. Further, we find that by constraining the projection p, multivalued inversions of our networks can be computed without sacrificing universality. As an application, we show that learning a group invariant function with unknown group action naturally reduces, for finite groups, to the question of learning local diffeomorphisms. Our theory permits us to recover orbits of the group action. We also outline possible extensions of our architecture to address molecular imaging of molecules with symmetries. Finally, our analysis informs the choice of topologically expressive starting spaces in generative problems.
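To make the architecture concrete, the composition 𝒯∘ p ∘ℰ can be sketched with simple stand-ins: an injective map ℰ (here a tall, full-column-rank linear map rather than a trained network), a fixed coordinate projection p that keeps the first k coordinates, and a bijective map 𝒯 (here an invertible square matrix). All dimensions and maps below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: data in R^3, injective lift into R^5, output in R^2.
n, m, k = 3, 5, 2

# E: stands in for the injective network. A tall random matrix has full
# column rank almost surely, so the linear map x -> W_E @ x is injective.
W_E = rng.standard_normal((m, n))

def E(x):
    return W_E @ x

# p: the fixed coordinate projection R^m -> R^k (keep the first k coordinates).
def p(z):
    return z[:k]

# T: stands in for the bijective network, sketched as an invertible
# square linear map (random square matrices are invertible almost surely).
W_T = rng.standard_normal((k, k))

def T(y):
    return W_T @ y

def model(x):
    """The composition T ∘ p ∘ E from the abstract."""
    return T(p(E(x)))

x = rng.standard_normal(n)
y = model(x)
print(y.shape)  # (2,)
```

Note that p is the only non-injective piece: once a point is known to lie in a given sheet of the covering, p can be undone by re-attaching the dropped coordinates, which is the intuition behind the multivalued inversion discussed in the abstract.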
