Universal Approximation Under Constraints is Possible with Transformers

10/07/2021
by Anastasis Kratsios, et al.

Many practical problems need the output of a machine learning model to satisfy a set of constraints, K. Nevertheless, there is no known guarantee that classical neural network architectures can exactly encode constraints while simultaneously achieving universality. We provide a quantitative constrained universal approximation theorem which guarantees that for any non-convex compact set K and any continuous function f:ℝ^n→K, there is a probabilistic transformer F̂ whose randomized outputs all lie in K and whose expected output uniformly approximates f. Our second main result is a "deep neural version" of Berge's Maximum Theorem (1963). The result guarantees that given an objective function L, a constraint set K, and a family of soft constraint sets, there is a probabilistic transformer F̂ that approximately minimizes L and whose outputs belong to K; moreover, F̂ approximately satisfies the soft constraints. Our results imply the first universal approximation theorem for classical transformers with exact convex constraint satisfaction. They also yield a chart-free universal approximation theorem for Riemannian manifold-valued functions subject to suitable geodesically convex constraints.
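The sketch below is a minimal toy illustration of the statement in the first result, not the paper's probabilistic-transformer construction: the constraint set K, the target f, and the randomized_model helper with its noise level are all illustrative choices. It shows a randomized predictor whose every sample lies exactly in a compact, non-convex set K (the unit circle), while the averaged output approximates f pointwise.

```python
import numpy as np

# Illustrative target: f maps R into K, where K is the unit circle in R^2
# (a compact, non-convex set).
def f(x):
    return np.array([np.cos(x), np.sin(x)])

# Hypothetical randomized predictor: each sample is a point exactly on K
# (the hard constraint holds with probability one), obtained here by
# jittering the angle rather than by the paper's construction.
def randomized_model(x, rng, noise=0.05):
    theta = x + noise * rng.standard_normal()
    return np.array([np.cos(theta), np.sin(theta)])

rng = np.random.default_rng(0)
x = 1.0
samples = np.stack([randomized_model(x, rng) for _ in range(10_000)])

# Every randomized output lies on K ...
assert np.allclose(np.linalg.norm(samples, axis=1), 1.0)

# ... while the expected (here: Monte Carlo averaged) output approximates f(x);
# the gap shrinks as the noise level decreases.
print(np.abs(samples.mean(axis=0) - f(x)).max())
```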
