Differentiable Implicit Layers

10/14/2020
by Andreas Look, et al.

In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them well suited as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
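
To make the idea concrete, here is a minimal sketch of such a layer, assuming JAX and an illustrative residual F: the layer output z*(x, theta) is defined only implicitly as a root of F(z, x, theta) = 0, and the backward pass applies the implicit function theorem rather than differentiating through the solver iterations. The residual, the fixed-point solver, and all names below are hypothetical stand-ins, not the scheme from the paper.

import jax
import jax.numpy as jnp

# Hypothetical residual: the layer output z solves F(z, x, theta) = 0.
# Here F(z, x, theta) = z - tanh(theta @ z + x), i.e. a fixed-point layer.
def F(z, x, theta):
    return z - jnp.tanh(theta @ z + x)

@jax.custom_vjp
def implicit_layer(x, theta):
    # Forward pass: solve F = 0 by plain fixed-point iteration.
    # (Any root finder works; the backward pass does not depend on it.)
    z = jnp.zeros_like(x)
    for _ in range(100):
        z = jnp.tanh(theta @ z + x)
    return z

def fwd(x, theta):
    z = implicit_layer(x, theta)
    return z, (z, x, theta)

def bwd(res, g):
    z, x, theta = res
    # Implicit function theorem: since F(z*(x, theta), x, theta) = 0,
    # dz*/d(.) = -(dF/dz)^{-1} dF/d(.). For the vector-Jacobian product,
    # first solve (dF/dz)^T u = g (dense solve here for clarity) ...
    J = jax.jacobian(lambda z_: F(z_, x, theta))(z)
    u = jnp.linalg.solve(J.T, g)
    # ... then push -u through the partials of F w.r.t. x and theta.
    _, vjp_xt = jax.vjp(lambda x_, t_: F(z, x_, t_), x, theta)
    return vjp_xt(-u)

implicit_layer.defvjp(fwd, bwd)

# Usage: gradients flow through the root without unrolling the solver.
x = jnp.array([0.1, -0.2])
theta = jnp.array([[0.3, 0.1], [0.0, 0.2]])
gx, gtheta = jax.grad(lambda x_, t_: implicit_layer(x_, t_).sum(),
                      argnums=(0, 1))(x, theta)

The implicit Euler application fits the same template: a single step z_{n+1} = z_n + h * f(z_{n+1}; theta) of a neural ODE is exactly a root of F(z, z_n, theta) = z - z_n - h * f(z; theta), so the same custom backward rule applies with z_n playing the role of the input.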
