Distribution-Agnostic Model-Agnostic Meta-Learning

02/12/2020
by Liam Collins, et al.

The Model-Agnostic Meta-Learning (MAML) algorithm (Finn et al., 2017) has been celebrated for its efficiency and generality, as it has demonstrated success in quickly learning the parameters of an arbitrary learning model. However, MAML implicitly assumes that the tasks come from a particular distribution, and it optimizes the expected (or sample-average) loss over tasks drawn from this distribution. Here, we address this limitation of MAML by reformulating its objective as a min-max problem, where the maximization is over the set of possible distributions over tasks. Our proposed algorithm is the first distribution-agnostic and model-agnostic meta-learning method, and we show that it converges to an ϵ-accurate point at the rate of O(1/ϵ^2) in the convex setting and to an (ϵ, δ)-stationary point at the rate of O(max{1/ϵ^5, 1/δ^5}) in nonconvex settings. We also provide numerical experiments that demonstrate the worst-case superiority of our algorithm over MAML.
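To make the reformulation concrete, the following is a minimal sketch of the two objectives; the notation (w for the meta-parameters, α for the inner-loop step size, f_i for the loss of task i, and Δ^K for the probability simplex over K tasks) is assumed for illustration and is not taken from the paper. Standard MAML minimizes the average post-adaptation loss:

    min_w (1/K) ∑_{i=1}^K f_i(w − α ∇f_i(w))

The distribution-agnostic formulation instead hedges against the worst-case distribution over tasks by maximizing over the weights p:

    min_w max_{p ∈ Δ^K} ∑_{i=1}^K p_i f_i(w − α ∇f_i(w))

Since the uniform weighting p_i = 1/K is feasible in the inner maximization, the min-max objective upper-bounds the standard MAML objective; the maximizing p concentrates weight on the hardest tasks, which is what yields the worst-case behavior described above.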
