Nonparametric Bayesian Structure Adaptation for Continual Learning

12/08/2019
by Abhishek Kumar, et al.

Continual learning is a learning paradigm in which machine learning models are trained on sequential or streaming tasks. Two notable directions among recent advances in continual learning with neural networks are (i) variational-Bayes-based regularization, which learns priors from previous tasks, and (ii) learning the structure of deep networks to adapt to new tasks. So far, these two approaches have been orthogonal. We present a principled nonparametric Bayesian approach for learning the structure of feed-forward neural networks that addresses shortcomings of both. In our model, the number of nodes in each hidden layer can grow automatically as new tasks are introduced, and inter-task transfer occurs through the overlap of the sparse subsets of weights learned by different tasks. On benchmark datasets, our model performs comparably to or better than state-of-the-art approaches, while also adaptively inferring the evolving network structure in the continual learning setting.
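To make the two mechanisms in the abstract concrete, below is a minimal numpy sketch (not the authors' implementation): a hidden layer whose width grows when a new task arrives, and a per-task sparse binary mask over shared weights so that tasks transfer knowledge by reusing overlapping weight subsets. The class and method names (GrowingMaskedLayer, grow, add_task_mask) and the fixed growth and sparsity constants are illustrative assumptions; the paper instead infers these quantities with a nonparametric Bayesian prior.

```python
# Minimal sketch, assuming:
#  - a single hidden layer whose width grows by a fixed amount per task
#  - a per-task sparse binary mask over the shared weight matrix
# The paper infers the growth and the masks with a nonparametric Bayesian
# prior; names and constants here are illustrative only.
import numpy as np


class GrowingMaskedLayer:
    def __init__(self, in_dim, init_width, seed=0):
        self.rng = np.random.default_rng(seed)
        # Shared weights, reused (in part) by every task.
        self.W = 0.01 * self.rng.standard_normal((in_dim, init_width))
        # task_id -> binary mask of the same shape as W.
        self.masks = {}

    def grow(self, extra_units):
        """Append new hidden units (columns) when a new task needs capacity."""
        new_cols = 0.01 * self.rng.standard_normal((self.W.shape[0], extra_units))
        self.W = np.concatenate([self.W, new_cols], axis=1)
        # Old tasks never use the new units: pad their masks with zeros.
        pad = np.zeros((self.W.shape[0], extra_units))
        for t in self.masks:
            self.masks[t] = np.concatenate([self.masks[t], pad], axis=1)

    def add_task_mask(self, task_id, keep_prob=0.3):
        """Sample a sparse binary subset of the shared weights for this task."""
        self.masks[task_id] = (self.rng.random(self.W.shape) < keep_prob).astype(self.W.dtype)

    def forward(self, x, task_id):
        """Forward pass using only the weights selected by the task's mask."""
        return np.maximum(0.0, x @ (self.W * self.masks[task_id]))


# Usage: the hidden width grows from 64 to 64 + 3 * 16 = 112 over four tasks,
# and any two tasks share whichever weights both of their masks keep.
layer = GrowingMaskedLayer(in_dim=784, init_width=64)
layer.add_task_mask(task_id=0)
for t in range(1, 4):
    layer.grow(extra_units=16)
    layer.add_task_mask(task_id=t)
h = layer.forward(np.zeros((1, 784)), task_id=2)  # shape (1, 112)
```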
