Learning with Long-term Remembering: Following the Lead of Mixed Stochastic Gradient

09/25/2019
by Yunhui Guo, et al.

Current deep neural networks can achieve remarkable performance on a single task. However, when a deep neural network is trained continually on a sequence of tasks, it tends to gradually forget previously learned knowledge. This phenomenon is referred to as catastrophic forgetting, and it motivates the field of lifelong learning. The central question in lifelong learning is how to enable deep neural networks to maintain performance on old tasks while learning a new task. In this paper, we introduce a novel and effective lifelong learning algorithm, called MixEd stochastic GrAdient (MEGA), which allows deep neural networks to acquire the ability to retain performance on old tasks while learning new tasks. MEGA modulates the balance between old tasks and the new task by integrating the current gradient with the gradient computed on a small reference episodic memory. Extensive experimental results show that the proposed MEGA algorithm significantly advances the state of the art on all four commonly used lifelong learning benchmarks, reducing the error by up to 18%.
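To make the gradient-mixing idea concrete, here is a minimal PyTorch sketch of one training step that combines the current-batch gradient with a gradient computed on a small episodic memory. The loss-ratio weighting (`alpha_cur`, `alpha_ref`) and the helper names are illustrative assumptions for this sketch, not necessarily the exact update rule derived in the paper.

```python
import torch


def flat_grad(loss, params):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


def mega_style_step(model, loss_fn, cur_batch, mem_batch, lr=0.1):
    """One SGD step following a mix of the current and memory gradients."""
    params = [p for p in model.parameters() if p.requires_grad]
    x_cur, y_cur = cur_batch   # batch from the new task
    x_mem, y_mem = mem_batch   # batch sampled from the episodic memory

    loss_cur = loss_fn(model(x_cur), y_cur)
    loss_mem = loss_fn(model(x_mem), y_mem)

    g_cur = flat_grad(loss_cur, params)
    g_mem = flat_grad(loss_mem, params)

    # Illustrative balance (assumption): weight the memory gradient more
    # heavily when the memory loss is large relative to the current loss.
    alpha_cur = 1.0
    alpha_ref = (loss_mem / loss_cur.clamp_min(1e-8)).item()
    g_mix = alpha_cur * g_cur + alpha_ref * g_mem

    # Apply the mixed gradient as a plain SGD update.
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p -= lr * g_mix[offset:offset + n].view_as(p)
            offset += n
```

The key design point this sketch tries to capture is that neither gradient is followed alone: the episodic-memory gradient acts as a regularizing signal that pulls the update back toward solutions that still fit old tasks, with its influence growing as performance on the memory degrades.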
