Adaptive Progressive Continual Learning.

2021 
The continual learning paradigm learns from a continuous stream of tasks in an incremental manner and aims to overcome the notorious issue of catastrophic forgetting. In this work, we propose a new adaptive progressive network framework comprising two models for continual learning: Reinforced Continual Learning (RCL) and Bayesian Optimized Continual Learning with Attention mechanism (BOCL). The core idea of this framework is to dynamically and adaptively expand the neural network structure upon the arrival of new tasks; RCL and BOCL achieve this via reinforcement learning and Bayesian optimization, respectively. A key advantage of the proposed framework is that, by adaptively controlling the architecture, it does not forget knowledge that has already been learned. In both methods we propose effective ways of employing previously learned knowledge to control the size of the network: RCL employs previous knowledge directly, while BOCL selectively utilizes previous knowledge (e.g., feature maps of previous tasks) via an attention mechanism. Experiments on variants of MNIST, CIFAR-100, and a sequence of 5 datasets demonstrate that our methods outperform the state of the art in preventing catastrophic forgetting and fit new tasks better under the same or fewer computing resources.
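The following is a minimal sketch, not the authors' code, of the adaptive expansion idea described above: when a new task arrives, a controller decides how many new units to add per layer, and all previously learned weights are frozen so earlier tasks cannot be forgotten. In RCL the controller is a reinforcement-learning policy and in BOCL it is Bayesian optimization over validation performance versus added capacity; here `controller_decide_width` is a hypothetical stub returning a fixed width for illustration.

```python
import torch
import torch.nn as nn

class ExpandableLayer(nn.Module):
    """A linear layer whose output width grows per task; old weights are frozen."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.blocks = nn.ModuleList([nn.Linear(in_features, out_features)])

    def expand(self, new_units):
        # Freeze everything learned so far, then append fresh trainable capacity.
        for block in self.blocks:
            for p in block.parameters():
                p.requires_grad_(False)
        self.blocks.append(nn.Linear(self.in_features, new_units))

    def forward(self, x):
        # Concatenate frozen (old-task) and trainable (new-task) feature maps.
        return torch.cat([block(x) for block in self.blocks], dim=-1)

def controller_decide_width(task_id):
    # Placeholder for the RCL/BOCL controller (an RL policy or Bayesian
    # optimization would choose this width adaptively per task).
    return 16  # hypothetical fixed width for the sketch

layer = ExpandableLayer(in_features=784, out_features=64)
for task_id in range(1, 4):  # tasks arrive sequentially
    layer.expand(controller_decide_width(task_id))
    # ... train only the newly added block on the current task ...
```

Because old blocks are frozen rather than overwritten, outputs for earlier tasks are preserved exactly; the controller's job is to keep the added width small enough that the network does not grow unnecessarily.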