Incremental learning model inspired by Rehearsal for deep convolutional networks

2020 
Abstract In Deep Learning, training a model with a sufficient quantity and quality of data is crucial to achieving good performance. In some tasks, however, the necessary data is not all available at a given moment and only arrives over time; in such cases, incremental learning is used to train the model correctly. An open problem remains in the form of the stability–plasticity dilemma: how to incrementally train a model so that it responds well to new data (plasticity) while also retaining previous knowledge (stability). In this paper, an incremental learning framework inspired by Rehearsal (recall of past memories based on a subset of data) named CRIF is proposed, and two instances of the framework are evaluated: one using random selection of representative samples (Naive Incremental Learning, NIL), the other using the Crowding Distance and Best vs. Second Best metrics jointly for this task (RILBC). Experiments were performed on five datasets (MNIST, Fashion-MNIST, CIFAR-10, Caltech 101, and Tiny ImageNet) in two incremental scenarios: a strictly class-incremental scenario and a pseudo class-incremental scenario with unbalanced data. For Caltech 101, Transfer Learning was used, and in this scenario, as in the other three datasets, the proposed method NIL achieved better results on most quality metrics than the comparison algorithms RMSProp Inc (baseline) and iCaRL (a state-of-the-art proposal), and also outperformed the other proposed method, RILBC. NIL additionally requires less time to achieve these results.
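To make the Rehearsal idea concrete, the sketch below illustrates a fixed-size exemplar memory with random sample selection, in the spirit of the NIL instance described in the abstract. It is a minimal illustration, not the authors' implementation: the names RehearsalBuffer and train_incrementally, the per-class capacity, and the model_fit callback are all assumptions introduced here for clarity.

```python
import random


class RehearsalBuffer:
    """Hypothetical fixed-size exemplar memory for rehearsal-based
    incremental learning with random selection (NIL-style sketch)."""

    def __init__(self, capacity_per_class):
        self.capacity = capacity_per_class
        self.exemplars = {}  # class label -> list of stored samples

    def update(self, samples, label):
        # Randomly keep at most `capacity_per_class` samples of the new class.
        if len(samples) > self.capacity:
            samples = random.sample(samples, self.capacity)
        self.exemplars[label] = list(samples)

    def replay_set(self):
        # Flatten stored exemplars into (sample, label) pairs for rehearsal.
        return [(x, y) for y, xs in self.exemplars.items() for x in xs]


def train_incrementally(model_fit, tasks, capacity_per_class=20):
    """Train over a sequence of class-incremental tasks, mixing each new
    class's data with rehearsed exemplars of previously seen classes.

    `model_fit(batch)` stands in for one training pass of the network;
    `tasks` is an iterable of (samples, label) pairs, one new class each."""
    buffer = RehearsalBuffer(capacity_per_class)
    for samples, label in tasks:
        batch = [(x, label) for x in samples] + buffer.replay_set()
        random.shuffle(batch)
        model_fit(batch)               # learn the new class (plasticity)
        buffer.update(samples, label)  # remember a subset of it (stability)
```

The RILBC instance would differ only in the update step, choosing which samples to keep via Crowding Distance and Best vs. Second Best scores instead of random sampling.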