Adaptively Transferring Deep Neural Networks with a Hybrid Evolution Strategy

2020 
Recent years have witnessed the success of deep learning in many fields. Deep neural networks are commonly trained with gradient-based methods, which can be ineffective when the optimization landscape contains many local optima. In this study, we propose a novel optimization approach that combines neuroevolution with a gradient-based method, possessing the advantages of both global search and fast convergence. The main challenge is the high cost of network training, especially as the network structure becomes deeper. This motivates us to utilize the concept of transfer learning, which borrows knowledge from a source domain to enhance the learning ability in a target domain. Unfortunately, designing transfer learning strategies for specific scenarios usually requires external expert knowledge. We therefore propose an adaptive transfer system (ATS) based on dataset similarity, which adaptively adjusts the transferring and retraining modules according to the similarity of the source and target tasks. Empirical studies on image classification problems demonstrate the effectiveness of the proposed algorithm. To our knowledge, this is the first attempt to show that neuroevolution can be successfully applied to deep transfer learning.
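The abstract does not give implementation details, but the two ideas it names can be illustrated with a minimal sketch: an evolution strategy that perturbs network weights globally while a few SGD steps refine each offspring locally, plus a similarity-based rule that decides how much of a source-trained network to reuse. All names, hyperparameters, the similarity measure, and the freezing rule below are illustrative assumptions, not the authors' actual ATS.

```python
# Minimal sketch (not the authors' code): hybrid evolution strategy + SGD
# refinement, with a toy similarity-based transfer decision.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

def sgd_refine(net, x, y, steps=5, lr=1e-2):
    """Local gradient-based refinement of one offspring."""
    opt = torch.optim.SGD([p for p in net.parameters() if p.requires_grad], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(net(x), y).backward()
        opt.step()
    return net

def mutate(net, sigma=0.02):
    """ES-style global perturbation of the trainable weights."""
    child = copy.deepcopy(net)
    with torch.no_grad():
        for p in child.parameters():
            if p.requires_grad:
                p.add_(sigma * torch.randn_like(p))
    return child

def dataset_similarity(src_x, tgt_x):
    """Toy similarity score: cosine similarity of mean feature vectors."""
    return F.cosine_similarity(src_x.mean(0, keepdim=True),
                               tgt_x.mean(0, keepdim=True)).item()

# Synthetic source/target data stand in for real image features here.
src_x, src_y = torch.randn(256, 20), torch.randint(0, 2, (256,))
tgt_x, tgt_y = torch.randn(256, 20), torch.randint(0, 2, (256,))

# Adaptive transfer (assumed rule): high similarity -> reuse (freeze) early
# layers of the source-trained net; low similarity -> retrain everything.
parent = make_net()  # pretend this was pre-trained on the source task
if dataset_similarity(src_x, tgt_x) > 0.5:
    for p in parent[0].parameters():   # freeze the first layer
        p.requires_grad_(False)

# Hybrid loop on the target task: ES mutation for global search,
# SGD steps for fast local convergence, elitist selection by fitness.
for gen in range(10):
    offspring = [sgd_refine(mutate(parent), tgt_x, tgt_y) for _ in range(8)]
    fitness = [-F.cross_entropy(net(tgt_x), tgt_y).item() for net in offspring]
    parent = offspring[max(range(len(offspring)), key=fitness.__getitem__)]
```

In this reading, the evolutionary step supplies the global exploration the abstract attributes to neuroevolution, while the inner SGD refinement supplies the fast convergence of gradient-based training; the similarity check is one plausible way an ATS-like module could switch between transferring and retraining.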