Learning and evolution of genetic network programming with knowledge transfer

2014 
Traditional evolutionary algorithms (EAs) generally start evolution from scratch, in other words, randomly. However, this is computationally expensive and can easily destabilize the evolution. To address these problems, this paper describes a new method that improves the evolution efficiency of a recently proposed graph-based EA, genetic network programming (GNP), by introducing a knowledge transfer ability. The basic concept of the proposed method, named GNP-KT, rests on two steps: first, it formulates knowledge by discovering abstract decision-making rules from source domains from a learning classifier system (LCS) perspective; second, the knowledge is adaptively reused as advice when applying GNP to a target domain. A reinforcement learning (RL)-based method is proposed to automatically transfer knowledge from the source domains to the target domain, which eventually allows GNP-KT to achieve both better initial performance and better final fitness values. Experimental results on a real mobile robot control problem confirm the superiority of GNP-KT over traditional methods.
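The two-step transfer idea described above, distilling abstract decision rules from a source task and then reusing them as advice while learning a target task, can be sketched with plain tabular Q-learning. Everything below (the toy corridor tasks, the rule format, the advice mechanism) is an illustrative assumption for exposition, not the paper's actual GNP/LCS implementation:

```python
import random

random.seed(0)

# Toy 1-D corridor task: states 0..n-1, goal at n-1; actions: 0 = left, 1 = right.
# NOTE: this toy setup is hypothetical, standing in for the paper's robot domains.
def q_learning(n_states, episodes, advice=None, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning. `advice` maps a state to a suggested action that is
    preferred while the Q-values for that state are still untrained (all equal),
    mimicking 'knowledge reused as advice' in the target domain."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    returns = []
    for _ in range(episodes):
        s, total = 0, 0.0
        for _ in range(4 * n_states):              # per-episode step limit
            if random.random() < eps:
                a = random.randrange(2)            # exploration
            elif advice and s in advice and Q[s][0] == Q[s][1]:
                a = advice[s]                      # fall back on transferred rule
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # greedy on learned values
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == n_states - 1 else -0.01
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, total = s2, total + r
            if s == n_states - 1:
                break
        returns.append(total)
    return Q, returns

# Step 1: learn in the source domain, then distill rules "state -> best action".
Q_src, _ = q_learning(n_states=5, episodes=200)
rules = {s: (0 if q[0] > q[1] else 1) for s, q in enumerate(Q_src)}

# Step 2: reuse the rules as advice in a larger target domain (longer corridor),
# versus learning the target task from scratch.
_, with_kt = q_learning(n_states=8, episodes=50, advice=rules)
_, scratch = q_learning(n_states=8, episodes=50)
```

In this sketch the advice only breaks ties among untrained Q-values, so the transferred rules shape early behavior without preventing the target-domain learner from overriding them, which is one simple way to realize the "better initial performance without sacrificing final fitness" property claimed in the abstract.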