EvoNAS: Evolvable Neural Architecture Search for Hyperspectral Unmixing

2021 
Owing to their powerful ability to learn low-dimensional representations and reconstruct inputs, autoencoders (AEs) have been successfully applied to hyperspectral unmixing (HU). However, AE-based unmixing architectures largely need to be designed by hand, incurring substantial costs in manpower and time. To unmix hyperspectral images more intelligently, we propose an evolvable neural architecture search method for HU, EvoNAS for short, which determines the network architecture by means of an evolutionary algorithm rather than gradient-based or reinforcement-learning-based rewards. In EvoNAS, a supernet containing all candidate architectures is first trained to learn the unmixing mapping in a self-supervised manner. The optimal network is then constructed by evaluating the unmixing results of different architectures drawn from the supernet. EvoNAS saves tremendous computational cost, since each candidate inherits the weights of the pre-trained supernet and avoids training from scratch during the search phase. Experimental results on two real hyperspectral datasets verify the effectiveness and superiority of EvoNAS and show the great potential of NAS for HU.
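The search procedure the abstract outlines can be sketched as a small evolutionary loop: sample candidate architectures, score each one cheaply (since weights are inherited from the pre-trained supernet, no retraining is needed), keep the best, and mutate them. The sketch below is illustrative only; the encoding of candidates, the fitness surrogate, and all hyperparameters are assumptions, not the authors' actual implementation.

```python
# Minimal sketch of evolutionary architecture search with weight inheritance.
# The architecture encoding (a width per encoder block) and the toy fitness
# surrogate are hypothetical stand-ins, not the EvoNAS implementation.
import random

random.seed(0)

SEARCH_SPACE = (16, 32, 64, 128)  # hypothetical layer-width choices per block
NUM_BLOCKS = 3

def random_arch():
    """Sample a candidate architecture: one width choice per encoder block."""
    return tuple(random.choice(SEARCH_SPACE) for _ in range(NUM_BLOCKS))

def mutate(arch):
    """Variation operator: resample the width of one randomly chosen block."""
    i = random.randrange(len(arch))
    child = list(arch)
    child[i] = random.choice(SEARCH_SPACE)
    return tuple(child)

def fitness(arch):
    """Stand-in for evaluating unmixing quality with weights inherited from
    the pre-trained supernet (no per-candidate training). Here a toy
    surrogate that simply prefers widths near 64 - illustrative only."""
    return -sum(abs(w - 64) for w in arch)

def evolve(generations=20, pop_size=8, keep=4):
    """Simple (keep + offspring) evolutionary loop over architectures."""
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]
        offspring = [mutate(random.choice(parents))
                     for _ in range(pop_size - keep)]
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
print(best)
```

Because the fitness evaluation only runs inference with shared supernet weights, the evolutionary loop stays cheap even for many candidates, which is the cost saving the abstract emphasizes.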