Neural Architecture Tuning with Policy Adaptation

2021 
Abstract: Neural architecture search (NAS) aims to automatically design task-specific neural architectures, whose performance has already surpassed that of many manually designed neural networks. Existing NAS techniques focus on searching for a neural architecture and training the optimal network weights from scratch. Nevertheless, in some scenarios it is important to study how to tune a given neural architecture rather than produce an entirely new one, which may lead to a better solution by combining human experience with the advantages of automatic search. This paper proposes to learn to tune architectures at hand to achieve better performance. The proposed Neural Architecture Tuning (NAT) algorithm trains a deep Q-network that, starting from a random architecture, tunes it toward better performance within a reduced search space. We then apply an adversarial autoencoder so that the learned policy generalizes to a different search space in real-world applications. The proposed algorithm is evaluated on the NAS-Bench-101 dataset. The results indicate that our NAT framework achieves state-of-the-art performance on the NAS-Bench-101 benchmark, and that the learned policy can be adapted to a different search space while maintaining performance.
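To make the tuning idea concrete, the following is a minimal sketch of a deep Q-network tuning loop in the spirit of the abstract (not the authors' code). The NAS-Bench-101-style cell encoding, the edge-flip/op-cycle action set, the `toy_accuracy` surrogate reward, and all hyperparameters are illustrative assumptions; in the paper the reward would come from NAS-Bench-101 accuracy queries, and the adversarial-autoencoder policy adaptation is not reproduced here.

```python
# Sketch of DQN-based architecture tuning on a NAS-Bench-101-like cell space.
# State: flattened upper-triangular adjacency + per-node op one-hots.
# Action: flip one edge or cycle the operation of one interior node.
# Reward: improvement in (surrogate) validation accuracy after the edit.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

NUM_NODES = 7                      # NAS-Bench-101 cells have up to 7 nodes
NUM_OPS = 3                        # conv3x3, conv1x1, maxpool (interior nodes)
EDGE_DIM = NUM_NODES * (NUM_NODES - 1) // 2
STATE_DIM = EDGE_DIM + (NUM_NODES - 2) * NUM_OPS
NUM_ACTIONS = EDGE_DIM + (NUM_NODES - 2)


def encode(adj, ops):
    """Flatten an architecture (upper-triangular adjacency + op one-hots)."""
    return np.concatenate([adj, ops.ravel()]).astype(np.float32)


def random_arch():
    adj = np.random.randint(0, 2, size=EDGE_DIM)
    ops = np.eye(NUM_OPS)[np.random.randint(0, NUM_OPS, size=NUM_NODES - 2)]
    return adj, ops


def toy_accuracy(adj, ops):
    """Stand-in for the NAS-Bench-101 accuracy lookup (assumption)."""
    return float(0.8 + 0.1 * np.tanh(adj.sum() / EDGE_DIM + ops[:, 0].mean()))


def apply_action(adj, ops, action):
    """Edit the architecture: flip one edge, or cycle one node's operation."""
    adj, ops = adj.copy(), ops.copy()
    if action < EDGE_DIM:
        adj[action] ^= 1
    else:
        node = action - EDGE_DIM
        ops[node] = np.roll(ops[node], 1)
    return adj, ops


q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, NUM_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, eps = 0.9, 0.2

for episode in range(200):
    adj, ops = random_arch()                     # start from a random architecture
    acc = toy_accuracy(adj, ops)
    for step in range(20):                       # tune for a fixed budget
        state = encode(adj, ops)
        if random.random() < eps:                # epsilon-greedy exploration
            action = random.randrange(NUM_ACTIONS)
        else:
            with torch.no_grad():
                action = int(q_net(torch.from_numpy(state)).argmax())
        adj, ops = apply_action(adj, ops, action)
        new_acc = toy_accuracy(adj, ops)
        replay.append((state, action, new_acc - acc, encode(adj, ops)))
        acc = new_acc

        if len(replay) >= 64:                    # one DQN update per step
            batch = random.sample(replay, 64)
            s, a, r, s2 = map(np.stack, zip(*batch))
            s, s2 = torch.from_numpy(s), torch.from_numpy(s2)
            a = torch.from_numpy(a).long()
            r = torch.from_numpy(r.astype(np.float32))
            q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = r + gamma * q_net(s2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In this sketch the tuning policy is learned on a fixed, reduced encoding; adapting it to a different search space (as the abstract describes with an adversarial autoencoder) would additionally require mapping the new space's architecture encodings into the state space the Q-network was trained on.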