A Mixture of Experts Approach for Low-Cost DNN Customization

2020 
Editor’s notes: This article introduces a novel architecture for local learning, namely the mixture of experts (MoE), to customize a deep neural network (DNN) deployed on an edge device, improving performance with low implementation overhead. — Deming Chen, University of Illinois at Urbana-Champaign
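
To make the idea concrete, below is a minimal sketch of a mixture-of-experts layer in PyTorch: a gating network produces per-input weights over several lightweight expert sub-networks, and such a block could replace or augment a layer of a deployed DNN for local customization. This is an illustrative assumption only; the article's exact architecture, expert design, and training procedure are not described in this excerpt, and the class and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    """Minimal MoE layer: a gate softly combines the outputs of several experts."""
    def __init__(self, in_dim: int, out_dim: int, num_experts: int = 4):
        super().__init__()
        # Each expert is a small sub-network; keeping them lightweight limits
        # the overhead of on-device customization.
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_experts)]
        )
        # The gating network assigns a per-input weight to each expert.
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)            # (batch, num_experts)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (batch, num_experts, out_dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # weighted combination

# Hypothetical usage: fine-tune only the gate (and optionally the experts)
# on local data while the rest of the deployed network stays frozen.
moe = MixtureOfExperts(in_dim=128, out_dim=64, num_experts=4)
x = torch.randn(8, 128)
y = moe(x)  # shape: (8, 64)
```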