S2DNAS: Transforming Static CNN Model for Dynamic Inference via Neural Architecture Search
2020
Recently, dynamic inference has emerged as a promising way to reduce the computational cost of deep convolutional neural networks (CNNs). In contrast to static methods (e.g., weight pruning), dynamic inference adaptively adjusts the inference process according to each input sample, which can considerably reduce the computational cost on “easy” samples while maintaining the overall model performance.
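Below is a minimal sketch of the general idea of dynamic (early-exit) inference that the abstract describes: a cheap early classifier handles confidently predicted "easy" samples, and only uncertain samples pay for the deeper stages. The network layout, `threshold` parameter, and exit placement are illustrative assumptions, not the paper's S2DNAS search or transformation procedure.

```python
# Illustrative sketch of dynamic (early-exit) inference.
# All layer sizes and the confidence threshold are hypothetical choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitCNN(nn.Module):
    """Toy CNN with two exits: a cheap early classifier and a final classifier."""

    def __init__(self, num_classes: int = 10, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold  # confidence needed to stop early ("easy" samples)
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8)
        )
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)
        self.stage2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4)
        )
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)

    @torch.no_grad()
    def forward(self, x: torch.Tensor):
        # Run the cheap first stage and check prediction confidence.
        h1 = self.stage1(x)
        logits1 = self.exit1(h1.flatten(1))
        conf, _ = F.softmax(logits1, dim=1).max(dim=1)
        if bool((conf >= self.threshold).all()):
            return logits1, "early exit"  # skip the remaining computation
        # Otherwise, pay the full cost of the deeper stage.
        h2 = self.stage2(h1)
        logits2 = self.exit2(h2.flatten(1))
        return logits2, "full network"


# Usage: batch size 1 so each input sample decides its own inference path.
model = EarlyExitCNN().eval()
logits, path = model(torch.randn(1, 3, 32, 32))
print(path, logits.shape)
```

The per-sample path selection is what distinguishes this from static compression such as weight pruning: the computational cost varies with the input rather than being fixed at deployment time.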