Adaptive Consensus-Based Ensemble for Improved Deep Learning Inference Cost

2021 
Deep learning models are continuously improving the state of the art in nearly every domain, achieving ever-higher levels of accuracy. To sustain this performance, however, these models have grown larger and more computationally intensive at a staggering rate. Using an ensemble of deep learning models to improve accuracy (compared to running a single model) is a well-known approach, but deploying it in real-world settings is challenging due to its exorbitant inference cost. In this paper we present a novel method for reducing the cost associated with an ensemble of models by \(\sim \)50% on average while maintaining comparable accuracy. The proposed method is simple to implement and fully agnostic to the model and the problem domain. The experimental results presented demonstrate that our method can be used in a number of configurations, all of which provide a much better "performance per cost" than standard ensembles, whether using an ensemble of N instances of the same model architecture (each trained from scratch) or an ensemble of completely different models.
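The abstract does not spell out the stopping rule, but the title suggests querying ensemble members one at a time and exiting early once they reach consensus. The sketch below is a minimal illustration under that assumption; the `quorum` parameter and the majority-vote fallback are hypothetical choices, not details taken from the paper.

```python
from collections import Counter

def consensus_predict(models, x, quorum=2):
    """Query ensemble members sequentially; return a label as soon as
    `quorum` members agree, skipping the remaining (costly) forward passes.

    Returns (label, n_models_run) so the inference cost saving is visible.
    Hypothetical sketch -- the paper's actual consensus rule may differ.
    """
    votes = Counter()
    for i, model in enumerate(models, start=1):
        label = model(x)            # one forward pass through member i
        votes[label] += 1
        if votes[label] >= quorum:  # early exit on consensus
            return label, i
    # No quorum reached: fall back to a plain majority vote over all members.
    return votes.most_common(1)[0][0], len(models)
```

With an ensemble whose first two members agree, only two of the N forward passes are executed, which is the kind of saving (roughly half the ensemble cost on average) the abstract reports.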