Optimizing the Energy Consumption of Neural Networks

2020 
Embedded systems have only a limited energy budget, and embedded product design consequently favors low-cost, low-spec processors. At the same time, such systems must run software of considerable algorithmic complexity with fast response times. Deep learning models are particularly energy-hungry, especially when performing demanding tasks such as real-time object recognition in images. Inference time, energy consumption, and accuracy are conflicting optimization criteria and together constitute a multi-objective optimization problem. We propose a methodology for the multi-objective optimization of Convolutional Neural Networks with respect to these criteria. The method uses the NSGA-III algorithm with customized operators to find an improved network architecture. A proof of concept is given using the GTSRB dataset as a benchmark. The results are promising and show that a practically relevant trade-off between accuracy and computing effort can be determined with the evolutionary approach presented here.
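
The following is a minimal sketch of how such a three-objective architecture search could be set up with the pymoo library's NSGA-III implementation (import paths assume a recent pymoo release). The decision variables, bounds, and the three objective functions below are illustrative placeholders, not the paper's customized operators or its actual accuracy, energy, and latency measurements.

```python
# Minimal NSGA-III sketch for a three-objective CNN architecture search.
# Assumes pymoo >= 0.6; the objectives are cheap analytic proxies, not real
# training runs on GTSRB or energy measurements on embedded hardware.
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions


class CNNArchitectureProblem(ElementwiseProblem):
    """Decision vector x = [n_conv_blocks, filters_per_block, kernel_size]."""

    def __init__(self):
        super().__init__(
            n_var=3,
            n_obj=3,
            xl=np.array([1, 8, 3]),    # lower bounds of the search space
            xu=np.array([6, 128, 7]),  # upper bounds of the search space
        )

    def _evaluate(self, x, out, *args, **kwargs):
        # Round continuous variables to discrete architecture choices.
        blocks, filters, kernel = np.round(x)
        # Placeholder proxies: in the real setting these would come from
        # training/validating the candidate CNN and from measured energy
        # and inference time on the target device.
        macs = blocks * filters * kernel ** 2            # rough compute cost
        error = 1.0 / (1.0 + 0.001 * blocks * filters)   # error falls with capacity
        energy = 1e-4 * macs                             # energy grows with compute
        latency = 5e-5 * macs                            # latency grows with compute
        out["F"] = [error, energy, latency]


# NSGA-III requires reference directions to spread solutions along the front.
ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
algorithm = NSGA3(ref_dirs=ref_dirs, pop_size=92)

res = minimize(CNNArchitectureProblem(), algorithm, ("n_gen", 40),
               seed=1, verbose=False)
print("Pareto-optimal candidates (error, energy, latency):")
print(res.F)
```

The resulting Pareto set contains architectures that trade accuracy against energy and inference time; a designer would then pick the operating point that fits the energy budget of the target embedded platform.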