Towards Efficient Execution of Mainstream Deep Learning Frameworks on Mobile Devices: Architectural Implications

2020 
In recent years, there has been continuously growing interest in bringing artificial intelligence capabilities to mobile devices. However, this effort still faces several challenges, such as constrained computation and memory resources, power drain, and thermal limits. To develop deep learning (DL) algorithms for mobile devices, we need to understand their behavior. In this article, we explore the architectural behavior of several mainstream DL frameworks on mobile devices by performing a comprehensive characterization of performance, accuracy, energy efficiency, and thermal behavior. We select four model compression methods, apply them to the networks, and analyze their impact on node count, memory usage, execution time, model size, inference time, energy consumption, and thermal distribution. With these insights into the characteristics of DL-based mobile applications, we hope to guide the design of future smartphone platforms toward lower energy consumption.
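The abstract does not name the four compression methods studied, but magnitude-based weight pruning is one widely used technique in this space. As a minimal, hypothetical sketch (the function name and parameters are illustrative, not from the paper), pruning the smallest-magnitude fraction of a weight tensor can look like:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights.

    A generic illustration of magnitude pruning; ties at the
    threshold may prune slightly more than the requested fraction.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.9, -0.05], [0.02, -0.7]])
print(magnitude_prune(w, 0.5))  # the two smallest-magnitude weights are zeroed
```

Pruning like this shrinks the effective model and can reduce inference time and energy use on constrained hardware, which is the kind of trade-off the characterization in this article measures.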