Energy-Quality Scalable Monocular Depth Estimation on Low-Power CPUs

2021 
Recent advancements in deep learning have demonstrated that inferring high-quality depth maps from a single image is feasible and accurate thanks to Convolutional Neural Networks (CNNs), but how to process such compute- and memory-intensive models on portable, low-power devices remains a concern. Dynamic energy-quality scaling is an interesting yet less explored option in this field. It can improve efficiency through opportunistic computing policies where performance is boosted only when needed, achieving substantial energy savings on average. Implementing such a computing paradigm requires a scalable inference model, which is the target of this work. Specifically, we describe and characterize the design of an Energy-Quality scalable Pyramidal Network (EQPyD-Net), a lightweight CNN capable of modulating its computational effort at run time with minimal memory resources. We describe the architecture of the network and the optimization flow, covering the key aspects that enable dynamic scaling, namely the optimized training procedures, the compression stage via fixed-point quantization, and the code optimization for deployment on commercial low-power CPUs adopted in the edge segment. To assess the effect of the proposed design knobs, we evaluated the prediction quality on the standard KITTI dataset and the energy and memory resources on the ARM Cortex-A53 CPU. The collected results demonstrate the flexibility of the proposed network and its energy efficiency. EQPyD-Net can be shifted across five operating points, ranging from a maximum accuracy of 82.2% at 0.4 Frame/J, up to 92.6% energy savings with a 6.1% accuracy loss, while keeping a compact memory footprint of 5.2 MB for the weights and 38.3 MB (in the worst case) for the processing.
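The abstract mentions compression via fixed-point quantization as one of the stages enabling deployment on low-power CPUs. The paper's exact scheme (word lengths, per-layer scaling) is not given in the abstract, so the following is only a minimal sketch of generic symmetric fixed-point quantization; the function names and the 8-bit/7-fraction-bit configuration are illustrative assumptions, not the authors' implementation.

```python
def quantize_fixed_point(weights, frac_bits=7, word_bits=8):
    """Map floats to signed fixed-point integers: value ~= q * 2**-frac_bits.

    Illustrative sketch; word_bits/frac_bits are assumed, not from the paper.
    """
    scale = 1 << frac_bits                 # 2**frac_bits
    qmax = (1 << (word_bits - 1)) - 1      # e.g. 127 for 8-bit
    qmin = -(1 << (word_bits - 1))         # e.g. -128 for 8-bit
    # Round to the nearest representable level, then clip to the word range.
    return [max(qmin, min(qmax, round(w * scale))) for w in weights]


def dequantize(q, frac_bits=7):
    """Recover the (approximate) float values from the integer codes."""
    scale = 1 << frac_bits
    return [v / scale for v in q]
```

For example, with 7 fractional bits, `0.5` maps to the integer code `64` (since 0.5 * 128 = 64), and values outside the representable range saturate at the word limits.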