Poster Abstract: Edge-Caches for Vision Applications

2016 
One of the thrusts of mobile and pervasive computing is supporting vision-based perception applications. Vision-based applications, such as augmented reality, help users augment their understanding of the physical world through the camera(s) on their mobile devices. Such applications need to provide a seamless experience and hence require minimal end-to-end latency. However, these applications cannot be executed entirely on the devices. The recognition algorithms they rely on, such as feature extraction and matching, require intensive computation and access to "big data", such as large labeled image datasets, to be fast and accurate. Such data and computational resources are not available locally on the device, so these applications offload intensive tasks to the cloud: the devices send captured images to the cloud, which executes the recognition algorithms using its computational resources and access to big data. However, the heavy computation and the added communication latency still deter the seamless interaction desired for such applications. Thus, there is a need to accelerate vision-based mobile applications. One suggested approach toward fulfilling this need has been to place more compute resources at the edge. We propose to efficiently utilize these edge servers, complement them with mobile edge-clouds, and vertically integrate mobile, edge, and cloud through dynamic edge-caching to deliver low-latency vision-based perception applications.
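To make the edge-caching idea concrete, the sketch below shows one possible edge-first lookup pattern under assumptions not stated in the abstract: recognition results are cached on a nearby edge server keyed by an image fingerprint, and requests fall back to the cloud only on a cache miss. The names (`EdgeCache`, `cloud_recognize`, the content-hash key) are illustrative and do not reflect the authors' implementation.

```python
"""Minimal sketch of edge-first recognition with cloud fallback (assumed design)."""
import hashlib
import time
from collections import OrderedDict


class EdgeCache:
    """LRU cache of recognition results keyed by an image fingerprint."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._store = OrderedDict()

    def _key(self, image_bytes):
        # A content hash stands in for a perceptual or feature-based key.
        return hashlib.sha256(image_bytes).hexdigest()

    def get(self, image_bytes):
        key = self._key(image_bytes)
        if key in self._store:
            self._store.move_to_end(key)  # mark as recently used
            return self._store[key]
        return None

    def put(self, image_bytes, label):
        key = self._key(image_bytes)
        self._store[key] = label
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used


def cloud_recognize(image_bytes):
    """Placeholder for the expensive cloud-side recognition pipeline."""
    time.sleep(0.2)  # simulated network + compute latency
    return "label-" + hashlib.sha256(image_bytes).hexdigest()[:6]


def recognize(image_bytes, cache):
    """Edge-first lookup: answer from the edge cache, else offload to the cloud."""
    cached = cache.get(image_bytes)
    if cached is not None:
        return cached, "edge-hit"
    label = cloud_recognize(image_bytes)
    cache.put(image_bytes, label)
    return label, "cloud-miss"


if __name__ == "__main__":
    cache = EdgeCache(capacity=8)
    frame = b"example-camera-frame"
    print(recognize(frame, cache))  # first request is offloaded to the cloud
    print(recognize(frame, cache))  # repeated request is served at the edge
```

In this sketch, repeated queries for popular content are answered at the edge without the cloud round trip, which is the latency saving the abstract argues for; a real system would use perceptual or feature-space matching rather than an exact content hash.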