Seamless computation offloading for mobile applications using an online learning algorithm

2021 
Although recent developments in mobile device hardware, such as processor and memory capacity, have increased their capabilities, mobile devices are still not comparable to cloud servers. These capacity constraints can be overcome by having the computation-intensive work of mobile applications performed on powerful local or cloud servers. One important aspect of computation offloading is the decision process, which is determined by the run-time costs of executing the computation-intensive components on the server or locally on the device. This study proposes a novel hybrid model. An object dependency graph was created by gathering data from the mobile device at run time. This graph was partitioned with a novel model to determine the offloadable parts, which were then sent to the server using an online learning algorithm. Mobile applications were implemented on Android OS to verify the hybrid model. Making the offloading decision properly improved application performance and decreased battery consumption. Our algorithm yielded better results than existing studies: offloading the computation-intensive parts of mobile applications reduced response time by 2–73% and energy consumption by 16–44%.
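The abstract does not spell out the online learning algorithm used for the offloading decision. As a rough illustration only, the Java sketch below frames the run-time decision as an epsilon-greedy bandit choosing between local and remote execution, where the observed cost could combine response time and energy; the class, method, and parameter names (OffloadDecider, choose, update, epsilon) are hypothetical and not taken from the paper, and the dependency-graph partitioning step is not reproduced here.

```java
import java.util.Random;

/**
 * Minimal sketch of an online (epsilon-greedy) offloading decision.
 * All names and numbers are illustrative assumptions, not the paper's model.
 */
public class OffloadDecider {
    private static final int LOCAL = 0, REMOTE = 1;

    private final double epsilon;                    // exploration rate
    private final double[] avgCost = new double[2];  // running mean cost per choice
    private final int[] count = new int[2];
    private final Random rng = new Random();

    public OffloadDecider(double epsilon) {
        this.epsilon = epsilon;
    }

    /** Decide where to run the next computation-intensive component. */
    public int choose() {
        if (rng.nextDouble() < epsilon || count[LOCAL] == 0 || count[REMOTE] == 0) {
            return rng.nextInt(2);                                   // explore
        }
        return avgCost[LOCAL] <= avgCost[REMOTE] ? LOCAL : REMOTE;   // exploit
    }

    /** Feed back the observed cost (e.g. weighted response time plus energy). */
    public void update(int choice, double observedCost) {
        count[choice]++;
        avgCost[choice] += (observedCost - avgCost[choice]) / count[choice];
    }

    // Toy simulation: remote execution is cheaper on average but noisier (network).
    public static void main(String[] args) {
        OffloadDecider decider = new OffloadDecider(0.1);
        Random env = new Random(42);
        for (int i = 0; i < 1000; i++) {
            int where = decider.choose();
            double cost = (where == LOCAL)
                    ? 100 + env.nextGaussian() * 10   // simulated local cost
                    : 60 + env.nextGaussian() * 20;   // simulated offloaded cost
            decider.update(where, cost);
        }
        System.out.printf("avg local cost=%.1f, avg remote cost=%.1f%n",
                decider.avgCost[LOCAL], decider.avgCost[REMOTE]);
    }
}
```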