A Reinforcement Model Based Prioritized Replay to Solve the Offloading Problem in Edge Computing.

2021 
Mobile edge computing is widely used to help mobile devices improve their data-processing speed. However, one of its main challenges is how to generate computation offloading decisions effectively and quickly in complex wireless scenarios. In this paper, we consider an application scenario with multiple user devices, where each device makes a binary computation offloading decision: a task is either executed on the local device or offloaded to a cloud server via the wireless network. A model based on deep reinforcement learning is proposed to optimize these offloading decisions. First, the weighted rate of offloading computation is introduced as the reward in the Q function. Second, offloading decisions are generated by a deep Q-network (DQN) with batch normalization layers. Finally, the deep Q-network is trained with a designed prioritized replay policy. Experimental results indicate that the proposed model generates optimal offloading decisions in a short time and converges faster on the weighted rate.
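To make the described pipeline concrete, below is a minimal sketch (not the authors' implementation) of a DQN with batch normalization layers, a proportional prioritized replay buffer, and a training step that uses the reward (here, the weighted offloading rate) as the Q-learning target. It assumes PyTorch; the network sizes, buffer capacity, and hyperparameters are hypothetical placeholders.

```python
import numpy as np
import torch
import torch.nn as nn


class OffloadDQN(nn.Module):
    """Q-network with batch normalization; outputs Q-values over binary
    offloading actions (0 = execute locally, 1 = offload to the edge/cloud)."""

    def __init__(self, state_dim, num_actions=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state):
        return self.net(state)


class PrioritizedReplay:
    """Proportional prioritized replay: transitions are sampled with
    probability proportional to (|TD error| + eps) ** alpha."""

    def __init__(self, capacity=10000, alpha=0.6, eps=1e-5):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.buffer, self.priorities = [], []

    def push(self, transition):
        # New transitions get the current max priority so they are replayed soon.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(max_p)

    def sample(self, batch_size):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return idx, [self.buffer[i] for i in idx]

    def update(self, idx, td_errors):
        # Refresh priorities with the latest absolute TD errors.
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(float(e)) + self.eps


def train_step(q_net, target_net, replay, optimizer, batch_size=32, gamma=0.99):
    """One DQN update on a prioritized batch; the reward r is the weighted
    offloading rate of the chosen decision vector."""
    idx, batch = replay.sample(batch_size)
    s = torch.tensor([t[0] for t in batch], dtype=torch.float32)
    a = torch.tensor([t[1] for t in batch], dtype=torch.int64)
    r = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    s2 = torch.tensor([t[3] for t in batch], dtype=torch.float32)

    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values
    td_error = target - q

    loss = td_error.pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    replay.update(idx, td_error.detach())
    return loss.item()
```

In this sketch the prioritized sampling is the simple proportional variant; importance-sampling weights and per-device decision aggregation, which a full implementation would need, are omitted for brevity.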