Deep Reinforcement Learning for Energy-Efficient Computation Offloading in Mobile Edge Computing

2021 
Mobile Edge Computing (MEC) has emerged as a promising computing paradigm in the 5G architecture: it empowers User Equipments (UEs) with additional computation and energy resources by migrating workloads from UEs to nearby MEC servers. Although computation offloading and resource allocation in MEC have been studied under different optimization objectives, prior work mainly focuses on improving performance in quasi-static systems and seldom considers time-varying system conditions. In this paper, we investigate the joint optimization of computation offloading and resource allocation in a dynamic multi-user MEC system. Our objective is to minimize the energy consumption of the entire MEC system, subject to delay constraints and the uncertain resource requirements of heterogeneous computation tasks. We formulate the problem as a Mixed Integer Non-Linear Programming (MINLP) problem and propose a value-iteration-based Reinforcement Learning (RL) method, Q-Learning, to determine the joint policy of computation offloading and resource allocation. To avoid the curse of dimensionality, we further propose a Double Deep Q Network (DDQN) based method that efficiently approximates the Q-learning value function. Simulation results demonstrate that the proposed methods significantly outperform the baseline methods (other than the Exhaustion method) across different scenarios. In particular, the proposed DDQN-based method achieves performance very close to that of the Exhaustion method and reduces average energy consumption by 20%, 35%, and 53% compared with the Offloading Decision, Local First, and Offloading First methods, respectively, when the number of UEs is 5.
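To make the DDQN idea referenced in the abstract concrete, the following is a minimal PyTorch sketch of a Double DQN target computation and one training step. The state encoding, network sizes, action space, and hyperparameters are assumptions for illustration, not the authors' implementation; in the paper's setting the reward would reflect (negative) system energy consumption under the delay constraint.

```python
# Hypothetical Double DQN sketch for a joint offloading/resource-allocation agent.
# All shapes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a system state (e.g. channel gains, task sizes, server load)
    to Q-values over discrete joint offloading/allocation actions."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def ddqn_target(online: QNetwork, target: QNetwork,
                reward: torch.Tensor, next_state: torch.Tensor,
                done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, reducing Q-value over-estimation."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

# Example: one gradient step on a sampled batch (replay buffer omitted).
state_dim, num_actions, batch = 16, 8, 32
online = QNetwork(state_dim, num_actions)
tgt = QNetwork(state_dim, num_actions)
tgt.load_state_dict(online.state_dict())
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)

s = torch.randn(batch, state_dim)
a = torch.randint(0, num_actions, (batch, 1))
r = torch.randn(batch)              # e.g. negative energy consumption
s2 = torch.randn(batch, state_dim)
d = torch.zeros(batch)              # episode-termination flags

q_sa = online(s).gather(1, a).squeeze(1)
loss = nn.functional.mse_loss(q_sa, ddqn_target(online, tgt, r, s2, d))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key design choice shown here is the decoupling of action selection (online network) from action evaluation (target network), which is what distinguishes Double DQN from vanilla DQN and helps the value estimates remain stable in the dynamic MEC environment the abstract describes.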