Deep reinforcement learning based task scheduling scheme in mobile edge computing network

2021 
Mobile edge computing is a new distributed computing paradigm that brings computation and data storage closer to where they are needed, improving response times and saving bandwidth in dynamic mobile networking environments. Despite improvements in network technology, data centers cannot always guarantee acceptable transfer rates and response times, which are critical requirements for many applications. The aim of mobile edge computing is to move computation away from data centers towards the edge of the network, exploiting smart objects, mobile phones, or network gateways to perform tasks and provide services on behalf of the cloud. In this paper, we design a task offloading scheme in the mobile edge network that handles task distribution, offloading, and management by applying deep reinforcement learning. Specifically, we formulate the task offloading problem as a multi-agent reinforcement learning problem. The decision-making process of each agent is modeled as a Markov decision process, and a deep Q-learning approach is applied to cope with the large state and action spaces. To evaluate the performance of our proposed scheme, we develop a simulation environment for the mobile edge computing scenario. Our preliminary evaluation results with a simplified multi-armed bandit model indicate that our proposed solution provides lower latency for computation-intensive tasks in the mobile edge network and outperforms a naive task offloading method.
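The abstract's preliminary evaluation uses a simplified multi-armed bandit model for offloading decisions. The sketch below is a minimal, hypothetical illustration of that idea (the paper does not publish its implementation): each arm stands for a candidate edge server, the reward is the negative of the observed task latency, and an epsilon-greedy rule balances exploring servers against exploiting the one with the best running-average latency. All names, the epsilon value, and the latency setup are assumptions for illustration.

```python
import random

class EpsilonGreedyBandit:
    """Simplified multi-armed bandit for picking an offloading target.

    Each arm is a hypothetical edge server; the reward fed back to
    update() is the negative task latency, so higher reward = faster server.
    """

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.counts = [0] * n_arms        # times each arm was chosen
        self.values = [0.0] * n_arms      # running mean reward per arm
        self.rng = random.Random(seed)

    def select_arm(self):
        # With probability epsilon, explore a random server;
        # otherwise exploit the server with the best mean reward so far.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental running-mean update for the chosen arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: three edge servers with (assumed) fixed latencies; the bandit
# should concentrate its choices on the fastest one (index 1).
bandit = EpsilonGreedyBandit(n_arms=3, epsilon=0.1, seed=0)
latencies = [5.0, 2.0, 8.0]
for _ in range(200):
    arm = bandit.select_arm()
    bandit.update(arm, -latencies[arm])
```

The full scheme described in the paper replaces this per-decision bandit with a per-agent Markov decision process solved by deep Q-learning, which additionally conditions on state (e.g., queue lengths and channel conditions) rather than treating each offloading choice as context-free.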