Resource Allocation in MEC-enabled Vehicular Networks: A Deep Reinforcement Learning Approach

2020 
Mobile edge computing (MEC) is a promising technique to liberate mobile vehicles from increasingly intensive computation workloads and to improve the quality of the computation experience. Computation-intensive, delay-sensitive applications in MEC-enabled vehicular networks require a strategy that reasonably allocates both the computation resource and the transmission resource. In this paper, we tackle this problem by proposing a novel resource allocation algorithm based on reinforcement learning. Specifically, we first construct an MEC-enabled vehicular network that supports low-latency communication between vehicles via the base station. Using the deep deterministic policy gradient (DDPG) method, we design a real-time adaptive algorithm at the MEC server that allocates the computation and transmission resources for task offloading. The proposed algorithm operates on continuous actions and manages the CPU cores and transmit power of the MEC server under constraints on latency and decoding error probability. Simulations and comparisons show that, across different task arrival probabilities, the proposed algorithm achieves better task-offloading performance while consuming less energy.
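The abstract's key idea, a deterministic policy that maps a network state directly to continuous allocations of CPU cores and transmit power, can be sketched as follows. This is an illustrative stand-in for the DDPG actor network, not the paper's implementation: the state features, action bounds (`MAX_CORES`, `MAX_POWER_W`), and the single linear layer are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical resource bounds (not from the paper): the MEC server can
# allocate up to MAX_CORES CPU cores and MAX_POWER_W of transmit power.
MAX_CORES = 8.0
MAX_POWER_W = 1.0

class Actor:
    """Minimal deterministic policy: state -> continuous action.

    Stand-in for the DDPG actor network: one linear layer followed by a
    tanh squashing nonlinearity, rescaled so every action lands inside
    the feasible box [0, MAX_CORES] x [0, MAX_POWER_W].
    """

    def __init__(self, state_dim, action_dim):
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim))
        self.b = np.zeros(action_dim)

    def act(self, state):
        raw = np.tanh(self.W @ state + self.b)   # each component in (-1, 1)
        scaled = (raw + 1.0) / 2.0               # each component in (0, 1)
        return scaled * np.array([MAX_CORES, MAX_POWER_W])

# Illustrative state: e.g. [task queue length, channel gain, task size].
actor = Actor(state_dim=3, action_dim=2)
cores, power = actor.act(np.array([4.0, 0.7, 1.2]))
print(f"allocate {cores:.2f} cores, {power:.3f} W transmit power")
```

In full DDPG the actor's weights would be trained against a critic network from replayed transitions; here only the continuous-action interface is shown, which is what distinguishes this approach from discrete-action methods such as DQN.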