Vehicular Edge Computing via Deep Reinforcement Learning.
2019
Smart vehicles form the Vehicle of Internet, which can execute various intelligent services. Although the computation capability of an individual vehicle is limited, multiple types of edge computing nodes provide heterogeneous resources for vehicular services. When offloading a complicated service to a vehicular edge computing node, the decision must take numerous factors into account. Most existing offloading-decision work formulates the decision as a resource scheduling problem with single or multiple objective functions and a set of constraints, and explores customized heuristic algorithms. However, offloading the multiple data-dependent tasks of a service is a difficult decision, because an optimal solution must account for the resource requirements, the access network, the user mobility, and, importantly, the data dependency. Inspired by recent advances in machine learning, we propose a knowledge-driven (KD) service offloading decision framework for the Vehicle of Internet, which derives the optimal policy directly from the environment. We formulate the offloading decision for the multiple tasks of a service as a long-term planning problem and exploit recent deep reinforcement learning to obtain the optimal solution. Using the learned offloading knowledge, the framework considers the future data dependency of subsequent tasks when making the decision for the current task. Moreover, the framework supports pre-training at a powerful edge computing node and continual online learning while the vehicular service is executed, so that it can adapt to environment changes and learn policies that are sensible in hindsight. Simulation results show that the KD service offloading decision converges quickly, adapts to different conditions, and outperforms a greedy offloading decision algorithm.
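To make the long-term planning formulation concrete, the following is a minimal sketch of a deep-Q-network-style offloading agent that picks an execution site for the current task from a learned value function. The environment interface, the state encoding (task demand, link rate, mobility, remaining dependent tasks, queue length), the candidate target list, and all names (QNetwork, DQNOffloader, TARGETS) are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Hedged sketch of a DQN-style offloading agent (PyTorch); the state encoding
# and target list below are assumptions, not the paper's exact design.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

TARGETS = ["local", "edge_node_1", "edge_node_2"]  # hypothetical execution sites
STATE_DIM = 6  # e.g. task CPU demand, input size, link rate, vehicle speed,
               # number of remaining dependent tasks, edge queue length

class QNetwork(nn.Module):
    """Maps an offloading state to one Q-value per candidate target."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class DQNOffloader:
    def __init__(self, state_dim=STATE_DIM, n_actions=len(TARGETS),
                 gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, n_actions)
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps = gamma, eps
        self.replay = deque(maxlen=10_000)

    def act(self, state):
        # Epsilon-greedy choice of offloading target for the current task.
        if random.random() < self.eps:
            return random.randrange(len(TARGETS))
        with torch.no_grad():
            qvals = self.q(torch.tensor(state, dtype=torch.float32))
        return int(qvals.argmax().item())

    def store(self, s, a, r, s_next, done):
        self.replay.append((s, a, r, s_next, done))

    def learn(self, batch_size=32):
        # One gradient step on a sampled minibatch of past offloading decisions.
        if len(self.replay) < batch_size:
            return
        s, a, r, s2, d = zip(*random.sample(self.replay, batch_size))
        s = torch.tensor(s, dtype=torch.float32)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        s2 = torch.tensor(s2, dtype=torch.float32)
        d = torch.tensor(d, dtype=torch.float32)
        q_sa = self.q(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * (1 - d) * self.q(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In such a sketch, the reward for each decision could be the negative completion latency of the task (including transmission over the access network), so that maximizing long-term return trades off the current task against the data dependency of the tasks that follow. Pre-training would run this training loop on simulated traces at the edge node, and online learning corresponds to continuing to call store() and learn() while the vehicular service executes.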