Distributed computation offloading method based on deep reinforcement learning in ICV
2021
Abstract With the rapid development of Intelligent Connected Vehicles (ICVs), more effective computation-resource optimization schemes for task scheduling are required for large-scale network deployment. We observe that an offloading scheme in which almost all tasks are executed on Multi-Access Edge Computing (MEC) servers leaves abundant vehicle resources underutilized and places a heavy burden on the servers, and is therefore a poor solution for resource utilization. We first consider the scenario where MEC is unavailable or insufficient, and treat the surrounding vehicles as a Resource Pool (RP). We then propose a distributed computation offloading method that utilizes all available resources by splitting a complex task into many small sub-tasks. Assigning these sub-tasks to achieve a good execution time in the RP is a hard problem: the execution time of a compound task is a min–max objective. In this paper, a distributed computation offloading strategy based on a Deep Q-learning Network (DQN) is proposed to find the offloading decision that minimizes the execution time of a compound task. We demonstrate that the proposed model takes full advantage of the computing resources of surrounding vehicles and greatly reduces the execution time of computation tasks.
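The min–max objective described in the abstract can be illustrated with a minimal sketch. The assumptions here (sub-task workloads in compute units, per-vehicle capacities, and execution time modeled as workload divided by capacity) are illustrative and not taken from the paper; the brute-force search below simply enumerates all assignments to show the optimum that a DQN-based policy would approximate on large instances.

```python
from itertools import product

def completion_time(assignment, workloads, capacities):
    # assignment[i] = index of the vehicle executing sub-task i.
    # A vehicle's finish time is the sum of its assigned workloads
    # divided by its capacity; the compound task completes when the
    # slowest vehicle finishes -- hence the min-max structure.
    per_vehicle = [0.0] * len(capacities)
    for task, vehicle in enumerate(assignment):
        per_vehicle[vehicle] += workloads[task] / capacities[vehicle]
    return max(per_vehicle)

def brute_force_offload(workloads, capacities):
    # Exhaustively search all |vehicles|^|sub-tasks| assignments and
    # return the one minimizing the maximum per-vehicle finish time.
    best = min(product(range(len(capacities)), repeat=len(workloads)),
               key=lambda a: completion_time(a, workloads, capacities))
    return best, completion_time(best, workloads, capacities)

# Example: three sub-tasks shared between two vehicles in the RP.
assignment, t = brute_force_offload([4.0, 2.0, 2.0], [2.0, 1.0])
```

Brute force is exponential in the number of sub-tasks, which is precisely why the paper turns to a learned (DQN) policy for the assignment decision.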