Resource Allocation for Edge Computing in IoT Networks via Reinforcement Learning.
2019
In this paper, we consider resource allocation for edge computing in Internet of Things (IoT) networks. Specifically, each end device is treated as an agent that decides whether to offload its computation tasks to the edge devices. To minimize the long-term weighted sum cost, which includes the power consumption and the task execution latency, we take the channel conditions between the end devices and the gateway, the computation task queue, and the remaining computation resources of the end devices as the network state. The problem of making a sequence of offloading decisions at the end devices is modelled as a Markov decision process and solved via reinforcement learning. On this basis, we propose a near-optimal task offloading algorithm based on ε-greedy Q-learning. Simulations validate the feasibility of the proposed algorithm, which achieves a better trade-off between power consumption and task execution latency than either the pure edge computing or the local computing mode.
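To make the approach concrete, the following is a minimal sketch of ε-greedy Q-learning applied to a binary offloading decision, in the spirit of the abstract. The cost model, state discretization (channel level, queue length, remaining CPU), and transition dynamics below are illustrative assumptions, not the paper's exact formulation; only the ε-greedy action selection and the standard Q-learning update follow directly from the described method.

```python
# Sketch: epsilon-greedy Q-learning for binary task offloading.
# Assumed state: discretized (channel, queue, cpu) triple.
# Action: 0 = compute locally, 1 = offload to the edge.
# Reward: negative weighted sum of an assumed power and latency cost.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = (0, 1)
W_POWER, W_DELAY = 0.5, 0.5             # weights of the two cost terms

Q = defaultdict(float)                  # Q[(state, action)] -> estimated value

def cost(state, action):
    """Hypothetical weighted cost of power consumption plus latency."""
    channel, queue, cpu = state
    if action == 1:  # offload: transmit power grows as the channel degrades
        power, delay = 2.0 / (channel + 1), queue * 0.5
    else:            # local: latency grows as local CPU resources shrink
        power, delay = 1.0, queue * 2.0 / (cpu + 1)
    return W_POWER * power + W_DELAY * delay

def step(state, action):
    """Toy environment transition: random channel, simple queue/CPU dynamics."""
    _, queue, cpu = state
    channel = random.randint(0, 3)                    # new channel condition
    queue = max(0, queue - 1) + random.randint(0, 1)  # serve one task, new arrival
    cpu = min(3, cpu + 1) if action == 1 else max(0, cpu - 1)
    return (channel, min(queue, 5), cpu), -cost(state, action)

def choose(state):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = (1, 2, 2)
for _ in range(50_000):
    action = choose(state)
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    # Standard Q-learning temporal-difference update.
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt
```

After training, acting greedily on Q yields an offloading policy that trades off power against latency per state, which is the behaviour the paper's simulations evaluate against the always-offload and always-local baselines.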