Q-Learning Aided Resource Allocation and Environment Recognition in LoRaWAN with CSMA/CA
2019
Mutual interference among wireless nodes is a critical issue in the Internet-of-Things (IoT) era because of dense node deployment. In a long range wide area network (LoRaWAN), one of the low power wide area (LPWA) standards, the large coverage area means that nodes may be unable to detect the ongoing communications of other nodes, which results in packet collisions. Packet collisions among LoRaWAN nodes significantly degrade network performance metrics such as the packet delivery rate (PDR). Furthermore, when a collision occurs, LoRaWAN nodes must retransmit their packets, draining their limited battery power. Mutual interference management among LoRaWAN nodes is therefore important from the perspectives of both network performance and network lifetime. However, because of the large network size, it is difficult to explicitly characterize the wireless channel environment around each LoRaWAN node, such as its relation to the other LoRaWAN nodes. In this paper, we therefore employ machine learning: the wireless environment around the LoRaWAN nodes is learned, and the acquired knowledge is used for resource allocation to improve PDR performance. In the proposed method, Q-learning is adopted in a LoRaWAN system, the weighted sum of the number of successfully received packets is treated as the Q-reward, and the gateway (GW) allocates resources so as to maximize this reward. Numerical results for LoRaWAN show that the proposed scheme improves the average PDR by about 20% compared to a random resource allocation scheme.
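To make the gateway-side Q-learning idea concrete, the following is a minimal Python sketch of epsilon-greedy Q-learning for channel allocation, where the reward is the weighted sum of successfully received packets as described above. The node count, channel count, per-node weights, learning hyperparameters, and the toy collision model are all assumptions for illustration only and are not taken from the paper.

```python
# Minimal, illustrative Q-learning sketch for gateway-side resource allocation.
# The collision rule, node/channel counts, weights, and hyperparameters are
# assumptions, not the paper's simulation parameters.
import random

NUM_NODES = 8          # assumed number of LoRaWAN end nodes
NUM_CHANNELS = 3       # assumed number of allocatable resources (channels)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
WEIGHTS = [1.0] * NUM_NODES   # assumed per-node weights for the Q-reward

# One Q-row per node: Q[node][channel] = estimated value of that allocation.
Q = [[0.0] * NUM_CHANNELS for _ in range(NUM_NODES)]

def choose_channel(node):
    """Epsilon-greedy channel selection for a single node."""
    if random.random() < EPSILON:
        return random.randrange(NUM_CHANNELS)
    return max(range(NUM_CHANNELS), key=lambda c: Q[node][c])

def simulate_round(allocation):
    """Toy collision model: a packet succeeds only if its node is the sole
    occupant of its channel in this round (an assumption, not the paper's model)."""
    success = [0] * NUM_NODES
    for node, ch in enumerate(allocation):
        if sum(1 for other in allocation if other == ch) == 1:
            success[node] = 1
    return success

for episode in range(5000):
    allocation = [choose_channel(n) for n in range(NUM_NODES)]
    success = simulate_round(allocation)
    # Q-reward: weighted sum of successfully received packets.
    reward = sum(w * s for w, s in zip(WEIGHTS, success))
    for node, ch in enumerate(allocation):
        best_next = max(Q[node])
        Q[node][ch] += ALPHA * (reward + GAMMA * best_next - Q[node][ch])

print("Learned allocation:",
      [max(range(NUM_CHANNELS), key=lambda c: Q[n][c]) for n in range(NUM_NODES)])
```

With the assumed parameters, the learned allocation tends to spread nodes across channels so that collisions, and hence retransmissions, are reduced; the actual paper evaluates a richer LoRaWAN setting with CSMA/CA.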