Deep Learning Based Resources Allocation for Internet-of-Things Deployment Underlaying Cellular Networks

2020 
Resource allocation (RA) is a challenging task in many fields and applications, including communications and computer networks. Conventional solutions to such problems usually incur significant time and memory costs, especially for massive networks such as Internet-of-Things (IoT) networks. In this paper, two RA deep network models are proposed for enabling a clustered underlay IoT deployment, where a group of IoT nodes upload information to a centralized gateway in their vicinity by reusing the communication channels of conventional cellular users. The RA problem is formulated as a two-dimensional matching problem, which can be expressed as a traditional linear sum assignment problem (LSAP). The two proposed models are based on the recurrent neural network (RNN). Specifically, we investigate the performance of two long short-term memory (LSTM) based architectures. The results show that the proposed techniques can serve as a replacement for the well-known Hungarian algorithm for solving LSAPs, owing to their ability to solve problems of different sizes with high accuracy and very fast execution time. Additionally, the results show that the obtained accuracy outperforms state-of-the-art deep network techniques.
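The abstract does not give the model details, but the setting it describes can be illustrated with a short sketch. Below is a minimal, hypothetical Python example contrasting the exact LSAP baseline (SciPy's `linear_sum_assignment`, a Hungarian-family solver) with an assumed LSTM architecture that reads the cost matrix one row per time step and emits a distribution over columns for each row. The class name `LSAPLSTM`, the hidden size, the row-by-row encoding, and the greedy decoding are all assumptions for illustration, not the authors' design.

```python
# Minimal sketch, not the paper's code: LSAP baseline plus a hypothetical
# LSTM model mapping each cost-matrix row to a column-assignment distribution.
import numpy as np
from scipy.optimize import linear_sum_assignment
import torch
import torch.nn as nn

n = 8                                    # assumed problem size (e.g., channels/users)
cost = np.random.rand(n, n)              # hypothetical channel-reuse cost matrix

# Exact baseline: Hungarian-family solver (O(n^3) worst case).
rows, cols = linear_sum_assignment(cost)
print("optimal cost:", cost[rows, cols].sum())

class LSAPLSTM(nn.Module):
    """Hypothetical LSTM assignment model: consumes the cost matrix one row
    per time step and outputs, for each row, logits over the n columns."""
    def __init__(self, n, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n)

    def forward(self, cost_matrix):       # shape: (batch, n, n)
        h, _ = self.lstm(cost_matrix)     # shape: (batch, n, hidden)
        return self.head(h)               # shape: (batch, n, n) column logits

model = LSAPLSTM(n)
logits = model(torch.tensor(cost, dtype=torch.float32).unsqueeze(0))
pred_cols = logits.argmax(dim=-1).squeeze(0)   # greedy: one column per row
print("predicted assignment:", pred_cols.tolist())
```

Note that an untrained model's per-row argmax can assign the same column twice; a practical decoder for this kind of learned LSAP solver would enforce the one-to-one matching constraint, for example by masking out already-selected columns at each step.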