Reinforcement learning for joint radio resource management in LTE-UMTS scenarios

2011 
The limited availability of frequency bands and their capacity limitations, together with the constantly increasing demand for high-bit-rate services in wireless communication systems, require the use of smart radio resource management strategies to ensure that different services are provided with the required quality of service (QoS) and that the available radio resources are used efficiently. In addition, the evolution of technology toward higher spectral efficiency has led to the introduction of Orthogonal Frequency-Division Multiple Access (OFDMA) by 3GPP for use in future long-term evolution (LTE) systems. However, given the current penetration of legacy technologies such as Universal Mobile Telecommunications System (UMTS), operators will face some periods in which both Radio Access Technologies (RATs) coexist. In this context, Joint Radio Resource Management (JRRM) mechanisms are helpful because they enable complementarities between different RATs to be exploited and thus facilitate more efficient use of available radio resources. This paper proposes a novel dynamic JRRM algorithm for LTE-UMTS coexistence scenarios based on Reinforcement Learning (RL), which is considered to be a good candidate for achieving the desired degree of flexibility and adaptability in future reconfigurable networks. The proposed algorithm is evaluated in dynamic environments under different load conditions and is compared with various baseline solutions.
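The abstract does not detail the algorithm's mechanics, but the RL-based RAT-selection idea can be illustrated with a minimal sketch. The following is a hypothetical tabular Q-learning example, not the paper's actual method: states, the toy reward model, and all parameters (load levels, learning rate, exploration rate) are illustrative assumptions.

```python
import random

# Hypothetical sketch (not the paper's algorithm): tabular Q-learning for
# assigning each incoming session to a RAT (LTE or UMTS), given a coarse,
# discretized LTE cell-load state. Rewards are single-step, so gamma = 0.

RATS = ["LTE", "UMTS"]
LOAD_LEVELS = ["low", "medium", "high"]  # assumed discretized LTE load

def reward(rat, lte_load):
    """Toy reward: LTE offers higher bit rate, but a loaded LTE cell is
    penalized; UMTS is assumed lightly loaded with a constant payoff."""
    if rat == "LTE":
        penalty = {"low": 0.0, "medium": 0.3, "high": 0.8}[lte_load]
        return 1.0 - penalty
    return 0.6  # assumed constant UMTS payoff

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: (load state, RAT) -> estimated expected reward
    q = {(s, a): 0.0 for s in LOAD_LEVELS for a in RATS}
    for _ in range(episodes):
        load = rng.choice(LOAD_LEVELS)
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            rat = rng.choice(RATS)
        else:
            rat = max(RATS, key=lambda a: q[(load, a)])
        # incremental Q update toward the observed reward
        q[(load, rat)] += alpha * (reward(rat, load) - q[(load, rat)])
    return q

if __name__ == "__main__":
    q = train()
    for load in LOAD_LEVELS:
        best = max(RATS, key=lambda a: q[(load, a)])
        print(f"LTE load {load}: route new sessions to {best}")
```

Under these assumed rewards, the learned policy keeps traffic on LTE while its load is low and offloads to UMTS when LTE is heavily loaded, which mirrors the kind of RAT-selection behavior a JRRM controller is meant to learn.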