An intelligent scheme for congestion control: When active queue management meets deep reinforcement learning

2021 
Abstract
With the explosive growth of data transmission demands, massive bursty traffic causes more frequent and severe network congestion. To absorb traffic bursts, network buffers are provisioned much larger than necessary; however, oversized buffers lead to the bufferbloat problem, which exacerbates congestion. Active Queue Management (AQM) was proposed to address this problem by cooperating with the Transmission Control Protocol (TCP) congestion control mechanism, but the dynamic AQM/TCP system is hard to model, and the parameters of conventional AQM schemes are difficult to tune for good performance across different congestion scenarios. In this paper, we study AQM from a new perspective by leveraging emerging Deep Reinforcement Learning (DRL). Our model-free approach, DRL-AQM, learns the best dropping policy much as humans learn skills: after being trained in a simple network scenario, it captures intricate patterns in the traffic and exploits them to improve performance across a wide variety of scenarios. The scheme has two phases, an offline training phase and a deployment phase; once trained, the model performs well in many scenarios without any parameter tuning. Experimental results show that (1) DRL-AQM outperforms conventional AQM algorithms in a variety of complex network scenarios and is robust and insensitive to network conditions; (2) DRL-AQM maintains persistently low buffer occupancy without over-dropping or damaging throughput; and (3) DRL-AQM adapts automatically and continuously to the dynamics of network links.
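The abstract does not specify the agent's state, action, or reward design, so the following is a minimal, hypothetical Python sketch of what the deployment-phase decision loop could look like, assuming the trained policy maps queue statistics (e.g., normalized queue length, dequeue rate, queuing delay) to a per-packet drop probability. The names `DRLAQMAgent` and `enqueue`, the feature choices, and the toy linear policy are all illustrative, not the paper's implementation.

```python
import math
import random

class DRLAQMAgent:
    """Toy stand-in for the trained deep policy network (hypothetical).

    In the paper's scheme the weights would come from the offline
    training phase; here they are just zero-initialized placeholders.
    """

    def __init__(self, n_features: int):
        self.weights = [0.0] * n_features
        self.bias = 0.0

    def drop_probability(self, state):
        # state: e.g. [queue_length / capacity, dequeue_rate, queuing_delay]
        z = sum(w * s for w, s in zip(self.weights, state)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability in (0, 1)

def enqueue(agent, state, packet_queue, packet, capacity):
    """Deployment phase: apply the learned dropping policy per packet."""
    if len(packet_queue) >= capacity:
        return False  # buffer full: tail drop regardless of policy
    if random.random() < agent.drop_probability(state):
        return False  # proactive drop signals congestion to TCP senders early
    packet_queue.append(packet)
    return True

# Example per-packet decision at the bottleneck queue (illustrative values).
agent = DRLAQMAgent(n_features=3)  # weights would be loaded after offline training
queue = []
accepted = enqueue(agent, [0.4, 0.9, 0.02], queue, packet="pkt-1", capacity=100)
```

The point of the sketch is the division of labor the abstract describes: all learning happens offline, so at deployment time the AQM reduces to a cheap per-packet forward pass with no manual parameter tuning.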