A Reinforcement Learning Framework for Optimizing Throughput in DOCSIS Networks

2021 
The capacity of a communication channel is bounded by the well-known Shannon-Hartley theorem, which relates the maximum achievable capacity to the channel bandwidth and the signal-to-noise ratio of the channel. The state of the art in pushing achievable capacity close to this theoretical limit centers on ever more efficient error-correction algorithms, combined with assigning the modulation and coding scheme best matched to the condition of the spectrum at any given point in time. In cable broadband networks, which operate under the DOCSIS protocol, a Profile Management Application (PMA) system uses telemetry collected from cable modems and cable modem termination systems (CMTSs) to dynamically assign DOCSIS profiles, each a combination of a Forward Error Correction (FEC) configuration, a Quadrature Amplitude Modulation (QAM) level, and other protocol-based settings. The objective of this dynamic assignment is twofold: maximizing capacity while keeping the uncorrectable error rate minimal. The current PMA implementation adopts a rule-based approach in which pre-defined thresholds govern profile adjustments. This approach, while proven successful, limits the opportunity to realize the optimal DOCSIS configurations that would bring system performance closer to the Shannon limit. Through a reinforcement learning (RL) implementation of PMA, it is possible to replace the pre-defined rules with a system that learns to select the optimal configuration at each decision point, based on past outcomes and expected future rewards. In this paper, we focus on designing an RL-based PMA system to manage DOCSIS 3.0 upstream configurations.
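For reference, the Shannon-Hartley theorem states the limit explicitly:

C = B \log_2 \left( 1 + \frac{S}{N} \right)

where C is the maximum achievable capacity in bits per second, B is the channel bandwidth in hertz, and S/N is the signal-to-noise ratio of the channel.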
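To make the decision loop concrete, here is a minimal sketch of one possible agent: an epsilon-greedy learner that scores each candidate upstream profile by observed throughput penalized by its uncorrectable error rate. The profile list, reward shape, penalty weight, and simulated telemetry are illustrative assumptions, not the paper's implementation.

import random

# Hypothetical candidate upstream profiles (illustrative, not from the paper).
# Higher QAM orders carry more bits per symbol but require a higher SNR.
PROFILES = [
    {"name": "QPSK",   "bits_per_symbol": 2},
    {"name": "16-QAM", "bits_per_symbol": 4},
    {"name": "64-QAM", "bits_per_symbol": 6},
]

def reward(throughput_mbps, uncorrectable_rate, penalty=100.0):
    # Assumed reward: capacity minus a penalty on uncorrectable FEC errors.
    return throughput_mbps - penalty * uncorrectable_rate

class EpsilonGreedyPMA:
    # Minimal epsilon-greedy agent: one profile choice per decision epoch.
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_actions
        self.values = [0.0] * n_actions  # running mean reward per profile

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return max(range(len(self.values)), key=self.values.__getitem__)  # exploit

    def update(self, action, r):
        self.counts[action] += 1
        # Incremental update of the running mean action value.
        self.values[action] += (r - self.values[action]) / self.counts[action]

agent = EpsilonGreedyPMA(len(PROFILES))
for epoch in range(1000):
    a = agent.select()
    # In a real PMA these observations come from CMTS/modem telemetry;
    # here they are crudely simulated to keep the example runnable.
    throughput = 10.0 * PROFILES[a]["bits_per_symbol"] + random.gauss(0.0, 2.0)
    err_rate = max(0.0, random.gauss(0.005 * PROFILES[a]["bits_per_symbol"], 0.01))
    agent.update(a, reward(throughput, err_rate))

print("best profile so far:", PROFILES[agent.values.index(max(agent.values))]["name"])

A production system would condition the choice on channel state (a contextual-bandit or full RL formulation) rather than treating profiles as context-free arms, but the explore/exploit and value-update structure is the same.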