A reinforcement learning-based approach for availability-aware service function chain placement in large-scale networks

2022 
By decoupling network service functions from custom-built hardware, network function virtualization is transforming network service provisioning, making large-scale telecommunications more flexible, scalable, and agile. Service function chaining often requires network traffic to flow through a specific sequence of virtual network functions (VNFs) to comply with network policies. The dependencies between VNFs in such a chain can significantly and negatively impact the availability of an end-to-end service function chain (SFC) when one or more VNFs malfunction or fail. While redundancy strategies can be used to reduce the impact of such failures, they can also increase operational expenditure and energy consumption. In this paper, we propose a solution for SFC placement based on reinforcement learning (RL) that takes into account SFC availability, operational costs, and energy consumption. The Cand-RL algorithm uses RL based on proximal policy optimization (PPO) to select a suitable candidate node and define the redundancy strategy needed to meet availability requirements. We compare Cand-RL against two greedy algorithms in a variety of simulated scenarios. The results show that Cand-RL outperforms the greedy algorithms, achieving a higher acceptance rate and a good balance between availability, placement cost, and energy consumption.
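The availability trade-off the abstract describes follows from the standard reliability model for a serial chain with parallel redundancy: an SFC is available only if every VNF in the sequence is available, and adding replicas of a VNF raises that VNF's effective availability at the cost of extra resources. A minimal sketch of this computation (the VNF availability values and replica counts below are illustrative, not taken from the paper):

```python
from math import prod

def sfc_availability(vnfs: list[tuple[float, int]]) -> float:
    """Availability of a serial SFC with per-VNF redundancy.

    vnfs: list of (availability, replicas) pairs, one per VNF in the chain.
    A VNF with r identical independent replicas fails only if all replicas
    fail, so its effective availability is 1 - (1 - a)**r; the end-to-end
    chain requires every VNF to be up, hence the product over the chain.
    """
    return prod(1.0 - (1.0 - a) ** r for a, r in vnfs)

# Illustrative 3-VNF chain: adding a second replica to the weakest VNF
# (a = 0.95) lifts its effective availability from 0.95 to 0.9975.
chain = [(0.99, 1), (0.95, 2), (0.999, 1)]
print(f"{sfc_availability(chain):.6f}")  # → 0.986537
```

This is the mechanism behind the cost/availability balance Cand-RL optimizes: each extra replica multiplies in a term closer to 1, but every replica consumes additional node capacity and energy.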