RL-OPRA: Reinforcement Learning for Online and Proactive Resource Allocation of Crowdsourced Live Videos
2020
Abstract
With the advancement of rich-media-generating devices, the proliferation of live Content Providers (CPs), and the availability of convenient internet access, crowdsourced live streaming services have witnessed unprecedented growth. To ensure a better Quality of Experience (QoE), higher availability, and lower costs, large live streaming CPs are migrating their services to geo-distributed cloud infrastructures. However, because of the dynamics of live broadcasting and the wide geo-distribution of viewers and broadcasters, it remains challenging to satisfy all requests with reasonable resources. To overcome this challenge, we introduce in this paper a prediction-driven approach that estimates the potential number of viewers near different cloud sites at the instant of broadcasting. This online and instant prediction of the distributed popularity distinguishes our work from previous efforts that provision constant resources or adjust their allocation only after the popularity of the content changes. Based on the derived predictions, we formulate an Integer Linear Program (ILP) to proactively and dynamically choose the right data centers, allocate the exact resources needed, and serve potential viewers, while minimizing the perceived delays. As solving the optimization is not practical for online serving, we propose a real-time approach based on Reinforcement Learning (RL), namely RL-OPRA, which adaptively learns to optimize the allocation and serving decisions by interacting with the network environment. Extensive simulations and comparisons with the ILP show that our RL-based approach achieves near-optimal results and outperforms heuristic-based approaches.
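For illustration only, here is a minimal sketch of how such a viewer-to-site allocation ILP is commonly written; the notation below is assumed and is not taken from the paper, whose actual formulation may include additional cost terms or constraints. Let $x_{ij} \in \{0,1\}$ indicate that the predicted viewers of region $i$ are served from cloud site $j$, $v_i$ the predicted number of viewers in region $i$, $d_{ij}$ the delay perceived by region $i$ when served from site $j$, and $C_j$ the serving capacity of site $j$:

$$\min_{x} \; \sum_{i}\sum_{j} v_i \, d_{ij} \, x_{ij} \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad \sum_{i} v_i \, x_{ij} \le C_j \;\; \forall j, \qquad x_{ij} \in \{0,1\}.$$

Under these assumptions, the objective minimizes the aggregate viewer-weighted delay, the first constraint assigns every region to exactly one site, and the second keeps each site within its capacity; an online RL policy such as RL-OPRA would approximate these decisions without re-solving the ILP at each broadcast instant.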