Learning to make auto-scaling decisions with heterogeneous spot and on-demand instances via reinforcement learning

2022 
Designing auto-scaling frameworks that use spot and on-demand instances while accounting for their heterogeneity can help Software-as-a-Service (SaaS) providers deliver highly available services while achieving significant cost savings. However, designing such a framework is difficult due to the lack of prior knowledge of the cloud environment. In this work, we propose an algorithm called SpotRL to solve the auto-scaling problem using heterogeneous spot and on-demand instances. Reinforcement learning (RL) approaches have been shown to make effective decisions in highly dynamic environments, as they can learn step by step and find solutions without prior knowledge. SpotRL uses an RL-based approach to scale heterogeneous spot instances. Because the training speed of RL agents is generally slow in a complex cloud environment, we use a multi-agent approach that decomposes the task to help agents learn faster. To reduce the negative impact on service availability caused by agents' random exploration while interacting with the cloud environment, SpotRL uses a passive approach to scale heterogeneous on-demand instances. Our experimental results show that SpotRL can significantly reduce SaaS providers' deployment costs while maintaining high service availability.
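As a rough illustration of the kind of decision loop the abstract describes (not the paper's actual SpotRL algorithm), the sketch below shows a tabular Q-learning agent choosing whether to add, remove, or keep spot instances of a single hypothetical type. The state discretization, reward shaping, per-instance capacity, and toy workload are all assumptions introduced for illustration.

```python
# Minimal sketch, assuming a single spot-instance type and a toy environment.
# Not the paper's method: state features, reward, and capacity are hypothetical.
import random
from collections import defaultdict

ACTIONS = ["scale_up", "hold", "scale_down"]

class SpotScalingAgent:
    """Tabular Q-learning agent for scaling decisions."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy exploration, standard in Q-learning.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # One-step Q-learning update toward the TD target.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def toy_step(instances, load, action):
    """Hypothetical environment step: returns (new instance count, reward)."""
    if action == "scale_up":
        instances += 1
    elif action == "scale_down" and instances > 0:
        instances -= 1
    capacity = instances * 10              # assume 10 req/s per spot instance
    unserved = max(0, load - capacity)     # unserved load degrades availability
    cost = instances                       # proxy for spot-instance cost
    reward = -(5 * unserved + cost)        # trade off availability vs. cost
    return instances, reward

if __name__ == "__main__":
    agent, instances = SpotScalingAgent(), 1
    for _ in range(5000):
        load = random.choice([10, 30, 50])            # toy workload levels
        state = (load // 10, min(instances, 10))
        action = agent.act(state)
        instances, reward = toy_step(instances, load, action)
        next_state = (load // 10, min(instances, 10))
        agent.learn(state, action, reward, next_state)
```

In this toy setup the reward simply penalizes unserved load more heavily than instance cost; the paper's actual reward design, multi-agent decomposition, and passive on-demand scaling are not reflected here.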