Representation and Reinforcement Learning for Task Scheduling in Edge Computing

2020 
Recently, many deep reinforcement learning (DRL)-based task scheduling algorithms have been applied in edge computing (EC) to reduce energy consumption. Unlike existing algorithms, which consider a fixed and small number of edge nodes (servers) and tasks, this paper proposes a representation model with a DRL-based algorithm that adapts to dynamic changes in nodes and tasks and mitigates the curse of dimensionality that arises in DRL at large scale. Specifically, 1) we apply representation learning models to describe the different nodes and tasks in EC, i.e., nodes and tasks are mapped to corresponding vector sub-spaces to reduce the dimensionality and store the vector space efficiently; 2) in the reduced space, a DRL-based algorithm learns the vector representations of nodes and tasks and makes scheduling decisions; 3) experiments are conducted on a real-world data set, and the results show that the proposed representation model with the DRL-based algorithm outperforms the baselines by 18.04% and 9.94% on average in terms of energy consumption and service level agreement violation (SLAV), respectively.
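To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how node and task IDs could be embedded into low-dimensional vector sub-spaces and fed to a DRL policy that picks an edge node for each incoming task. The network sizes, the energy-based reward, and the REINFORCE-style update are all illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_NODES, NUM_TASK_TYPES = 50, 200   # assumed problem scale
EMB_DIM = 16                          # reduced vector sub-space dimension (assumption)

class SchedulerPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Representation learning: map node/task IDs into low-dimensional vectors.
        self.node_emb = nn.Embedding(NUM_NODES, EMB_DIM)
        self.task_emb = nn.Embedding(NUM_TASK_TYPES, EMB_DIM)
        # Policy head: score every candidate node for the current task.
        self.fc = nn.Linear(2 * EMB_DIM, 1)

    def forward(self, task_id):
        task_vec = self.task_emb(task_id)                          # (EMB_DIM,)
        node_vecs = self.node_emb.weight                           # (NUM_NODES, EMB_DIM)
        pairs = torch.cat([node_vecs,
                           task_vec.expand(NUM_NODES, -1)], dim=-1)
        return self.fc(pairs).squeeze(-1)                          # logits over nodes

policy = SchedulerPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def energy_reward(node_id, task_id):
    """Placeholder reward: negative simulated energy cost (assumption)."""
    return -float(torch.rand(1))

# One REINFORCE-style update per scheduling decision (illustrative only).
for step in range(100):
    task_id = torch.randint(NUM_TASK_TYPES, (1,)).squeeze()
    logits = policy(task_id)
    dist = torch.distributions.Categorical(logits=logits)
    node = dist.sample()                      # scheduling decision: chosen edge node
    reward = energy_reward(node, task_id)
    loss = -dist.log_prob(node) * reward      # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the embeddings and the policy are trained jointly, so the learned node/task vectors are shaped directly by the scheduling reward; the paper's actual representation model and DRL algorithm may differ in architecture and training procedure.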