Parallel Computing of Spatio-Temporal Model Based on Deep Reinforcement Learning

2021 
Parallelization plays an important role in accelerating deep learning model training and improving prediction accuracy. To reflect real application scenarios more faithfully, deep learning models are becoming deeper and more complex, but such models require far more computation than a common spatio-temporal model. To speed up training while preserving accuracy, this work optimizes a common spatio-temporal deep learning model from three aspects: data parallelism, model parallelism, and a gradient accumulation algorithm. First, the proposed data-parallel slicing algorithm balances the load across parallel GPUs. Second, the components of the deep spatio-temporal model are parallelized independently. Finally, this work proposes a gradient accumulation algorithm based on deep reinforcement learning. Two datasets (GeoLife and Chengdu Taxi) are used to train and evaluate multiple parallel modes, and the mode combining data parallelism with the gradient accumulation algorithm is selected. The experimental results show a substantial improvement over the original model.
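The abstract does not include code, but the winning configuration (data parallelism combined with gradient accumulation) follows a standard pattern. The sketch below is a minimal PyTorch-style illustration under assumed names (model, train_loader, accum_steps are all hypothetical); the paper's reinforcement-learning policy for choosing the accumulation step count is not reproduced here.

```python
import torch.nn as nn

def train_epoch(model, train_loader, optimizer, accum_steps=4):
    """Minimal sketch of gradient accumulation: gradients from
    `accum_steps` micro-batches are summed before one optimizer
    step, growing the effective batch size without extra memory.
    All identifiers here are illustrative, not the paper's code."""
    criterion = nn.MSELoss()
    model.train()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(train_loader):
        outputs = model(inputs)
        # Scale the loss so the accumulated gradient matches a
        # single large-batch step.
        loss = criterion(outputs, targets) / accum_steps
        loss.backward()  # gradients add up across micro-batches
        if (step + 1) % accum_steps == 0:
            optimizer.step()       # apply the accumulated gradient
            optimizer.zero_grad()  # reset for the next window
```

Wrapping the model with nn.DataParallel (or DistributedDataParallel) would add the data-parallel dimension on top of this loop; in the paper, the accumulation schedule is chosen by a deep reinforcement learning agent rather than fixed as in this sketch.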